diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/AutoCAD for Mac M1 Free Download How to Avoid Viruses and Legal Issues.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/AutoCAD for Mac M1 Free Download How to Avoid Viruses and Legal Issues.md deleted file mode 100644 index f037c2161f5449c69ce526cc398be9a0c811f838..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/AutoCAD for Mac M1 Free Download How to Avoid Viruses and Legal Issues.md +++ /dev/null @@ -1,44 +0,0 @@ - -

How to Get AutoCAD for Mac M1 for Free

- -

AutoCAD is one of the most popular and powerful CAD programs in the world, used by architects, engineers, designers, and many other professionals. But what if you have a Mac M1 computer and you want to use AutoCAD without paying a subscription fee? Is there a way to get AutoCAD for Mac M1 for free? In this article, we will show you how to do that and what the pros and cons of using AutoCAD for Mac M1 for free are.

- -

What is AutoCAD for Mac M1?

- -

AutoCAD for Mac M1 is the latest version of AutoCAD software that is compatible with the Apple M1 chip, which powers the new Mac computers such as the MacBook Air, MacBook Pro, Mac mini, and iMac. The Apple M1 chip is a powerful processor that offers faster performance, longer battery life, and better graphics than the previous Intel-based Macs. AutoCAD for Mac M1 delivers the same functionality and features as the Windows version of AutoCAD, but with a native Mac interface and optimized performance for the M1 chip.

-

autocad mac m1 free


Download Zip - https://byltly.com/2uKwEW



- -

How to Get AutoCAD for Mac M1 for Free?

- -

There are several ways to get AutoCAD for Mac M1 for free, depending on your needs and preferences. Here are some of them:

- - - -

What are the Pros and Cons of Using AutoCAD for Mac M1 for Free?

- -

Using AutoCAD for Mac M1 for free has some pros and cons that you should consider before deciding whether it is worth it or not. Some of the pros are:

- - - -

Some of the cons are:

- - - -

Conclusion

- -

AutoCAD for Mac M1 is powerful CAD software that is compatible with the Apple M1 chip and offers fast performance, high-quality graphics, and a native Mac interface. You can get AutoCAD for Mac M1 for free by using one of the methods described above, but you should weigh the pros and cons before deciding whether it is right for you.

-

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Codigo De Activacion Robot Structural Analysis Professional 2013.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Codigo De Activacion Robot Structural Analysis Professional 2013.md deleted file mode 100644 index 9f00b4f3317d585ff8a037959d6a450c54cf1d6d..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Codigo De Activacion Robot Structural Analysis Professional 2013.md +++ /dev/null @@ -1,35 +0,0 @@ -
-

How to activate Robot Structural Analysis Professional 2013

-

Robot Structural Analysis Professional 2013 is an Autodesk program that lets you run advanced structural analyses and dynamic load simulations. To use it, you need to activate a valid license, which you obtain by purchasing the product or subscribing to an Autodesk plan.

-

Codigo De Activacion Robot Structural Analysis Professional 2013


Download Filehttps://byltly.com/2uKvFq



-

To activate Robot Structural Analysis Professional 2013, follow these steps:

-
    -
  1. Install the software on your computer by following the installer's instructions.
  2. Run the program and choose the "Activate" option on the start screen.
  3. Enter the serial number and product key you received when you bought the product or took out a subscription. The serial number has 12 digits and the product key has 5 characters. For example, the serial number might be 123-45678901 and the product key might be 547F1.
  4. Choose the activation method you prefer: over the Internet, by phone, or by email. If you choose the Internet option, you need an active connection and can simply follow the on-screen instructions. If you choose phone or email, you must contact Autodesk customer service and give them the request code generated by the program. The request code has 16 characters and is shown on the activation screen; for example, it might be A1B2-C3D4-E5F6-G7H8.
  5. Enter the activation code you receive back from Autodesk customer service. The activation code also has 16 characters and must be typed into the program to complete the activation; for example, it might be I9J0-K1L2-M3N4-O5P6.
  6. Enjoy the software and its features.
-

If you have any problems or questions about activation, you can check the Autodesk website or contact technical support.

- -

Robot Structural Analysis Professional 2013 offers many features for designing and analyzing structures of all kinds. Some of the most notable features are:

-

- -

Robot Structural Analysis Professional 2013 is updated regularly to give users the best performance and functionality. To upgrade to a newer version of the software, follow these steps:

-
    -
  1. Sign in to your Autodesk account and check whether you have an active subscription to the software or whether your license can be renewed.
  2. Download the latest version of the software from the Autodesk website or from the Autodesk application manager.
  3. Install the new version on your computer by following the installer's instructions.
  4. Activate the new version with the same serial number and product key you used for the previous version.
  5. Enjoy the new features and improvements.
-

If you have any problems or questions about updating, you can check the Autodesk website or contact technical support.

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Active Sky Next Fsx Crack Sp2 18 [HOT].md b/spaces/1gistliPinn/ChatGPT4/Examples/Active Sky Next Fsx Crack Sp2 18 [HOT].md deleted file mode 100644 index 8926221db75ec935a6907999d20c7a60e217d127..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Active Sky Next Fsx Crack Sp2 18 [HOT].md +++ /dev/null @@ -1,6 +0,0 @@ -

Active Sky Next Fsx Crack Sp2 18


Download Filehttps://imgfil.com/2uxYle



-
-Active Sky is a comprehensive weather simulation engine for FSX, P3D and now X-Plane desktop flight simulator platforms. Over 20 years of development, ...expected "by the end of the year"! Unlike solutions such as Aerosoft, OASIS, etc., Active Sky not only provides modeling tools, but also provides “full integration” (full modeling) with weather data, including visual information, maps, etc. “Currently, in our opinion, Active Sky is the most powerful and reliable weather package in the world,” says Michael Schmitt, Marketing Director of Active Sky.
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Bcm92045nmd Driver Download [REPACK].md b/spaces/1gistliPinn/ChatGPT4/Examples/Bcm92045nmd Driver Download [REPACK].md deleted file mode 100644 index 637c3e3c76d79ec70c487c6800738311398d911b..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Bcm92045nmd Driver Download [REPACK].md +++ /dev/null @@ -1,168 +0,0 @@ -

-

If you want to learn more about the BCM92045NMD driver and other Bluetooth drivers, you can visit the official websites of your laptop manufacturer, Broadcom, or other reliable sources. You can also check out some online forums, blogs, or videos that offer more tips and tricks on how to use and optimize your Bluetooth module. You can also share your own experiences and feedback with other users who have the same device as you. By doing so, you can improve your knowledge and skills on Bluetooth technology, and enjoy its benefits more.

-

bcm92045nmd driver download


Download Zip ····· https://imgfil.com/2uy0oa



-

If you have any problems or issues with the BCM92045NMD driver or your Bluetooth module, you can contact the customer support or technical support of your laptop manufacturer, Broadcom, or the driver updater tool that you used. They can provide you with more assistance and guidance on how to solve your problems or issues. You can also check if there are any FAQs or troubleshooting guides available on their websites that can help you with your questions or concerns. By doing so, you can get more professional and reliable help on your Bluetooth module and driver.

-

How to Uninstall the BCM92045NMD Driver

-

If you want to uninstall the BCM92045NMD driver from your laptop, you can do so by using a device manager or a driver uninstaller tool. Here are the steps to uninstall the BCM92045NMD driver using a device manager:

-
    -
  1. Press Windows + X keys on your keyboard and select Device Manager from the menu.
  2. -
  3. Expand the Bluetooth category and find your BCM92045NMD (BRCM1018) device.
  4. -
  5. Right-click on it and select Uninstall Device from the menu.
  6. -
  7. Check the box that says Delete the driver software for this device and click on Uninstall.
  8. -
  9. Restart your laptop when prompted.
  10. -
-
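If you are comfortable with a command line, the same lookup can be scripted instead of clicked through. The snippet below is only a rough sketch, not an official tool: it assumes Windows 10/11, where the built-in pnputil.exe is available, and that the Broadcom Bluetooth package mentions "Broadcom" and "Bluetooth" in its fields. It only prints the matching oemNN.inf packages; removing the right one is then a manual pnputil /delete-driver call.

```python
# Minimal sketch, not an official tool: assumes Windows 10/11 with the
# built-in pnputil.exe, and that the Broadcom Bluetooth package mentions
# "Broadcom" and "Bluetooth" in its provider/class fields.
import subprocess

def find_broadcom_bluetooth_packages() -> None:
    output = subprocess.run(
        ["pnputil", "/enum-drivers"],
        capture_output=True, text=True, check=True,
    ).stdout
    block = []
    for line in output.splitlines() + [""]:
        if line.strip():
            block.append(line.strip())
            continue
        joined = " ".join(block)
        if "Broadcom" in joined and "Bluetooth" in joined:
            # Each block includes a "Published Name: oemNN.inf" line that you
            # can pass to "pnputil /delete-driver oemNN.inf /uninstall".
            print("\n".join(block), "\n")
        block = []

if __name__ == "__main__":
    find_broadcom_bluetooth_packages()
```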

If you want to use a driver uninstaller tool instead of a device manager, you can download and install a reputable tool such as IObit Uninstaller, Revo Uninstaller, or Geek Uninstaller. These tools can scan your laptop for unwanted drivers and uninstall them completely with one click. Here are -the steps to uninstall the BCM92045NMD driver using a driver uninstaller tool:

-
    -
  1. Download and install a driver uninstaller tool of your choice from its official website.
  2. -
  3. Launch the tool and click on drivers or tools section of the tool.
  4. -
  5. Find your BCM92045NMD (BRCM1018) device and click on uninstall or remove button next to it.
  6. -
  7. Wait for the tool to uninstall the BCM92045NMD driver from your device.
  8. -
  9. Restart your laptop when prompted.
  10. -
- -

How to Backup and Restore the BCM92045NMD Driver

- -

If you want to backup and restore the BCM92045NMD driver on your laptop, you can do so by using a driver backup and restore tool. This can help you in case you need to reinstall or update your driver, or if you encounter any problems or issues with your driver. Here are -the steps to backup and restore the BCM92045NMD driver using a driver backup and restore tool:

- -
    - -
  1. Download and install a driver backup and restore tool of your choice from its official website. Some of the popular tools are Driver Magician, DriverMax, and Driver Genius.
  2. - -
  3. Launch the tool and click on backup or export section of the tool.
  4. - -
  5. Select your BCM92045NMD (BRCM1018) device and click on backup or export button next to it.
  6. - -
  7. Choose a location and a name for your backup file and click on save or ok.
  8. - -
  9. Wait for the tool to backup the BCM92045NMD driver on your computer.
  10. - -
  11. To restore the BCM92045NMD driver, launch the tool again and click on restore or import section of the tool.
  12. - -
  13. Select your backup file and click on restore or import button next to it.
  14. - -
  15. Wait for the tool to restore the BCM92045NMD driver on your device.
  16. - -
  17. Restart your laptop when prompted.
  18. - -
- -
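If you would rather not install a third-party backup tool, recent versions of Windows can export installed driver packages on their own. The following is just a sketch of that route, assuming Windows 10/11 where pnputil supports the /export-driver switch; the exported folder can later be pointed at from Device Manager's "Update driver" dialog, or re-added with pnputil /add-driver.

```python
# Minimal sketch, assuming Windows 10/11 where pnputil supports /export-driver.
# It copies every third-party driver package (including the Bluetooth one)
# into a backup folder; restore later via Device Manager > Update driver,
# or with: pnputil /add-driver <folder>\*.inf /subdirs /install
import subprocess
from pathlib import Path

def backup_drivers(target_dir: str = r"C:\DriverBackup") -> None:
    Path(target_dir).mkdir(parents=True, exist_ok=True)
    # "*" exports all packages; pass a specific oemNN.inf (see
    # "pnputil /enum-drivers") to back up only the Bluetooth driver.
    subprocess.run(["pnputil", "/export-driver", "*", target_dir], check=True)

if __name__ == "__main__":
    backup_drivers()
```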

How to Fix Some Common Errors with BCM92045NMD Driver

- -

Sometimes, even after installing, updating, or uninstalling your BCM92045NMD driver, you may still encounter some errors with your Bluetooth module, such as code 10, code 43, code 52, or code 28 errors. These errors indicate that there is something wrong with your device or driver, and may prevent you from using your Bluetooth module properly. Here are some of -the common errors with BCM92045NMD driver and how to fix them:

- - -

If you want to enhance your Bluetooth experience and enjoy more features and functions with your Bluetooth module, you can also download and install some Bluetooth software or applications that are compatible with your device and driver. Some of the popular Bluetooth software or applications are Bluetooth File Transfer, Bluetooth Driver Installer, Bluetooth View, and Bluetooth Remote Control. These software or applications can help you to transfer files, install drivers, monitor devices, and control devices using your Bluetooth module. You can find these software or applications on various websites or online stores, such as Google Play, Microsoft Store, or CNET. However, make sure that you download and install them from reliable and safe sources, and that you check their reviews and ratings before using them.

-

If you have any feedback or suggestions on the BCM92045NMD driver or your Bluetooth module, you can also contact the customer service or technical support of your laptop manufacturer, Broadcom, or the driver updater tool that you used. They can provide you with more information and guidance on how to improve your Bluetooth module and driver. You can also check if there are any surveys or feedback forms available on their websites that can help you to share your opinions and experiences with them. By doing so, you can help them to improve their products and services, and also get some rewards or discounts for your participation.

-

How to Connect Your Bluetooth Device to Your Laptop Using BCM92045NMD Driver

-

Once you have installed or updated your BCM92045NMD driver on your laptop, you can connect your Bluetooth device to your laptop using the driver. This can help you to use your Bluetooth device with your laptop, such as listening to music, making calls, typing, or gaming. Here are the steps to connect your Bluetooth device to your laptop using BCM92045NMD driver:

-

-
    -
  1. Turn on your Bluetooth device and make sure that it is in pairing mode. You can check the manual or the website of your device for instructions on how to do so.
  2. -
  3. Press Windows + I keys on your keyboard and select Devices from the settings menu.
  4. -
  5. Click on Bluetooth & other devices from the left pane.
  6. -
  7. Click on Add Bluetooth or other device from the right pane.
  8. -
  9. Select Bluetooth from the list of options.
  10. -
  11. Wait for Windows to scan for and display a list of available devices.
  12. -
  13. Find your Bluetooth device and click on it.
  14. -
  15. If prompted, enter a PIN code or confirm a pairing request on your device and on your laptop.
  16. -
  17. Wait for Windows to connect your device to your laptop.
  18. -
  19. You can now use your Bluetooth device with your laptop.
  20. -
- -

How to Check the Status and Information of Your BCM92045NMD Driver

- -

If you want to check the status and information of your BCM92045NMD driver on your laptop, you can do so by using a device manager or a driver information tool. This can help you to see if your driver is working properly, and what version and date it has. Here are -the steps to check the status and information of your BCM92045NMD driver using a device manager:

- -
    - -
  1. Press Windows + X keys on your keyboard and select Device Manager from the menu.
  2. - -
  3. Expand the Bluetooth category and find your BCM92045NMD (BRCM1018) device.
  4. - -
  5. Right-click on it and select Properties from the menu.
  6. - -
  7. Click on the General tab to see the status and description of your device.
  8. - -
  9. Click on the Driver tab to see the driver provider, date, version, and digital signer of your driver.
  10. - -
  11. You can also click on Update Driver, Roll Back Driver, Disable Device, or Uninstall Device buttons to perform different actions on your driver.
  12. - -
- -
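As a command-line alternative to the Device Manager steps above, Windows also ships a driverquery tool that can dump much of the same information. The sketch below is only an example under those assumptions (stock Windows with driverquery.exe, and a Bluetooth entry that mentions "bluetooth" or "bcm" in its name):

```python
# Minimal sketch: uses the stock Windows "driverquery" tool to list installed
# drivers as CSV, then prints the rows that look Bluetooth/Broadcom related.
import csv
import io
import subprocess

def show_bluetooth_driver_info() -> None:
    output = subprocess.run(
        ["driverquery", "/v", "/fo", "csv"],
        capture_output=True, text=True, check=True,
    ).stdout
    for row in csv.DictReader(io.StringIO(output)):
        joined = " ".join(str(value) for value in row.values()).lower()
        if "bluetooth" in joined or "bcm" in joined:
            # Column names such as "Module Name" and "Display Name" come from
            # driverquery's own CSV header.
            print(row.get("Module Name"), "|", row.get("Display Name"))

if __name__ == "__main__":
    show_bluetooth_driver_info()
```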

If you want to use a driver information tool instead of a device manager, you can download and install a reputable tool such as DriverView, DriverEasy, or DriverIdentifier. These tools can scan your laptop for all drivers and display detailed information about them. Here are -the steps to check the status and information of your BCM92045NMD driver using a driver information tool:

- -
    - -
  1. Download and install a driver information tool of your choice from its official website.
  2. - -
  3. Launch the tool and click on scan or view button.
  4. - -
  5. Wait for the tool to scan your laptop for all drivers and display a list of them.
  6. - -
  7. Find your BCM92045NMD (BRCM1018) device and click on it.
  8. - -
  9. You can see various information about your driver, such as name, description, version, date, manufacturer, location, file name, size, type, status, and more.
  10. - -
  11. You can also click on different buttons or links to perform different actions on your driver, such as update, backup, restore, uninstall, or export.
  12. - -
-

Conclusion

-

The BCM92045NMD (BRCM1018) is a Bluetooth module that is installed in many older HP and Lenovo laptops, and allows you to connect your laptop to other Bluetooth devices. However, you may need to download and install the latest BCM92045NMD driver for your laptop, and update it regularly, to ensure that your Bluetooth module works properly and efficiently. In this article, we showed you how to download, install, update, uninstall, backup, restore, connect, and troubleshoot the BCM92045NMD driver for your laptop, using different methods and tools. We hope that this article was helpful for you, and that you were able to fix any issues with your Bluetooth module and driver. If you have any questions or comments, please feel free to leave them below.

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/English Vinglish Full Movie Download WorldHigh Quality Free4u 23.md b/spaces/1gistliPinn/ChatGPT4/Examples/English Vinglish Full Movie Download WorldHigh Quality Free4u 23.md deleted file mode 100644 index cfd9079ff40ef2604f753b4c7246534b111fa2c5..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/English Vinglish Full Movie Download WorldHigh Quality Free4u 23.md +++ /dev/null @@ -1,6 +0,0 @@ -

English Vinglish Full Movie Download Worldfree4u 23


Download 🔗 https://imgfil.com/2uxYIO



- -Salaam Namaste Hindi Movie Online - Saif Ali Khan, Preity Zinta, Arshad Warsi and ... Amazon.com: Fanaa Bollywood DVD With English Subtitles: Aamir Khan, Kajol, Yash Chopra, Kunal ... English Vinglish - really lovely, heartwarming movie. 1fdad05405
-
-
-

diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Among Us The Most Popular Game of 2023 Now Available for Android 5.1.1.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Among Us The Most Popular Game of 2023 Now Available for Android 5.1.1.md deleted file mode 100644 index 2b78121d79d675e15360a0eceb1e68f9d0bea4d7..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Among Us The Most Popular Game of 2023 Now Available for Android 5.1.1.md +++ /dev/null @@ -1,150 +0,0 @@ - -

Download Among Us Android 5.1.1: How to Play the Popular Game on Your Phone

-

Among Us is one of the most popular games of 2022 and 2023, with millions of players around the world enjoying its thrilling and hilarious gameplay. If you want to join the fun, you can download Among Us Android 5.1.1 on your phone and play it anytime, anywhere. In this article, we will show you how to download and install the game, how to play it, and some tips and tricks to make your experience even better.

-

What is Among Us?

-

A brief introduction to the game and its features

-

Among Us is a multiplayer game that can be played online or over local WiFi with 4-15 players. The game is set in a spaceship, where you can choose to be either a crewmate or an impostor. As a crewmate, your goal is to complete tasks around the ship and find out who the impostor is before they kill everyone. As an impostor, your goal is to kill crewmates, sabotage the ship, and avoid being caught.

-

download among us android 5.1.1


Download ===== https://urlin.us/2uSZb4



-

The game has different modes, maps, roles, and settings that you can customize according to your preferences. You can also change your character's appearance, name, color, hat, pet, and skin. The game is fun, easy to play, and suitable for all ages.

-

Why is it so popular?

-

Among Us became a viral sensation in late 2022, thanks to its unique gameplay, social interaction, and meme potential. The game is highly addictive, as each round is different and unpredictable. You never know who the impostor is, who you can trust, or what will happen next.

-

The game is also very entertaining, as you can chat with other players using text or voice, accuse each other of being the impostor, lie, bluff, joke, or cooperate. The game can create hilarious moments, tense situations, and dramatic twists that will keep you hooked.

-

The game is also very accessible, as it can be played on various devices, such as PC, iOS, Android, Nintendo Switch, Xbox One, PlayStation 4/5 etc., with cross-platform compatibility. You can play with your friends or strangers from anywhere in the world.

-

How to download Among Us Android 5.1.1

-

Requirements and compatibility

-

To download Among Us Android 5.1.1 on your phone, you need to have an Android device that runs on Android 5.0 or higher (Lollipop) and has at least 250 MB of free storage space. The game is compatible with most Android devices that meet these requirements.

-

Steps to download and install the game from Google Play Store

-

The easiest way to download Among Us Android 5.1.1 on your phone is to use the Google Play Store app on your device. Here are the steps:

-
    -
  1. Open the Google Play Store app on your phone.
  2. -
  3. Search for "Among Us" in the search bar.
  4. -
  5. Select the game from the results and tap on "Install".
  6. -
  7. Wait for the game to download and install on your phone.
  8. -
  9. Once the installation is complete, tap on "Open" to launch the game.
  10. -
-

Congratulations, you have successfully downloaded Among Us Android 5.1.1 on your phone. You can now enjoy playing the game with your friends or other players online.

-

Steps to download and install the game from APK file

-

If you cannot access the Google Play Store app on your phone, or if you want to download a different version of the game, you can use an APK file instead. An APK file is a package file that contains the installation files of an Android app. Here are the steps:

-


-
    -
  1. Go to a trusted website that provides APK files for Android apps, such as APKPure, APKMirror, or Uptodown.
  2. -
  3. Search for "Among Us" in the website's search bar.
  4. -
  5. Select the version of the game that you want to download, such as Among Us Android 5.1.1.
  6. -
  7. Tap on "Download APK" and wait for the file to download on your phone.
  8. -
  9. Before installing the APK file, you need to enable "Unknown sources" on your phone's settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown sources and toggle it on.
  10. -
  11. Locate the downloaded APK file on your phone's file manager and tap on it.
  12. -
  13. Follow the instructions on the screen to install the game on your phone.
  14. -
  15. Once the installation is complete, tap on "Open" to launch the game.
  16. -
-

Congratulations, you have successfully downloaded Among Us Android 5.1.1 on your phone using an APK file. You can now enjoy playing the game with your friends or other players online.
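If you have a computer nearby, there is also a command-line route: instead of opening the APK on the phone itself, you can push it over USB with Android's adb tool. The snippet below is only a sketch under stated assumptions: adb (from the Android platform-tools) is on your PATH, USB debugging is enabled on the phone, and "among-us.apk" stands in for whatever file you actually downloaded.

```python
# Minimal sketch: installs a downloaded APK over USB with adb.
# Assumes adb (Android platform-tools) is on PATH, USB debugging is enabled,
# and "among-us.apk" is the file you downloaded from the trusted source.
import subprocess
import sys

def install_apk(apk_path: str) -> None:
    # -r replaces the app if an older version is already installed
    result = subprocess.run(
        ["adb", "install", "-r", apk_path],
        capture_output=True, text=True,
    )
    if "Success" in result.stdout:
        print("Installed", apk_path)
    else:
        print("Install failed:", result.stdout.strip(), result.stderr.strip())

if __name__ == "__main__":
    install_apk(sys.argv[1] if len(sys.argv) > 1 else "among-us.apk")
```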

-

How to play Among Us on Android 5.1.1

-

How to join or host a game online or over local WiFi

-

To play Among Us on Android 5.1.1, you need to join or host a game online or over local WiFi. Here are the steps:

-
    -
  1. Launch the game on your phone and tap on "Online" or "Local".
  2. -
  3. If you want to join a game online, you can either enter a code from a friend or browse public games available in different regions and modes. Tap on a game that suits your preferences and wait for it to start.
  4. -
  5. If you want to host a game online, you can either create a private game or a public game. Tap on "Create Game" and choose a map, mode, number of players, and other settings. You can also invite your friends by sharing your code with them. Tap on "Start" when you are ready to begin.
  6. -
  7. If you want to join or host a game over local WiFi, you need to be connected to the same WiFi network as other players. Tap on "Join Game" or "Create Game" and follow the same steps as above.
  8. -
-

You have successfully joined or hosted a game online or over local WiFi. You can now play Among Us with other players as a crewmate or an impostor.

-

How to customize your character and settings

-

To make your gameplay more fun and personalized, you can customize your character and settings in Among Us Android 5.1.1. Here are the steps:

-
    -
  1. To customize your character, tap on the laptop icon in the lobby or in-game menu. You can change your name, color, hat, pet, and skin by tapping on the options available. You can also buy more items from the shop using real money or watching ads.
  2. -
  3. To customize your settings, tap on the gear icon in the main menu or in-game menu. You can change your language, sound effects, music volume, chat type (free chat or quick chat), censor chat (on or off), confirm ejects (on or off), and other options by tapping on them.
  4. -
-

You have successfully customized your character and settings in Among Us Android 5.1.1. You can now play the game with more style and comfort.

-

How to communicate with other players using chat or voice

-

To communicate with other players in Among Us Android 5.1.1, you can use chat or voice features in the game. Here are the steps:

-
    -
  1. To use chat, tap on the chat icon in the lobby or in-game menu. You can type messages using free chat or select pre defined phrases using quick chat. You can also use emojis and stickers to express yourself. You can chat with everyone or only with your team, depending on the game mode and situation.
  2. -
  3. To use voice, you need to use a third-party app, such as Discord, Zoom, or Skype, to create or join a voice call with other players. You can also use the in-game voice chat feature, which is currently in beta testing and may not work properly. To use the in-game voice chat, tap on the microphone icon in the lobby or in-game menu and grant permission to access your microphone. You can mute or unmute yourself or other players by tapping on their icons.
  4. -
-

You have successfully communicated with other players using chat or voice in Among Us Android 5.1.1. You can now talk, strategize, accuse, lie, or joke with other players during the game.

-

How to complete tasks or sabotage as a crewmate or impostor

-

To play your role as a crewmate or impostor in Among Us Android 5.1.1, you need to complete tasks or sabotage the ship. Here are the steps:

-
    -
  1. To complete tasks as a crewmate, tap on the map icon in the upper right corner of the screen. You will see a list of tasks that you need to do and their locations on the map. You can also see yellow exclamation marks on the map that indicate where your tasks are. Tap on the map to close it and go to the task locations. Tap on the task icon to start the task and follow the instructions on the screen to finish it. Some tasks are simple and quick, while others are complex and long. Some tasks are also common or visual, which means that other players can see you doing them or verify that you have done them.
  2. -
  3. To sabotage as an impostor, tap on the sabotage icon in the lower right corner of the screen. You will see a map of the ship with different icons that represent different sabotage options. You can sabotage doors, lights, communications, oxygen, reactor, or electrical by tapping on their icons. Some sabotages require you to be near them, while others can be done from anywhere. Some sabotages also require two impostors to coordinate, while others can be done by one impostor alone. Sabotaging can help you kill crewmates, create chaos, divert attention, or win the game.
  4. -
-

You have successfully completed tasks or sabotaged as a crewmate or impostor in Among Us Android 5.1.1. You can now play your role effectively and help your team win the game.

-

Tips and tricks for playing Among Us on Android 5.1.1

-

How to use maps, vents, cameras, and other features

-

To improve your gameplay and skills in Among Us Android 5.1.1, you can use maps, vents, cameras, and other features in the game. Here are some tips:

- -

You have successfully used maps, vents, cameras, and other features in Among Us Android 5.1.1. You can now play smarter and more strategically in the game.

-

How to spot and vote out the impostor

-

To win as a crewmate in Among Us Android 5.1.1 , you need to spot and vote out the impostor before they kill everyone. Here are some tips:

- -

You have successfully spotted and voted out the impostor in Among Us Android 5.1.1. You can now win as a crewmate and save the ship.

-

How to deceive and kill as the impostor

-

To win as an impostor in Among Us Android 5.1.1, you need to deceive and kill the crewmates before they complete their tasks or find you out. Here are some tips:

- -

You have successfully deceived and killed as the impostor in Among Us Android 5.1.1. You can now win as an impostor and destroy the ship.

-

Conclusion

-

In conclusion, Among Us Android 5.1.1 is a fun and exciting game that you can download and play on your phone with your friends or other players online. The game is easy to play, but challenging to master, as you need to use your skills, strategies, and creativity to play your role as a crewmate or an impostor. The game is also very entertaining, as you can chat, joke, accuse, lie, or cooperate with other players during the game. The game is also very customizable, as you can change your character's appearance, settings, modes, maps, roles, etc., according to your preferences. The game is also very accessible, as it can be played on various devices with cross-platform compatibility.

-

If you want to download Among Us Android 5.1.1 on your phone and play it anytime, anywhere, you can follow the steps in this article to download and install the game from Google Play Store or APK file. You can also follow the steps in this article to play the game, communicate with other players, complete tasks or sabotage as a crewmate or impostor, and use maps, vents, cameras, and other features in the game. You can also use the tips and tricks in this article to improve your gameplay and skills in Among Us Android 5.1.1.

-

We hope you enjoyed this article and found it helpful and informative. If you have any questions or feedback about Among Us Android 5.1.1 or this article, please feel free to leave a comment below. Thank you for reading and happy gaming!

-

FAQs

-

Q: How much does Among Us Android 5.1.1 cost?

-

A: Among Us Android 5.1.1 is free to download and play on your phone from Google Play Store or APK file. However, you can also buy some items from the shop using real money or watching ads.

-

Q: Is Among Us Android 5.1.1 safe to download and play?

-

A: Yes, Among Us Android 5.1.1 is safe to download and play on your phone if you use a trusted source such as Google Play Store or a reputable website that provides APK files for Android apps.

-

Q: Can I play Among Us Android 5.1.1 offline?

-

A: No, you cannot play Among Us Android 5.1.1 offline on your phone. You need an internet connection to play online or over local WiFi with other players.

-

Q: Can I play Among Us Android 5.1.1 with PC or iOS players?

-

A: Yes, you can play Among Us Android 5.1.1 with PC or iOS players, as the game has cross-platform compatibility. You just need to join or host a game online using the same code or region as them.

-

Q: How can I update Among Us Android 5.1.1 to the latest version?

-

A: To update Among Us Android 5.1.1 to the latest version, you need to check for updates on Google Play Store or the website that provides APK files for Android apps. You can also enable automatic updates on your phone's settings to get the latest version of the game as soon as it is available.

-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/DRIFT SPIRITS MOD APK Everything You Need to Know.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/DRIFT SPIRITS MOD APK Everything You Need to Know.md deleted file mode 100644 index 9368a5a336c25ae1384faf3521956f331d2a1289..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/DRIFT SPIRITS MOD APK Everything You Need to Know.md +++ /dev/null @@ -1,116 +0,0 @@ - -

Drift Spirits Mod APK: Enjoy the Ultimate Drifting Experience

-

Do you love drifting games? Do you want to feel the thrill of sliding your car around corners at high speeds? If yes, then you should try Drift Spirits, one of the best drifting games for Android devices. And if you want to make your drifting experience even more exciting, you should download Drift Spirits Mod APK, a modified version of the game that gives you unlimited money, gold, cars, and more. In this article, we will tell you everything you need to know about Drift Spirits and Drift Spirits Mod APK, including their features, benefits, and how to download and install them on your device.

-

What is Drift Spirits?

-

Drift Spirits is a racing simulator game developed by Bandai Namco Entertainment, a famous Japanese game company. The game is dedicated to drifting, a driving technique that involves oversteering your car to make it slide sideways. The game features realistic graphics and physics, over 100 cars from various manufacturers, different modes and challenges, online multiplayer and leaderboards, and more. You can customize your car with various parts, paint jobs, stickers, and decals. You can also compete with other players from around the world in online battles and events. The game is free to play but contains in-app purchases.

-

drift spirits mod apk


Download Zip ✶✶✶ https://urlin.us/2uSZKS



-

Features of Drift Spirits

-

- Realistic graphics and physics

-

The game boasts stunning 3D graphics that make you feel like you are driving a real car on a real track. The game also uses a sophisticated physics engine that simulates the behavior of the car based on its speed, weight, traction, suspension, tires, etc. You can see the smoke, sparks, dust, and skid marks as you drift your car. You can also hear the engine sound, tire screech, collision noise, etc.

-

- Over 100 cars to choose from

-

The game offers a huge collection of cars from various brands such as Toyota, Nissan, Honda, Mazda, Subaru, Mitsubishi, BMW, Mercedes-Benz, Ferrari, Lamborghini, etc. You can find classic cars like AE86 Trueno, Skyline GT-R R34, RX-7 FD3S, etc., as well as modern cars like Supra GR A90, GT-R R35 Nismo Edition 2020 Model Year Spec V Package (N Attack Package), NSX Type R 2020 Model Year Spec V Package (N Attack Package), etc. You can also unlock special cars from anime series like Initial D.

-


-

- Various modes and challenges

-

The game has different modes and challenges for you to enjoy. You can play the Story Mode where you can follow the story of various characters and their drifting adventures. You can also play the Event Mode where you can participate in limited-time events and win rewards. You can also play the Battle Mode where you can challenge other players in online matches. You can also play the Time Attack Mode where you can try to beat your own or other players' records.

-

- Online multiplayer and leaderboards

-

The game allows you to compete with other players from around the world in online battles and events. You can also join a team or create your own and cooperate with other players. You can also chat with other players and send them stickers and emojis. The game has a ranking system that shows your position and performance in various categories. You can also view the replays of your or other players' drifts and learn from them.

-

What is Drift Spirits Mod APK?

-

Drift Spirits Mod APK is a modified version of the original Drift Spirits game that gives you some extra benefits and features that are not available in the official version. The mod APK is created by third-party developers who modify the game files and add some codes to unlock some features. The mod APK is not affiliated with or endorsed by Bandai Namco Entertainment or any of its partners.

-

Benefits of Drift Spirits Mod APK

-

- Unlimited money and gold

-

One of the main benefits of Drift Spirits Mod APK is that it gives you unlimited money and gold, which are the main currencies in the game. You can use them to buy new cars, upgrade your existing cars, customize your cars, buy parts, etc. You don't have to worry about running out of money or gold or spending real money to buy them.

-

- All cars unlocked and upgraded

-

Another benefit of Drift Spirits Mod APK is that it gives you access to all the cars in the game, including the special and rare ones. You don't have to complete any missions or events or spend any money or gold to unlock them. You can also upgrade your cars to the maximum level without any restrictions or costs. You can enjoy driving any car you want with the best performance and appearance.

-

- No ads and no root required

-

A third benefit of Drift Spirits Mod APK is that it removes all the ads from the game, which can be annoying and distracting. You can play the game without any interruptions or pop-ups. You also don't need to root your device to install the mod APK, which can be risky and complicated. You can install the mod APK on any Android device without any problems.

-

How to download and install Drift Spirits Mod APK?

-

If you are interested in downloading and installing Drift Spirits Mod APK on your device, you need to follow some simple steps. However, before you do that, you need to make sure that you have enough storage space on your device, a stable internet connection, and a compatible Android version (4.1 or higher). You also need to enable the installation of apps from unknown sources on your device settings. Here are the steps to download and install Drift Spirits Mod APK:

-

Steps to download and install Drift Spirits Mod APK

-

- Download the APK and OBB files from a trusted source

-

The first step is to download the APK and OBB files of Drift Spirits Mod APK from a reliable source. You can find many websites that offer these files, but you need to be careful as some of them may contain viruses or malware. You can use this link as an example, but you can also search for other sources if you want.

-

- Install the APKMODY Installer app from Google Play or APK file

-

The second step is to install the APKMODY Installer app on your device. This app is a tool that helps you install modded games with OBB files easily and safely. You can download it from Google Play Store or from this link if you prefer.

-

- Open the APKMODY Installer app and select Install APKs

-

The third step is to open the APKMODY Installer app on your device and select Install APKs from the menu. This will allow you to browse your device's storage and find the downloaded APK and OBB files of Drift Spirits Mod APK.

-

- Navigate to the location of the downloaded APK and OBB files and select them

-

The fourth step is to navigate to the location where you saved the downloaded APK and OBB files of Drift Spirits Mod APK on your device's storage. You can use any file manager app to do this. Once you find them, select them both and tap on Install.

-

- Wait for the installation to complete and launch the game

-

The fifth and final step is to wait for the installation process to finish. It may take a few minutes depending on your device's speed and performance. Once it is done, you can launch the game from your app drawer or home screen and enjoy it.

-

Conclusion

-

Drift Spirits is an amazing drifting game that lets you experience the thrill of sliding your car around corners at high speeds. It has realistic graphics and physics, over 100 cars to choose from, various modes and challenges, online multiplayer and leaderboards, and more. You can customize your car with various parts, paint jobs, stickers, and decals. You can also compete with other players from around the world in online battles and events. Drift Spirits Mod APK is a modified version of the original game that gives you unlimited money, gold, cars, and more. You can buy new cars, upgrade your existing cars, customize your cars, etc. without any restrictions or costs. You can also enjoy the game without any ads or root requirements. You can download and install Drift Spirits Mod APK on your device by following some simple steps. We hope you enjoyed this article and found it helpful. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy drifting!

-

FAQs

-

Here are some frequently asked questions about Drift Spirits and Drift Spirits Mod APK:

- - - - - - - - - - - - - - - - - - - - - - - - - -
| Question | Answer |
| --- | --- |
| Is Drift Spirits Mod APK safe to use? | Drift Spirits Mod APK is generally safe to use, as long as you download it from a trusted source and scan it with an antivirus app before installing it. However, there is always a risk of getting banned or losing your data when using modded games, so use it at your own discretion and responsibility. |
| Can I play Drift Spirits Mod APK offline? | No, you cannot play Drift Spirits Mod APK offline. The game requires an internet connection to run properly and access all the features. You also need to log in with your Bandai Namco ID or Facebook account to play the game. |
| Can I update Drift Spirits Mod APK? | No, you cannot update Drift Spirits Mod APK from the Google Play Store or the official website. If you want to update the game, you need to download and install the latest version of the mod APK from the same source where you got it. You may also need to uninstall the previous version of the mod APK before installing the new one. |
| Can I transfer my progress from Drift Spirits to Drift Spirits Mod APK or vice versa? | No, you cannot transfer your progress between the two versions. The mod APK uses a different server and database than the original game, so they are not compatible with each other. If you want to switch between the two versions of the game, you need to start from scratch. |
| Can I play Drift Spirits Mod APK with my friends who are using the original game? | No, you cannot. The mod APK uses a different server and database than the original game, so they are not compatible with each other. If you want to play with your friends, you need to use the same version of the game. |

-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Descubre el nuevo modo de juego de Stickman duelista supremo y reta a tus amigos en lnea.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Descubre el nuevo modo de juego de Stickman duelista supremo y reta a tus amigos en lnea.md deleted file mode 100644 index 581fe9cc75826bbbce3ba4b7e4628adde6d4a9c0..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Descubre el nuevo modo de juego de Stickman duelista supremo y reta a tus amigos en lnea.md +++ /dev/null @@ -1,197 +0,0 @@ -
-

Download Stickman Duelista Supremo: A Fun and Crazy Action Game

-

Do you like stick-figure action games? Do you enjoy watching stickmen fight each other with different weapons and ragdoll physics? Are you looking for a game that will give you hours of fun and challenge? Then you will want to know about Stickman Duelista Supremo, a stickman fighting game that will keep you entertained.

-

descargar stickman duelista supremo


Downloadhttps://urlin.us/2uSZIs



-

In this article, we will tell you everything you need to know about this game: how to download it on your PC, what the users who have tried it think, what tips and tricks there are to play better, and what features it has. Keep reading and find out why Stickman Duelista Supremo is a game you shouldn't miss!

-

What is Stickman Duelista Supremo?

-

Stickman Duelista Supremo is an action game developed by Neron's Brother, an independent studio dedicated to making fun and crazy stickman games. The game was released in 2019 for Android devices and has since racked up more than 100 million downloads and an average rating of 4.6 stars on Google Play.

-

The game is based on the RHG (Rock Hard Gladiators) concept, which consists of fights between animated stick-figure characters. It uses ragdoll physics, which makes the characters' movements and reactions more realistic and comical at the same time.

-

A stickman fighting game with ragdoll physics

-

Ragdoll physics is a technique that simulates how human bodies behave when they are hit or fall to the ground. Instead of using predefined animations, ragdoll physics calculates the forces acting on each part of the body and makes it react according to gravity, inertia, and elasticity.

-

This makes the fights more dynamic and unpredictable, because you never know how your opponent, or you yourself, will react to a hit.

Ragdoll physics also makes the fights funnier and more comical, because you can watch the characters twist, stretch, bend, and collapse in hilarious ways. You can also see how the weapons and the objects in the arena interact with the characters and affect them in different ways.

-

A game with different modes, maps, and weapons

-

Stickman Duelista Supremo offers a wide variety of options so you can enjoy stickman fights your way. You can choose between different game modes, such as campaign mode, survival mode, multiplayer mode, and custom mode. Each mode has its own rules, objectives, and rewards.

-

You can also choose between different maps, each with its own scenery, obstacles, and traps. Some maps are large and open, while others are small and enclosed. Some maps have interactive elements, such as buttons, levers, or explosives, that can change the course of the battle.

-

Of course, we can't forget the weapons, which are an essential part of the game. There are more than 50 different weapons you can use to attack your enemies, from swords, axes, and spears to pistols, shotguns, and rifles. Each weapon has its own characteristics, such as range, damage, speed, and rate of fire. Some weapons also have special effects, such as fire, ice, or electricity.

-


-

A game you can play alone or with friends

-

Another advantage of Stickman Duelista Supremo is that you can play it alone or with friends. If you want to play alone, you can face the game's AI, which has different difficulty levels and personalities. You can play campaign mode, where you have to complete different missions and challenges, or survival mode, where you have to hold out as long as possible against waves of enemies.

-

If you want to play with friends, you can do it in two ways: online or locally. If you play online, you can connect with other players from around the world and compete against them or cooperate with them. You can play multiplayer mode, where you can create or join game rooms with up to 6 players, or custom mode, where you can set your own rules and conditions.

-

If you play locally, you can use a single device or several devices connected over Wi-Fi or Bluetooth. You can play split-screen mode, where you share the screen with another player and each of you controls a character, or multi-controller mode, where you use several devices as controllers for the characters on a single screen.

-

How to download Stickman Duelista Supremo on PC

-

Although Stickman Duelista Supremo is designed for Android devices, it can also be played on a PC with the help of an emulator. An emulator is a program that lets you run Android applications on your computer. That way, you can enjoy the game on a bigger screen, with better sound and higher performance.

-

There are many Android emulators on the market, but we recommend three options: GameLoop, BlueStacks, and Google Play. Below we explain how to download Stickman Duelista Supremo on PC using each of them.

-

Using GameLoop, an Android emulator

-

GameLoop is an Android emulator developed by Tencent Games, a well-known Chinese company behind popular games such as PUBG Mobile and Call of Duty Mobile. GameLoop specializes in action games and offers a good gaming experience with smooth graphics and customizable controls.

-

To download Stickman Duelista Supremo on PC using GameLoop, follow these steps:

-
    -
  1. Download and install GameLoop from its official website.
  2. Run GameLoop and search for Stickman Duelista Supremo in the search bar.
  3. Select the game and click the install button.
  4. Wait for the download and installation to finish.
  5. Open the game and enjoy the action.

Usando BlueStacks, otro emulador de Android

-

BlueStacks es otro emulador de Android muy popular y usado por millones de usuarios. BlueStacks te permite jugar a cualquier juego o aplicación de Android en tu PC con una buena calidad y rendimiento. Además, tiene funciones adicionales como el modo multiventana, el mapeo de teclas o la grabación de pantalla.

-

Para descargar Stickman Duelista Supremo en PC usando BlueStacks, sigue estos pasos:

-
    -
  1. Descarga e instala BlueStacks desde su página web oficial
  2. -
  3. Ejecuta BlueStacks y accede a tu cuenta de Google
  4. -
  5. Busca Stickman Duelista Supremo en la tienda de Google Play que se encuentra dentro del emulador
  6. -
  7. Selecciona el juego y haz clic en el botón de instalar
  8. -
  9. Espera a que se complete la descarga y la instalación
  10. -
  11. Abre el juego y diviértete con las peleas de palitos
  12. -
-

Usando Google Play, la tienda oficial de aplicaciones de Android

-

Google Play es la tienda oficial de aplicaciones de Android, donde puedes encontrar millones de juegos y aplicaciones para tu dispositivo móvil. Google Play también tiene una versión web, que te permite acceder a la tienda desde tu navegador y descargar las aplicaciones en tu PC. Sin embargo, para poder usar esta opción, necesitas tener un dispositivo Android vinculado a tu cuenta de Google.

-

Para descargar Stickman Duelista Supremo en PC usando Google Play, sigue estos pasos:

-
    -
  1. Accede a la página web de Google Play desde tu navegador
  2. -
  3. Inicia sesión con tu cuenta de Google
  4. -
  5. Busca Stickman Duelista Supremo en la barra de búsqueda
  6. -
  7. Selecciona el juego y haz clic en el botón de instalar
  8. -
  9. Elige el dispositivo Android al que quieres enviar el juego (debe estar conectado a internet)
  10. -
  11. Espera a que se complete la descarga y la instalación en tu dispositivo Android
  12. -
  13. Copia el archivo APK del juego desde tu dispositivo Android a tu PC usando un cable USB o una conexión inalámbrica (más abajo, después de esta lista, tienes un ejemplo orientativo con adb)
  14. -
  15. Ejecuta el archivo APK en tu PC usando un emulador como GameLoop o BlueStacks
  16. -
  17. Lanza el juego y prepárate para la acción
  18. -
-
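Como referencia, este es un esquema mínimo en Python para hacer ese paso de copia desde la línea de comandos con adb. Es solo una ilustración: supone que tienes adb instalado y la depuración USB activada, y el nombre de paquete que aparece es hipotético (sustitúyelo por el identificador real del juego instalado).

```python
import subprocess

# Nombre de paquete HIPOTÉTICO: cámbialo por el identificador real del juego instalado.
paquete = "com.ejemplo.stickman.duelista"

# 'adb shell pm path' devuelve líneas con el formato "package:/ruta/al/base.apk".
salida = subprocess.run(
    ["adb", "shell", "pm", "path", paquete],
    capture_output=True, text=True, check=True,
).stdout.strip()
ruta_apk = salida.splitlines()[0].removeprefix("package:")

# Copia el APK del dispositivo al PC para abrirlo después con GameLoop o BlueStacks.
subprocess.run(["adb", "pull", ruta_apk, "stickman_duelista.apk"], check=True)
```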

¿Qué opinan los usuarios de Stickman Duelista Supremo?

-

Stickman Duelista Supremo es un juego que ha recibido muchas opiniones positivas por parte de los usuarios que lo han jugado. La mayoría de los usuarios coinciden en que es un juego muy divertido, adictivo y desafiante, que ofrece una gran variedad de opciones y posibilidades. Sin embargo, también hay algunos usuarios que han encontrado algunos problemas o aspectos mejorables en el juego. A continuación te mostramos las ventajas y desventajas del juego según las reseñas de los usuarios.

-

Las ventajas del juego según las reseñas

-

Estas son algunas de las ventajas del juego que más han destacado los usuarios en sus reseñas:

- -

Las desventajas del juego según las reseñas

-

Estas son algunas de las desventajas del juego que más han mencionado los usuarios en sus reseñas:

- -

La puntuación media del juego en diferentes plataformas

-

A pesar de las desventajas mencionadas, Stickman Duelista Supremo es un juego que ha recibido una puntuación media muy alta en diferentes plataformas. Estas son algunas de las puntuaciones que ha obtenido el juego en distintos sitios web:

| Plataforma | Puntuación |
| --- | --- |
| Google Play | 4.6 / 5 |
| App Store | No disponible |
| Aptoide | 4.5 / 5 |
| MALAVIDA | 9 / 10 |
| Juegos Friv 2020 | 4.2 / 5 |
-

¿Qué consejos y trucos hay para jugar a Stickman Duelista Supremo?

-

Stickman Duelista Supremo es un juego que requiere de habilidad, estrategia y reflejos para ganar las peleas. Si quieres mejorar tu nivel y vencer a tus rivales, te recomendamos seguir estos consejos y trucos:

-

Usar un arma que se adapte a tu estilo de juego

-

Cada arma tiene sus ventajas y desventajas, por lo que debes elegir la que mejor se adapte a tu forma de jugar. Por ejemplo, si te gusta atacar desde lejos y con precisión, puedes usar un rifle de francotirador o un arco. Si te gusta atacar desde cerca y con fuerza, puedes usar una espada o una motosierra. Si te gusta atacar con rapidez y sorpresa, puedes usar una pistola o un cuchillo.

-

Practicar con bots difíciles para mejorar tus habilidades

-

Una buena forma de mejorar tus habilidades es practicar con bots difíciles, que te pondrán a prueba y te harán aprender de tus errores. Puedes jugar al modo supervivencia o al modo personalizado y elegir el nivel de dificultad de los bots, desde fácil hasta extremo. Así podrás entrenar tu puntería, tu esquiva, tu movimiento y tu estrategia.

-

Aprovechar las opciones de gravedad, KO instantáneo y escudo de energía

-

Otra forma de mejorar tu juego es aprovechar las opciones que te ofrece el juego, como la gravedad, el KO instantáneo y el escudo de energía. Estas opciones pueden cambiar el resultado de una pelea si las usas bien. Por ejemplo, puedes usar la gravedad para hacer que los objetos caigan sobre tus enemigos o para saltar más alto. Puedes usar el KO instantáneo para acabar con tus enemigos de un solo golpe o para evitar que te maten. Puedes usar el escudo de energía para protegerte de los ataques o para empujar a tus enemigos.

-

¿Qué características tiene Stickman Duelista Supremo?

-

Stickman Duelista Supremo es un juego que tiene muchas características que lo hacen único y especial. Estas son algunas de las características que tiene el juego:

-

Un juego con gráficos 2D y música épica

-

El juego tiene unos gráficos 2D simples pero atractivos, que le dan un estilo propio y original. Los personajes, las armas y los objetos están hechos con palitos, lo que les da un aspecto divertido y caricaturesco. Los escenarios están llenos de detalles y colores, lo que les da un aspecto variado y dinámico. El juego también tiene una música épica que acompaña a las peleas, lo que les da un tono emocionante y dramático.

-

Un juego con editor de mapas y personajes personalizables

-

El juego tiene un editor de mapas y personajes personalizables, que te permite crear tus propios escenarios y personajes a tu gusto. Puedes elegir entre diferentes elementos, como el suelo, las paredes, los obstáculos, los objetos, las armas, los colores, las formas, etc. Puedes guardar tus creaciones y compartirlas con otros usuarios o jugar con ellas en el modo personalizado.

-

Un juego con torneo de jefes y modo mini juego de fútbol

-

El juego tiene un torneo de jefes y un modo mini juego de fútbol, que te ofrecen más diversión y desafío. El torneo de jefes consiste en enfrentarte a los jefes más poderosos del juego, que tienen habilidades especiales y armas únicas. El modo mini juego de fútbol consiste en jugar al fútbol con los personajes del juego, usando sus armas como balones.

-

Conclusión

-

Stickman Duelista Supremo es un juego de acción divertido y loco, que te hará pasar un buen rato con las peleas de palitos. El juego tiene una física ragdoll que hace que las peleas sean más realistas y cómicas, y diferentes modos de juego, mapas y armas que aportan variedad, originalidad y posibilidades. Además, cuenta con un multijugador para jugar con amigos o con jugadores de todo el mundo, un editor de mapas y personajes personalizables para crear tus propios escenarios y personajes, y un torneo de jefes y un mini juego de fútbol que ofrecen aún más desafío y diversión.

-

Si quieres descargar Stickman Duelista Supremo en PC, puedes hacerlo usando un emulador de Android como GameLoop o BlueStacks, o bien enviando el juego desde la versión web de Google Play a un dispositivo vinculado y abriéndolo después en el emulador. Así podrás disfrutar del juego con una pantalla más grande, un mejor sonido y un mayor rendimiento.

-

Si quieres mejorar tu nivel y vencer a tus rivales, puedes seguir los consejos y trucos que te hemos dado, como usar un arma que se adapte a tu estilo de juego, practicar con bots difíciles para mejorar tus habilidades y aprovechar las opciones de gravedad, KO instantáneo y escudo de energía.

-

En definitiva, Stickman Duelista Supremo es un juego que te recomendamos descargar y probar si te gustan los juegos de acción con palitos. Te aseguramos que no te arrepentirás y que te divertirás mucho con este juego.

-

Preguntas frecuentes

-

A continuación, te respondemos a algunas de las preguntas más frecuentes que tienen los usuarios sobre Stickman Duelista Supremo:

-
    -
  1. ¿Es gratis Stickman Duelista Supremo?
  2. -

    Sí, Stickman Duelista Supremo es un juego totalmente gratis que puedes descargar y jugar sin pagar nada. Sin embargo, el juego tiene anuncios y compras integradas que puedes usar para obtener ventajas o personalizar el juego.

    -
  3. ¿Es seguro Stickman Duelista Supremo?
  4. -

    Sí, Stickman Duelista Supremo es un juego seguro que no contiene virus ni malware. Sin embargo, debes tener cuidado con las fuentes desde las que descargas el juego, ya que algunas pueden ser fraudulentas o dañinas. Te recomendamos descargar el juego desde la tienda oficial de Google Play o desde sitios web confiables.

    -
  5. ¿Es apto para niños Stickman Duelista Supremo?
  6. -

    No, Stickman Duelista Supremo no es un juego apto para niños, ya que contiene violencia, sangre y armas. El juego está clasificado como PEGI 16, lo que significa que no es adecuado para menores de 16 años. Si eres menor de edad, debes consultar con tus padres o tutores antes de jugar a este juego.

    -
  7. ¿Qué requisitos tiene Stickman Duelista Supremo?
  8. -

    Stickman Duelista Supremo es un juego que no tiene unos requisitos muy exigentes, por lo que puede funcionar en la mayoría de los dispositivos Android. Los requisitos mínimos son los siguientes:

    - -
  9. ¿Cómo contactar con el desarrollador de Stickman Duelista Supremo?
  10. -

    Si tienes alguna duda, sugerencia o problema con el juego, puedes contactar con el desarrollador de Stickman Duelista Supremo, Neron's Brother, a través de los siguientes medios:

    -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Cleaner How to Remove Unwanted Files and Boost Your System Performance.md b/spaces/1phancelerku/anime-remove-background/Cleaner How to Remove Unwanted Files and Boost Your System Performance.md deleted file mode 100644 index 64b1edf33ae6a0335a90101ad666426feeb7fa48..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Cleaner How to Remove Unwanted Files and Boost Your System Performance.md +++ /dev/null @@ -1,147 +0,0 @@ -
    -

    How to Choose the Best Cleaner for Your Needs

    -

    Cleaners are substances or devices that are used to remove dirt, stains, grease, germs, and other unwanted materials from various surfaces and objects. They are essential for maintaining hygiene, health, beauty, and functionality in our homes, workplaces, and public spaces. However, not all cleaners are created equal. There are many types of cleaners available in the market, each with different purposes, ingredients, benefits, and drawbacks. How do you know which one is best for your needs?

    -

In this article, we explain how to choose the best cleaner for your needs, drawing on reliable sources. We also discuss the benefits and risks of using cleaners, as well as some tips for using them safely and effectively.

    -

    cleaner


    Download Zip »»» https://jinyurl.com/2uNPf9



    -

    How to Choose the Best Cleaner

    -

    Choosing the best cleaner depends on several factors, such as:

    - -

    Let's look at each factor in more detail.

    -

    The type of surface or object you want to clean

    -

    Different surfaces or objects may require different types of cleaners. For example:

    -


    - -

    It is important to use the right type of cleaner for the right type of surface or object, as using the wrong one may cause damage or ineffective cleaning.

    -

    The type and level of dirt or stain you want to remove

    -

    Different types and levels of dirt or stains may require different types and strengths of cleaners. For example:

    - -

    It is important to use the right type and strength of cleaner for the right type and level of dirt or stain, as using the wrong one may cause insufficient cleaning or unnecessary harm.

    -

    The safety and environmental impact of the cleaner

    -

    Different cleaners may have different effects on your health and the environment. For example:

    - -

    It is important to use the safest and most environmentally friendly cleaner possible for your needs, as using a harmful one may cause health problems or environmental damage.

    -

    The cost and availability of the cleaner

    -

    Different cleaners may have different prices and availability in the market. For example:

    - -

It is important to use the most affordable and accessible cleaner possible for your needs, as using an expensive or scarce one may cause financial burden or inconvenience.

    -

    The ease of use and effectiveness of the cleaner

    -

    Different cleaners may have different levels of ease of use and effectiveness in cleaning. For example:

    - -

It is important to use the easiest and most effective cleaner possible for your needs, as using a difficult or less effective one may cause frustration or dissatisfaction.

    -

    Benefits of Using Cleaners

    Using cleaners can have many benefits for you and your surroundings. Some of the benefits are:

    - -

    Risks of Using Cleaners

    -

    Using cleaners can also have some risks for you and your surroundings. Some of the risks are:

    - -

    Tips for Using Cleaners Safely and Effectively

    -

    To minimize the risks and maximize the benefits of using cleaners, you should follow some tips for using them safely and effectively. Some of the tips are:

    - -

    Conclusion

    -

    Cleaners are useful and important for keeping our surroundings clean and healthy. However, they are not all the same. There are many types of cleaners available in the market, each with different purposes, ingredients, benefits, and drawbacks. To choose the best cleaner for your needs, you should consider the following factors:

    - -

    You should also follow some tips for using cleaners safely and effectively, such as reading and following the instructions and warnings on the label, wearing protective gear, using the right amount and method of application, testing the cleaner on a small or hidden area, and rinsing and drying the surface or object after cleaning.

    -

    By choosing and using cleaners wisely, you can enjoy the benefits of having a clean and healthy environment, while avoiding the risks of harming yourself or your surroundings.

    -

    FAQs

    -

    Here are some frequently asked questions about cleaners:

    -

    What is the difference between cleaning and disinfecting?

    -

    Cleaning is the process of removing dirt, dust, stains, and other visible impurities from surfaces or objects. Disinfecting is the process of killing or inactivating germs, such as bacteria, viruses, fungi, and parasites, that can cause diseases. Cleaning does not necessarily disinfect, and disinfecting does not necessarily clean. Therefore, it is recommended to do both cleaning and disinfecting for optimal hygiene.

    -

    What are some natural or homemade cleaners?

    -

    Some natural or homemade cleaners are vinegar, lemon juice, baking soda, salt, hydrogen peroxide, tea tree oil, etc. These substances can be used alone or in combination to clean various surfaces or objects. They are usually safe, cheap, and eco-friendly. However, they may not be as effective as synthetic or chemical cleaners for some types of dirt or stains. They may also have some drawbacks such as unpleasant smell, limited shelf life, or potential reactions with certain materials.

    -

    How do I dispose of cleaners properly?

    -

    To dispose of cleaners properly, you should follow the instructions and warnings on the label. Some cleaners can be poured down the drain with plenty of water. Some cleaners need to be neutralized with another substance before disposal. Some cleaners need to be taken to a hazardous waste collection site or facility. You should never mix different cleaners together, as they may cause dangerous reactions. You should also recycle or reuse the containers or packaging of the cleaners if possible.

    -

    How do I store cleaners safely?

    -

    To store cleaners safely, you should keep them in their original containers with their labels intact. You should keep them away from children, pets, food, heat, fire, or sunlight. You should store them in a cool, dry, and well-ventilated place. You should also keep them separate from each other, especially those that may react with each other.

    -

    How do I make cleaners more effective?

    -

    To make cleaners more effective, you should follow some tips such as:

    -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/801artistry/RVC801/lib/uvr5_pack/lib_v5/layers_123821KB.py b/spaces/801artistry/RVC801/lib/uvr5_pack/lib_v5/layers_123821KB.py deleted file mode 100644 index b82f06bb4993cd63f076e68d7e24185269b1bc42..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/lib/uvr5_pack/lib_v5/layers_123821KB.py +++ /dev/null @@ -1,118 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from . import spec_utils - - -class Conv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(Conv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nout, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - bias=False, - ), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class SeperableConv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(SeperableConv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nin, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - groups=nin, - bias=False, - ), - nn.Conv2d(nin, nout, kernel_size=1, bias=False), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class Encoder(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU): - super(Encoder, self).__init__() - self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ) - - def __call__(self, x): - skip = self.conv1(x) - h = self.conv2(skip) - - return h, skip - - -class Decoder(nn.Module): - def __init__( - self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False - ): - super(Decoder, self).__init__() - self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.dropout = nn.Dropout2d(0.1) if dropout else None - - def __call__(self, x, skip=None): - x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True) - if skip is not None: - skip = spec_utils.crop_center(skip, x) - x = torch.cat([x, skip], dim=1) - h = self.conv(x) - - if self.dropout is not None: - h = self.dropout(h) - - return h - - -class ASPPModule(nn.Module): - def __init__(self, nin, nout, dilations=(4, 8, 16), activ=nn.ReLU): - super(ASPPModule, self).__init__() - self.conv1 = nn.Sequential( - nn.AdaptiveAvgPool2d((1, None)), - Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ), - ) - self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ) - self.conv3 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[0], dilations[0], activ=activ - ) - self.conv4 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[1], dilations[1], activ=activ - ) - self.conv5 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.bottleneck = nn.Sequential( - Conv2DBNActiv(nin * 5, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1) - ) - - def forward(self, x): - _, _, h, w = x.size() - feat1 = F.interpolate( - self.conv1(x), size=(h, w), mode="bilinear", align_corners=True - ) - feat2 = self.conv2(x) - feat3 = self.conv3(x) - feat4 = self.conv4(x) - feat5 = self.conv5(x) - out = torch.cat((feat1, feat2, feat3, feat4, feat5), dim=1) - bottle = self.bottleneck(out) - return bottle diff --git a/spaces/AIConsultant/MusicGen/audiocraft/utils/samples/__init__.py 
b/spaces/AIConsultant/MusicGen/audiocraft/utils/samples/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/audiocraft/utils/samples/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/metrics.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/metrics.py deleted file mode 100644 index 16905224c665491b9869d7641c1fe17689816a4b..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/metrics.py +++ /dev/null @@ -1,69 +0,0 @@ -import logging - -import numpy as np -import scipy -import torch -from sklearn.metrics import average_precision_score, roc_auc_score - -logger = logging.getLogger(f'main.{__name__}') - -def metrics(targets, outputs, topk=(1, 5)): - """ - Adapted from https://github.com/hche11/VGGSound/blob/master/utils.py - - Calculate statistics including mAP, AUC, and d-prime. - Args: - output: 2d tensors, (dataset_size, classes_num) - before softmax - target: 1d tensors, (dataset_size, ) - topk: tuple - Returns: - metric_dict: a dict of metrics - """ - metrics_dict = dict() - - num_cls = outputs.shape[-1] - - # accuracy@k - _, preds = torch.topk(outputs, k=max(topk), dim=1) - correct_for_maxtopk = preds == targets.view(-1, 1).expand_as(preds) - for k in topk: - metrics_dict[f'accuracy_{k}'] = float(correct_for_maxtopk[:, :k].sum() / correct_for_maxtopk.shape[0]) - - # avg precision, average roc_auc, and dprime - targets = torch.nn.functional.one_hot(targets, num_classes=num_cls) - - # ids of the predicted classes (same as softmax) - targets_pred = torch.softmax(outputs, dim=1) - - targets = targets.numpy() - targets_pred = targets_pred.numpy() - - # one-vs-rest - avg_p = [average_precision_score(targets[:, c], targets_pred[:, c], average=None) for c in range(num_cls)] - try: - roc_aucs = [roc_auc_score(targets[:, c], targets_pred[:, c], average=None) for c in range(num_cls)] - except ValueError: - logger.warning('Weird... Some classes never occured in targets. Do not trust the metrics.') - roc_aucs = np.array([0.5]) - avg_p = np.array([0]) - - metrics_dict['mAP'] = np.mean(avg_p) - metrics_dict['mROCAUC'] = np.mean(roc_aucs) - # Percent point function (ppf) (inverse of cdf — percentiles). 
- metrics_dict['dprime'] = scipy.stats.norm().ppf(metrics_dict['mROCAUC']) * np.sqrt(2) - - return metrics_dict - - -if __name__ == '__main__': - targets = torch.tensor([3, 3, 1, 2, 1, 0]) - outputs = torch.tensor([ - [1.2, 1.3, 1.1, 1.5], - [1.3, 1.4, 1.0, 1.1], - [1.5, 1.1, 1.4, 1.3], - [1.0, 1.2, 1.4, 1.5], - [1.2, 1.3, 1.1, 1.1], - [1.2, 1.1, 1.1, 1.1], - ]).float() - metrics_dict = metrics(targets, outputs, topk=(1, 3)) - print(metrics_dict) diff --git a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/losses_audio/vggishish/logger.py b/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/losses_audio/vggishish/logger.py deleted file mode 100644 index c9737cc165654ce51bb2204636ce78f34acd0a9e..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/losses_audio/vggishish/logger.py +++ /dev/null @@ -1,87 +0,0 @@ -import logging -import os -import time -from shutil import copytree, ignore_patterns - -import torch -from omegaconf import OmegaConf -from torch.utils.tensorboard import SummaryWriter, summary - - -class LoggerWithTBoard(SummaryWriter): - - def __init__(self, cfg): - # current time stamp and experiment log directory - self.start_time = time.strftime('%y-%m-%dT%H-%M-%S', time.localtime()) - self.logdir = os.path.join(cfg.logdir, self.start_time) - # init tboard - super().__init__(self.logdir) - # backup the cfg - OmegaConf.save(cfg, os.path.join(self.log_dir, 'cfg.yaml')) - # backup the code state - if cfg.log_code_state: - dest_dir = os.path.join(self.logdir, 'code') - copytree(os.getcwd(), dest_dir, ignore=ignore_patterns(*cfg.patterns_to_ignore)) - - # init logger which handles printing and logging mostly same things to the log file - self.print_logger = logging.getLogger('main') - self.print_logger.setLevel(logging.INFO) - msgfmt = '[%(levelname)s] %(asctime)s - %(name)s \n %(message)s' - datefmt = '%d %b %Y %H:%M:%S' - formatter = logging.Formatter(msgfmt, datefmt) - # stdout - sh = logging.StreamHandler() - sh.setLevel(logging.DEBUG) - sh.setFormatter(formatter) - self.print_logger.addHandler(sh) - # log file - fh = logging.FileHandler(os.path.join(self.log_dir, 'log.txt')) - fh.setLevel(logging.INFO) - fh.setFormatter(formatter) - self.print_logger.addHandler(fh) - - self.print_logger.info(f'Saving logs and checkpoints @ {self.logdir}') - - def log_param_num(self, model): - param_num = sum(p.numel() for p in model.parameters() if p.requires_grad) - self.print_logger.info(f'The number of parameters: {param_num/1e+6:.3f} mil') - self.add_scalar('num_params', param_num, 0) - return param_num - - def log_iter_loss(self, loss, iter, phase): - self.add_scalar(f'{phase}/loss_iter', loss, iter) - - def log_epoch_loss(self, loss, epoch, phase): - self.add_scalar(f'{phase}/loss', loss, epoch) - self.print_logger.info(f'{phase} ({epoch}): loss {loss:.3f};') - - def log_epoch_metrics(self, metrics_dict, epoch, phase): - for metric, val in metrics_dict.items(): - self.add_scalar(f'{phase}/{metric}', val, epoch) - metrics_dict = {k: round(v, 4) for k, v in metrics_dict.items()} - self.print_logger.info(f'{phase} ({epoch}) metrics: {metrics_dict};') - - def log_test_metrics(self, metrics_dict, hparams_dict, best_epoch): - allowed_types = (int, float, str, bool, torch.Tensor) - hparams_dict = {k: v for k, v in hparams_dict.items() if isinstance(v, allowed_types)} - metrics_dict = {f'test/{k}': round(v, 4) for k, v in metrics_dict.items()} - exp, ssi, sei = summary.hparams(hparams_dict, metrics_dict) - self.file_writer.add_summary(exp) - 
self.file_writer.add_summary(ssi) - self.file_writer.add_summary(sei) - for k, v in metrics_dict.items(): - self.add_scalar(k, v, best_epoch) - self.print_logger.info(f'test ({best_epoch}) metrics: {metrics_dict};') - - def log_best_model(self, model, loss, epoch, optimizer, metrics_dict): - model_name = model.__class__.__name__ - self.best_model_path = os.path.join(self.logdir, f'{model_name}-{self.start_time}.pt') - checkpoint = { - 'loss': loss, - 'metrics': metrics_dict, - 'epoch': epoch, - 'optimizer': optimizer.state_dict(), - 'model': model.state_dict(), - } - torch.save(checkpoint, self.best_model_path) - self.print_logger.info(f'Saved model in {self.best_model_path}') diff --git a/spaces/AIZero2HeroBootcamp/StaticHTML5Playcanvas/README.md b/spaces/AIZero2HeroBootcamp/StaticHTML5Playcanvas/README.md deleted file mode 100644 index 2a5dd5503f844a896dcb33adf14dc4ffffbe795d..0000000000000000000000000000000000000000 --- a/spaces/AIZero2HeroBootcamp/StaticHTML5Playcanvas/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: StaticHTML5Playcanvas -emoji: 👀 -colorFrom: gray -colorTo: red -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/midas/midas/midas_net_custom.py b/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/midas/midas/midas_net_custom.py deleted file mode 100644 index 50e4acb5e53d5fabefe3dde16ab49c33c2b7797c..0000000000000000000000000000000000000000 --- a/spaces/Adapter/CoAdapter/ldm/modules/extra_condition/midas/midas/midas_net_custom.py +++ /dev/null @@ -1,128 +0,0 @@ -"""MidashNet: Network for monocular depth estimation trained by mixing several datasets. -This file contains code that is adapted from -https://github.com/thomasjpfan/pytorch_refinenet/blob/master/pytorch_refinenet/refinenet/refinenet_4cascade.py -""" -import torch -import torch.nn as nn - -from .base_model import BaseModel -from .blocks import FeatureFusionBlock, FeatureFusionBlock_custom, Interpolate, _make_encoder - - -class MidasNet_small(BaseModel): - """Network for monocular depth estimation. - """ - - def __init__(self, path=None, features=64, backbone="efficientnet_lite3", non_negative=True, exportable=True, channels_last=False, align_corners=True, - blocks={'expand': True}): - """Init. - - Args: - path (str, optional): Path to saved model. Defaults to None. - features (int, optional): Number of features. Defaults to 256. - backbone (str, optional): Backbone network for encoder. 
Defaults to resnet50 - """ - print("Loading weights: ", path) - - super(MidasNet_small, self).__init__() - - use_pretrained = False if path else True - - self.channels_last = channels_last - self.blocks = blocks - self.backbone = backbone - - self.groups = 1 - - features1=features - features2=features - features3=features - features4=features - self.expand = False - if "expand" in self.blocks and self.blocks['expand'] == True: - self.expand = True - features1=features - features2=features*2 - features3=features*4 - features4=features*8 - - self.pretrained, self.scratch = _make_encoder(self.backbone, features, use_pretrained, groups=self.groups, expand=self.expand, exportable=exportable) - - self.scratch.activation = nn.ReLU(False) - - self.scratch.refinenet4 = FeatureFusionBlock_custom(features4, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners) - self.scratch.refinenet3 = FeatureFusionBlock_custom(features3, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners) - self.scratch.refinenet2 = FeatureFusionBlock_custom(features2, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners) - self.scratch.refinenet1 = FeatureFusionBlock_custom(features1, self.scratch.activation, deconv=False, bn=False, align_corners=align_corners) - - - self.scratch.output_conv = nn.Sequential( - nn.Conv2d(features, features//2, kernel_size=3, stride=1, padding=1, groups=self.groups), - Interpolate(scale_factor=2, mode="bilinear"), - nn.Conv2d(features//2, 32, kernel_size=3, stride=1, padding=1), - self.scratch.activation, - nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0), - nn.ReLU(True) if non_negative else nn.Identity(), - nn.Identity(), - ) - - if path: - self.load(path) - - - def forward(self, x): - """Forward pass. 
- - Args: - x (tensor): input data (image) - - Returns: - tensor: depth - """ - if self.channels_last==True: - print("self.channels_last = ", self.channels_last) - x.contiguous(memory_format=torch.channels_last) - - - layer_1 = self.pretrained.layer1(x) - layer_2 = self.pretrained.layer2(layer_1) - layer_3 = self.pretrained.layer3(layer_2) - layer_4 = self.pretrained.layer4(layer_3) - - layer_1_rn = self.scratch.layer1_rn(layer_1) - layer_2_rn = self.scratch.layer2_rn(layer_2) - layer_3_rn = self.scratch.layer3_rn(layer_3) - layer_4_rn = self.scratch.layer4_rn(layer_4) - - - path_4 = self.scratch.refinenet4(layer_4_rn) - path_3 = self.scratch.refinenet3(path_4, layer_3_rn) - path_2 = self.scratch.refinenet2(path_3, layer_2_rn) - path_1 = self.scratch.refinenet1(path_2, layer_1_rn) - - out = self.scratch.output_conv(path_1) - - return torch.squeeze(out, dim=1) - - - -def fuse_model(m): - prev_previous_type = nn.Identity() - prev_previous_name = '' - previous_type = nn.Identity() - previous_name = '' - for name, module in m.named_modules(): - if prev_previous_type == nn.Conv2d and previous_type == nn.BatchNorm2d and type(module) == nn.ReLU: - # print("FUSED ", prev_previous_name, previous_name, name) - torch.quantization.fuse_modules(m, [prev_previous_name, previous_name, name], inplace=True) - elif prev_previous_type == nn.Conv2d and previous_type == nn.BatchNorm2d: - # print("FUSED ", prev_previous_name, previous_name) - torch.quantization.fuse_modules(m, [prev_previous_name, previous_name], inplace=True) - # elif previous_type == nn.Conv2d and type(module) == nn.ReLU: - # print("FUSED ", previous_name, name) - # torch.quantization.fuse_modules(m, [previous_name, name], inplace=True) - - prev_previous_type = previous_type - prev_previous_name = previous_name - previous_type = type(module) - previous_name = name \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/checkbox/Checkbox.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/checkbox/Checkbox.d.ts deleted file mode 100644 index fb2e6098b7bebd90f5e0f4c2386ecf4c9e3ee832..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/checkbox/Checkbox.d.ts +++ /dev/null @@ -1,2 +0,0 @@ -import Checkbox from '../../../plugins/checkbox'; -export default Checkbox; \ No newline at end of file diff --git a/spaces/AkitoP/umamusume_bert_vits2/utils.py b/spaces/AkitoP/umamusume_bert_vits2/utils.py deleted file mode 100644 index 5f98aafadb83a9f341d6d9d3401c6c3101485b4e..0000000000000000000000000000000000000000 --- a/spaces/AkitoP/umamusume_bert_vits2/utils.py +++ /dev/null @@ -1,356 +0,0 @@ -import os -import glob -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch - -MATPLOTLIB_FLAG = False - -logger = logging.getLogger(__name__) - - -def load_checkpoint(checkpoint_path, model, optimizer=None, skip_optimizer=False): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location="cpu") - iteration = checkpoint_dict["iteration"] - learning_rate = checkpoint_dict["learning_rate"] - if ( - optimizer is not None - and not skip_optimizer - and checkpoint_dict["optimizer"] is not None - ): - optimizer.load_state_dict(checkpoint_dict["optimizer"]) - elif optimizer is None and not skip_optimizer: - # else: Disable this line if Infer and resume checkpoint,then enable the line upper - new_opt_dict = 
optimizer.state_dict() - new_opt_dict_params = new_opt_dict["param_groups"][0]["params"] - new_opt_dict["param_groups"] = checkpoint_dict["optimizer"]["param_groups"] - new_opt_dict["param_groups"][0]["params"] = new_opt_dict_params - optimizer.load_state_dict(new_opt_dict) - - saved_state_dict = checkpoint_dict["model"] - if hasattr(model, "module"): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - - new_state_dict = {} - for k, v in state_dict.items(): - try: - # assert "emb_g" not in k - new_state_dict[k] = saved_state_dict[k] - assert saved_state_dict[k].shape == v.shape, ( - saved_state_dict[k].shape, - v.shape, - ) - except: - # For upgrading from the old version - if "ja_bert_proj" in k: - v = torch.zeros_like(v) - logger.warn( - f"Seems you are using the old version of the model, the {k} is automatically set to zero for backward compatibility" - ) - else: - logger.error(f"{k} is not in the checkpoint") - - new_state_dict[k] = v - - if hasattr(model, "module"): - model.module.load_state_dict(new_state_dict, strict=False) - else: - model.load_state_dict(new_state_dict, strict=False) - - logger.info( - "Loaded checkpoint '{}' (iteration {})".format(checkpoint_path, iteration) - ) - - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - logger.info( - "Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path - ) - ) - if hasattr(model, "module"): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save( - { - "model": state_dict, - "iteration": iteration, - "optimizer": optimizer.state_dict(), - "learning_rate": learning_rate, - }, - checkpoint_path, - ) - - -def summarize( - writer, - global_step, - scalars={}, - histograms={}, - images={}, - audios={}, - audio_sampling_rate=22050, -): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats="HWC") - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger("matplotlib") - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", interpolation="none") - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep="") - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger("matplotlib") - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow( - 
alignment.transpose(), aspect="auto", origin="lower", interpolation="none" - ) - fig.colorbar(im, ax=ax) - xlabel = "Decoder timestep" - if info is not None: - xlabel += "\n\n" + info - plt.xlabel(xlabel) - plt.ylabel("Encoder timestep") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep="") - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding="utf-8") as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument( - "-c", - "--config", - type=str, - default="./configs/base.json", - help="JSON file for configuration", - ) - parser.add_argument("-m", "--model", type=str, required=True, help="Model name") - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r", encoding="utf-8") as f: - data = f.read() - with open(config_save_path, "w", encoding="utf-8") as f: - f.write(data) - else: - with open(config_save_path, "r", vencoding="utf-8") as f: - data = f.read() - config = json.loads(data) - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def clean_checkpoints(path_to_models="logs/44k/", n_ckpts_to_keep=2, sort_by_time=True): - """Freeing up space by deleting saved ckpts - - Arguments: - path_to_models -- Path to the model directory - n_ckpts_to_keep -- Number of ckpts to keep, excluding G_0.pth and D_0.pth - sort_by_time -- True -> chronologically delete ckpts - False -> lexicographically delete ckpts - """ - import re - - ckpts_files = [ - f - for f in os.listdir(path_to_models) - if os.path.isfile(os.path.join(path_to_models, f)) - ] - - def name_key(_f): - return int(re.compile("._(\\d+)\\.pth").match(_f).group(1)) - - def time_key(_f): - return os.path.getmtime(os.path.join(path_to_models, _f)) - - sort_key = time_key if sort_by_time else name_key - - def x_sorted(_x): - return sorted( - [f for f in ckpts_files if f.startswith(_x) and not f.endswith("_0.pth")], - key=sort_key, - ) - - to_del = [ - os.path.join(path_to_models, fn) - for fn in (x_sorted("G")[:-n_ckpts_to_keep] + x_sorted("D")[:-n_ckpts_to_keep]) - ] - - def del_info(fn): - return logger.info(f".. 
Free up space by deleting ckpt {fn}") - - def del_routine(x): - return [os.remove(x), del_info(x)] - - [del_routine(fn) for fn in to_del] - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r", encoding="utf-8") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r", encoding="utf-8") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn( - "{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - ) - ) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn( - "git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8] - ) - ) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams: - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/__init__.py b/spaces/Amrrs/DragGan-Inversion/stylegan_human/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/CODE_OF_CONDUCT.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/CODE_OF_CONDUCT.md deleted file mode 100644 index 05954dfae2798fd0707c3c100ced94855a938eac..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/CODE_OF_CONDUCT.md +++ /dev/null @@ -1,130 +0,0 @@ - -# Contributor Covenant Code of Conduct - -## Our Pledge - -We as members, contributors, and leaders pledge to make participation in our -community a harassment-free experience for everyone, regardless of age, body -size, visible or invisible disability, ethnicity, sex characteristics, gender -identity and expression, level of experience, education, socio-economic status, -nationality, personal appearance, race, religion, or sexual identity -and orientation. - -We pledge to act and interact in ways that contribute to an open, welcoming, -diverse, inclusive, and healthy community. 
- -## Our Standards - -Examples of behavior that contributes to a positive environment for our -community include: - -* Demonstrating empathy and kindness toward other people -* Being respectful of differing opinions, viewpoints, and experiences -* Giving and gracefully accepting constructive feedback -* Accepting responsibility and apologizing to those affected by our mistakes, - and learning from the experience -* Focusing on what is best not just for us as individuals, but for the - overall diffusers community - -Examples of unacceptable behavior include: - -* The use of sexualized language or imagery, and sexual attention or - advances of any kind -* Trolling, insulting or derogatory comments, and personal or political attacks -* Public or private harassment -* Publishing others' private information, such as a physical or email - address, without their explicit permission -* Spamming issues or PRs with links to projects unrelated to this library -* Other conduct which could reasonably be considered inappropriate in a - professional setting - -## Enforcement Responsibilities - -Community leaders are responsible for clarifying and enforcing our standards of -acceptable behavior and will take appropriate and fair corrective action in -response to any behavior that they deem inappropriate, threatening, offensive, -or harmful. - -Community leaders have the right and responsibility to remove, edit, or reject -comments, commits, code, wiki edits, issues, and other contributions that are -not aligned to this Code of Conduct, and will communicate reasons for moderation -decisions when appropriate. - -## Scope - -This Code of Conduct applies within all community spaces, and also applies when -an individual is officially representing the community in public spaces. -Examples of representing our community include using an official e-mail address, -posting via an official social media account, or acting as an appointed -representative at an online or offline event. - -## Enforcement - -Instances of abusive, harassing, or otherwise unacceptable behavior may be -reported to the community leaders responsible for enforcement at -feedback@huggingface.co. -All complaints will be reviewed and investigated promptly and fairly. - -All community leaders are obligated to respect the privacy and security of the -reporter of any incident. - -## Enforcement Guidelines - -Community leaders will follow these Community Impact Guidelines in determining -the consequences for any action they deem in violation of this Code of Conduct: - -### 1. Correction - -**Community Impact**: Use of inappropriate language or other behavior deemed -unprofessional or unwelcome in the community. - -**Consequence**: A private, written warning from community leaders, providing -clarity around the nature of the violation and an explanation of why the -behavior was inappropriate. A public apology may be requested. - -### 2. Warning - -**Community Impact**: A violation through a single incident or series -of actions. - -**Consequence**: A warning with consequences for continued behavior. No -interaction with the people involved, including unsolicited interaction with -those enforcing the Code of Conduct, for a specified period of time. This -includes avoiding interactions in community spaces as well as external channels -like social media. Violating these terms may lead to a temporary or -permanent ban. - -### 3. Temporary Ban - -**Community Impact**: A serious violation of community standards, including -sustained inappropriate behavior. 
- -**Consequence**: A temporary ban from any sort of interaction or public -communication with the community for a specified period of time. No public or -private interaction with the people involved, including unsolicited interaction -with those enforcing the Code of Conduct, is allowed during this period. -Violating these terms may lead to a permanent ban. - -### 4. Permanent Ban - -**Community Impact**: Demonstrating a pattern of violation of community -standards, including sustained inappropriate behavior, harassment of an -individual, or aggression toward or disparagement of classes of individuals. - -**Consequence**: A permanent ban from any sort of public interaction within -the community. - -## Attribution - -This Code of Conduct is adapted from the [Contributor Covenant][homepage], -version 2.0, available at -https://www.contributor-covenant.org/version/2/0/code_of_conduct.html. - -Community Impact Guidelines were inspired by [Mozilla's code of conduct -enforcement ladder](https://github.com/mozilla/diversity). - -[homepage]: https://www.contributor-covenant.org - -For answers to common questions about this code of conduct, see the FAQ at -https://www.contributor-covenant.org/faq. Translations are available at -https://www.contributor-covenant.org/translations. diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/mulit_token_textual_inversion/textual_inversion.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/mulit_token_textual_inversion/textual_inversion.py deleted file mode 100644 index 63b6c3860a2967db967561581fa060f5dae64082..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/research_projects/mulit_token_textual_inversion/textual_inversion.py +++ /dev/null @@ -1,927 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -# Copyright 2023 The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and - -import argparse -import logging -import math -import os -import random -from pathlib import Path - -import numpy as np -import PIL -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -import transformers -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import ProjectConfiguration, set_seed -from huggingface_hub import create_repo, upload_folder -from multi_token_clip import MultiTokenCLIPTokenizer - -# TODO: remove and import from diffusers.utils when the new version of diffusers is released -from packaging import version -from PIL import Image -from torch.utils.data import Dataset -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import CLIPTextModel - -import diffusers -from diffusers import ( - AutoencoderKL, - DDPMScheduler, - DiffusionPipeline, - DPMSolverMultistepScheduler, - StableDiffusionPipeline, - UNet2DConditionModel, -) -from diffusers.optimization import get_scheduler -from diffusers.utils import check_min_version, is_wandb_available -from diffusers.utils.import_utils import is_xformers_available - - -if version.parse(version.parse(PIL.__version__).base_version) >= version.parse("9.1.0"): - PIL_INTERPOLATION = { - "linear": PIL.Image.Resampling.BILINEAR, - "bilinear": PIL.Image.Resampling.BILINEAR, - "bicubic": PIL.Image.Resampling.BICUBIC, - "lanczos": PIL.Image.Resampling.LANCZOS, - "nearest": PIL.Image.Resampling.NEAREST, - } -else: - PIL_INTERPOLATION = { - "linear": PIL.Image.LINEAR, - "bilinear": PIL.Image.BILINEAR, - "bicubic": PIL.Image.BICUBIC, - "lanczos": PIL.Image.LANCZOS, - "nearest": PIL.Image.NEAREST, - } -# ------------------------------------------------------------------------------ - - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. 
-check_min_version("0.14.0.dev0") - -logger = get_logger(__name__) - - -def add_tokens(tokenizer, text_encoder, placeholder_token, num_vec_per_token=1, initializer_token=None): - """ - Add tokens to the tokenizer and set the initial value of token embeddings - """ - tokenizer.add_placeholder_tokens(placeholder_token, num_vec_per_token=num_vec_per_token) - text_encoder.resize_token_embeddings(len(tokenizer)) - token_embeds = text_encoder.get_input_embeddings().weight.data - placeholder_token_ids = tokenizer.encode(placeholder_token, add_special_tokens=False) - if initializer_token: - token_ids = tokenizer.encode(initializer_token, add_special_tokens=False) - for i, placeholder_token_id in enumerate(placeholder_token_ids): - token_embeds[placeholder_token_id] = token_embeds[token_ids[i * len(token_ids) // num_vec_per_token]] - else: - for i, placeholder_token_id in enumerate(placeholder_token_ids): - token_embeds[placeholder_token_id] = torch.randn_like(token_embeds[placeholder_token_id]) - return placeholder_token - - -def save_progress(tokenizer, text_encoder, accelerator, save_path): - for placeholder_token in tokenizer.token_map: - placeholder_token_ids = tokenizer.encode(placeholder_token, add_special_tokens=False) - learned_embeds = accelerator.unwrap_model(text_encoder).get_input_embeddings().weight[placeholder_token_ids] - if len(placeholder_token_ids) == 1: - learned_embeds = learned_embeds[None] - learned_embeds_dict = {placeholder_token: learned_embeds.detach().cpu()} - torch.save(learned_embeds_dict, save_path) - - -def load_multitoken_tokenizer(tokenizer, text_encoder, learned_embeds_dict): - for placeholder_token in learned_embeds_dict: - placeholder_embeds = learned_embeds_dict[placeholder_token] - num_vec_per_token = placeholder_embeds.shape[0] - placeholder_embeds = placeholder_embeds.to(dtype=text_encoder.dtype) - add_tokens(tokenizer, text_encoder, placeholder_token, num_vec_per_token=num_vec_per_token) - placeholder_token_ids = tokenizer.encode(placeholder_token, add_special_tokens=False) - token_embeds = text_encoder.get_input_embeddings().weight.data - for i, placeholder_token_id in enumerate(placeholder_token_ids): - token_embeds[placeholder_token_id] = placeholder_embeds[i] - - -def load_multitoken_tokenizer_from_automatic(tokenizer, text_encoder, automatic_dict, placeholder_token): - """ - Automatic1111's tokens have format - {'string_to_token': {'*': 265}, 'string_to_param': {'*': tensor([[ 0.0833, 0.0030, 0.0057, ..., -0.0264, -0.0616, -0.0529], - [ 0.0058, -0.0190, -0.0584, ..., -0.0025, -0.0945, -0.0490], - [ 0.0916, 0.0025, 0.0365, ..., -0.0685, -0.0124, 0.0728], - [ 0.0812, -0.0199, -0.0100, ..., -0.0581, -0.0780, 0.0254]], - requires_grad=True)}, 'name': 'FloralMarble-400', 'step': 399, 'sd_checkpoint': '4bdfc29c', 'sd_checkpoint_name': 'SD2.1-768'} - """ - learned_embeds_dict = {} - learned_embeds_dict[placeholder_token] = automatic_dict["string_to_param"]["*"] - load_multitoken_tokenizer(tokenizer, text_encoder, learned_embeds_dict) - - -def get_mask(tokenizer, accelerator): - # Get the mask of the weights that won't change - mask = torch.ones(len(tokenizer)).to(accelerator.device, dtype=torch.bool) - for placeholder_token in tokenizer.token_map: - placeholder_token_ids = tokenizer.encode(placeholder_token, add_special_tokens=False) - for i in range(len(placeholder_token_ids)): - mask = mask & (torch.arange(len(tokenizer)) != placeholder_token_ids[i]).to(accelerator.device) - return mask - - -def parse_args(): - parser = 
argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--progressive_tokens_max_steps", - type=int, - default=2000, - help="The number of steps until all tokens will be used.", - ) - parser.add_argument( - "--progressive_tokens", - action="store_true", - help="Progressively train the tokens. For example, first train for 1 token, then 2 tokens and so on.", - ) - parser.add_argument("--vector_shuffle", action="store_true", help="Shuffling tokens durint training") - parser.add_argument( - "--num_vec_per_token", - type=int, - default=1, - help=( - "The number of vectors used to represent the placeholder token. The higher the number, the better the" - " result at the cost of editability. This can be fixed by prompt editing." - ), - ) - parser.add_argument( - "--save_steps", - type=int, - default=500, - help="Save learned_embeds.bin every X updates steps.", - ) - parser.add_argument( - "--only_save_embeds", - action="store_true", - default=False, - help="Save only the embeddings for the new concept.", - ) - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--revision", - type=str, - default=None, - required=False, - help="Revision of pretrained model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--tokenizer_name", - type=str, - default=None, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--train_data_dir", type=str, default=None, required=True, help="A folder containing the training data." - ) - parser.add_argument( - "--placeholder_token", - type=str, - default=None, - required=True, - help="A token to use as a placeholder for the concept.", - ) - parser.add_argument( - "--initializer_token", type=str, default=None, required=True, help="A token to use as initializer word." - ) - parser.add_argument("--learnable_property", type=str, default="object", help="Choose between 'object' and 'style'") - parser.add_argument("--repeats", type=int, default=100, help="How many times to repeat the training data.") - parser.add_argument( - "--output_dir", - type=str, - default="text-inversion-model", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", action="store_true", help="Whether to center crop images before resizing to resolution." - ) - parser.add_argument( - "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument("--num_train_epochs", type=int, default=100) - parser.add_argument( - "--max_train_steps", - type=int, - default=5000, - help="Total number of training steps to perform. 
If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--gradient_checkpointing", - action="store_true", - help="Whether or not to use gradient checkpointing to save memory at the expense of slower backward pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=1e-4, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=False, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument( - "--dataloader_num_workers", - type=int, - default=0, - help=( - "Number of subprocesses to use for data loading. 0 means that the data will be loaded in the main process." - ), - ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default="no", - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose" - "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10." - "and an Nvidia Ampere GPU." - ), - ) - parser.add_argument( - "--allow_tf32", - action="store_true", - help=( - "Whether or not to allow TF32 on Ampere GPUs. Can be used to speed up training. For more information, see" - " https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices" - ), - ) - parser.add_argument( - "--report_to", - type=str, - default="tensorboard", - help=( - 'The integration to report the results and logs to. Supported platforms are `"tensorboard"`' - ' (default), `"wandb"` and `"comet_ml"`. Use `"all"` to report to all integrations.' 
- ), - ) - parser.add_argument( - "--validation_prompt", - type=str, - default=None, - help="A prompt that is used during validation to verify that the model is learning.", - ) - parser.add_argument( - "--num_validation_images", - type=int, - default=4, - help="Number of images that should be generated during validation with `validation_prompt`.", - ) - parser.add_argument( - "--validation_epochs", - type=int, - default=50, - help=( - "Run validation every X epochs. Validation consists of running the prompt" - " `args.validation_prompt` multiple times: `args.num_validation_images`" - " and logging the images." - ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - parser.add_argument( - "--checkpointing_steps", - type=int, - default=500, - help=( - "Save a checkpoint of the training state every X updates. These checkpoints are only suitable for resuming" - " training using `--resume_from_checkpoint`." - ), - ) - parser.add_argument( - "--checkpoints_total_limit", - type=int, - default=None, - help=( - "Max number of checkpoints to store. Passed as `total_limit` to the `Accelerator` `ProjectConfiguration`." - " See Accelerator::save_state https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.save_state" - " for more docs" - ), - ) - parser.add_argument( - "--resume_from_checkpoint", - type=str, - default=None, - help=( - "Whether training should be resumed from a previous checkpoint. Use a path saved by" - ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' - ), - ) - parser.add_argument( - "--enable_xformers_memory_efficient_attention", action="store_true", help="Whether or not to use xformers." - ) - - args = parser.parse_args() - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - if args.train_data_dir is None: - raise ValueError("You must specify a train data directory.") - - return args - - -imagenet_templates_small = [ - "a photo of a {}", - "a rendering of a {}", - "a cropped photo of the {}", - "the photo of a {}", - "a photo of a clean {}", - "a photo of a dirty {}", - "a dark photo of the {}", - "a photo of my {}", - "a photo of the cool {}", - "a close-up photo of a {}", - "a bright photo of the {}", - "a cropped photo of a {}", - "a photo of the {}", - "a good photo of the {}", - "a photo of one {}", - "a close-up photo of the {}", - "a rendition of the {}", - "a photo of the clean {}", - "a rendition of a {}", - "a photo of a nice {}", - "a good photo of a {}", - "a photo of the nice {}", - "a photo of the small {}", - "a photo of the weird {}", - "a photo of the large {}", - "a photo of a cool {}", - "a photo of a small {}", -] - -imagenet_style_templates_small = [ - "a painting in the style of {}", - "a rendering in the style of {}", - "a cropped painting in the style of {}", - "the painting in the style of {}", - "a clean painting in the style of {}", - "a dirty painting in the style of {}", - "a dark painting in the style of {}", - "a picture in the style of {}", - "a cool painting in the style of {}", - "a close-up painting in the style of {}", - "a bright painting in the style of {}", - "a cropped painting in the style of {}", - "a good painting in the style of {}", - "a close-up painting in the style of {}", - "a rendition in the style of {}", - "a nice painting in the style of {}", - "a small painting in the style of {}", 
- "a weird painting in the style of {}", - "a large painting in the style of {}", -] - - -class TextualInversionDataset(Dataset): - def __init__( - self, - data_root, - tokenizer, - learnable_property="object", # [object, style] - size=512, - repeats=100, - interpolation="bicubic", - flip_p=0.5, - set="train", - placeholder_token="*", - center_crop=False, - vector_shuffle=False, - progressive_tokens=False, - ): - self.data_root = data_root - self.tokenizer = tokenizer - self.learnable_property = learnable_property - self.size = size - self.placeholder_token = placeholder_token - self.center_crop = center_crop - self.flip_p = flip_p - self.vector_shuffle = vector_shuffle - self.progressive_tokens = progressive_tokens - self.prop_tokens_to_load = 0 - - self.image_paths = [os.path.join(self.data_root, file_path) for file_path in os.listdir(self.data_root)] - - self.num_images = len(self.image_paths) - self._length = self.num_images - - if set == "train": - self._length = self.num_images * repeats - - self.interpolation = { - "linear": PIL_INTERPOLATION["linear"], - "bilinear": PIL_INTERPOLATION["bilinear"], - "bicubic": PIL_INTERPOLATION["bicubic"], - "lanczos": PIL_INTERPOLATION["lanczos"], - }[interpolation] - - self.templates = imagenet_style_templates_small if learnable_property == "style" else imagenet_templates_small - self.flip_transform = transforms.RandomHorizontalFlip(p=self.flip_p) - - def __len__(self): - return self._length - - def __getitem__(self, i): - example = {} - image = Image.open(self.image_paths[i % self.num_images]) - - if not image.mode == "RGB": - image = image.convert("RGB") - - placeholder_string = self.placeholder_token - text = random.choice(self.templates).format(placeholder_string) - - example["input_ids"] = self.tokenizer.encode( - text, - padding="max_length", - truncation=True, - max_length=self.tokenizer.model_max_length, - return_tensors="pt", - vector_shuffle=self.vector_shuffle, - prop_tokens_to_load=self.prop_tokens_to_load if self.progressive_tokens else 1.0, - )[0] - - # default to score-sde preprocessing - img = np.array(image).astype(np.uint8) - - if self.center_crop: - crop = min(img.shape[0], img.shape[1]) - ( - h, - w, - ) = ( - img.shape[0], - img.shape[1], - ) - img = img[(h - crop) // 2 : (h + crop) // 2, (w - crop) // 2 : (w + crop) // 2] - - image = Image.fromarray(img) - image = image.resize((self.size, self.size), resample=self.interpolation) - - image = self.flip_transform(image) - image = np.array(image).astype(np.uint8) - image = (image / 127.5 - 1.0).astype(np.float32) - - example["pixel_values"] = torch.from_numpy(image).permute(2, 0, 1) - return example - - -def main(): - args = parse_args() - logging_dir = os.path.join(args.output_dir, args.logging_dir) - accelerator_project_config = ProjectConfiguration( - total_limit=args.checkpoints_total_limit, project_dir=args.output_dir, logging_dir=logging_dir - ) - - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with=args.report_to, - project_config=accelerator_project_config, - ) - - if args.report_to == "wandb": - if not is_wandb_available(): - raise ImportError("Make sure to install wandb if you want to use it for logging during training.") - import wandb - - # Make one log on every process with the configuration for debugging. 
- logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - logger.info(accelerator.state, main_process_only=False) - if accelerator.is_local_main_process: - transformers.utils.logging.set_verbosity_warning() - diffusers.utils.logging.set_verbosity_info() - else: - transformers.utils.logging.set_verbosity_error() - diffusers.utils.logging.set_verbosity_error() - - # If passed along, set the training seed now. - if args.seed is not None: - set_seed(args.seed) - - # Handle the repository creation - if accelerator.is_main_process: - if args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - if args.push_to_hub: - repo_id = create_repo( - repo_id=args.hub_model_id or Path(args.output_dir).name, exist_ok=True, token=args.hub_token - ).repo_id - - # Load tokenizer - if args.tokenizer_name: - tokenizer = MultiTokenCLIPTokenizer.from_pretrained(args.tokenizer_name) - elif args.pretrained_model_name_or_path: - tokenizer = MultiTokenCLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer") - - # Load scheduler and models - noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler") - text_encoder = CLIPTextModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision - ) - vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision) - unet = UNet2DConditionModel.from_pretrained( - args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision - ) - if is_xformers_available(): - try: - unet.enable_xformers_memory_efficient_attention() - except Exception as e: - logger.warning( - "Could not enable memory efficient attention. Make sure xformers is installed" - f" correctly and a GPU is available: {e}" - ) - add_tokens(tokenizer, text_encoder, args.placeholder_token, args.num_vec_per_token, args.initializer_token) - - # Freeze vae and unet - vae.requires_grad_(False) - unet.requires_grad_(False) - # Freeze all parameters except for the token embeddings in text encoder - text_encoder.text_model.encoder.requires_grad_(False) - text_encoder.text_model.final_layer_norm.requires_grad_(False) - text_encoder.text_model.embeddings.position_embedding.requires_grad_(False) - - if args.gradient_checkpointing: - # Keep unet in train mode if we are using gradient checkpointing to save memory. - # The dropout cannot be != 0 so it doesn't matter if we are in eval or train mode. - unet.train() - text_encoder.gradient_checkpointing_enable() - unet.enable_gradient_checkpointing() - - if args.enable_xformers_memory_efficient_attention: - if is_xformers_available(): - import xformers - - xformers_version = version.parse(xformers.__version__) - if xformers_version == version.parse("0.0.16"): - logger.warn( - "xFormers 0.0.16 cannot be used for training in some GPUs. If you observe problems during training, please update xFormers to at least 0.0.17. See https://huggingface.co/docs/diffusers/main/en/optimization/xformers for more details." - ) - unet.enable_xformers_memory_efficient_attention() - else: - raise ValueError("xformers is not available. 
Make sure it is installed correctly") - - # Enable TF32 for faster training on Ampere GPUs, - # cf https://pytorch.org/docs/stable/notes/cuda.html#tensorfloat-32-tf32-on-ampere-devices - if args.allow_tf32: - torch.backends.cuda.matmul.allow_tf32 = True - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Initialize the optimizer - optimizer = torch.optim.AdamW( - text_encoder.get_input_embeddings().parameters(), # only optimize the embeddings - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - # Dataset and DataLoaders creation: - train_dataset = TextualInversionDataset( - data_root=args.train_data_dir, - tokenizer=tokenizer, - size=args.resolution, - placeholder_token=args.placeholder_token, - repeats=args.repeats, - learnable_property=args.learnable_property, - center_crop=args.center_crop, - set="train", - ) - train_dataloader = torch.utils.data.DataLoader( - train_dataset, batch_size=args.train_batch_size, shuffle=True, num_workers=args.dataloader_num_workers - ) - - # Scheduler and math around the number of training steps. - overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * accelerator.num_processes, - num_training_steps=args.max_train_steps * accelerator.num_processes, - ) - - # Prepare everything with our `accelerator`. - text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - text_encoder, optimizer, train_dataloader, lr_scheduler - ) - - # For mixed precision training we cast the unet and vae weights to half-precision - # as these models are only used for inference, keeping weights in full precision is not required. - weight_dtype = torch.float32 - if accelerator.mixed_precision == "fp16": - weight_dtype = torch.float16 - elif accelerator.mixed_precision == "bf16": - weight_dtype = torch.bfloat16 - - # Move vae and unet to device and cast to weight_dtype - unet.to(accelerator.device, dtype=weight_dtype) - vae.to(accelerator.device, dtype=weight_dtype) - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("textual_inversion", config=vars(args)) - - # Train! 
- total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - global_step = 0 - first_epoch = 0 - - # Potentially load in the weights and states from a previous save - if args.resume_from_checkpoint: - if args.resume_from_checkpoint != "latest": - path = os.path.basename(args.resume_from_checkpoint) - else: - # Get the most recent checkpoint - dirs = os.listdir(args.output_dir) - dirs = [d for d in dirs if d.startswith("checkpoint")] - dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) - path = dirs[-1] if len(dirs) > 0 else None - - if path is None: - accelerator.print( - f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run." - ) - args.resume_from_checkpoint = None - else: - accelerator.print(f"Resuming from checkpoint {path}") - accelerator.load_state(os.path.join(args.output_dir, path)) - global_step = int(path.split("-")[1]) - - resume_global_step = global_step * args.gradient_accumulation_steps - first_epoch = global_step // num_update_steps_per_epoch - resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps) - - # Only show the progress bar once on each machine. - progress_bar = tqdm(range(global_step, args.max_train_steps), disable=not accelerator.is_local_main_process) - progress_bar.set_description("Steps") - - # keep original embeddings as reference - orig_embeds_params = accelerator.unwrap_model(text_encoder).get_input_embeddings().weight.data.clone() - - for epoch in range(first_epoch, args.num_train_epochs): - text_encoder.train() - for step, batch in enumerate(train_dataloader): - # Skip steps until we reach the resumed step - if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step: - if step % args.gradient_accumulation_steps == 0: - progress_bar.update(1) - continue - if args.progressive_tokens: - train_dataset.prop_tokens_to_load = float(global_step) / args.progressive_tokens_max_steps - - with accelerator.accumulate(text_encoder): - # Convert images to latent space - latents = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist.sample().detach() - latents = latents * vae.config.scaling_factor - - # Sample noise that we'll add to the latents - noise = torch.randn_like(latents) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device) - timesteps = timesteps.long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # Get the text embedding for conditioning - encoder_hidden_states = text_encoder(batch["input_ids"])[0].to(dtype=weight_dtype) - - # Predict the noise residual - model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample - - # Get the target for loss depending on the prediction type - if noise_scheduler.config.prediction_type == 
"epsilon": - target = noise - elif noise_scheduler.config.prediction_type == "v_prediction": - target = noise_scheduler.get_velocity(latents, noise, timesteps) - else: - raise ValueError(f"Unknown prediction type {noise_scheduler.config.prediction_type}") - - loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean") - - accelerator.backward(loss) - - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Let's make sure we don't update any embedding weights besides the newly added token - index_no_updates = get_mask(tokenizer, accelerator) - with torch.no_grad(): - accelerator.unwrap_model(text_encoder).get_input_embeddings().weight[ - index_no_updates - ] = orig_embeds_params[index_no_updates] - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - progress_bar.update(1) - global_step += 1 - if global_step % args.save_steps == 0: - save_path = os.path.join(args.output_dir, f"learned_embeds-steps-{global_step}.bin") - save_progress(tokenizer, text_encoder, accelerator, save_path) - - if global_step % args.checkpointing_steps == 0: - if accelerator.is_main_process: - save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") - accelerator.save_state(save_path) - logger.info(f"Saved state to {save_path}") - - logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - accelerator.log(logs, step=global_step) - - if global_step >= args.max_train_steps: - break - - if accelerator.is_main_process and args.validation_prompt is not None and epoch % args.validation_epochs == 0: - logger.info( - f"Running validation... \n Generating {args.num_validation_images} images with prompt:" - f" {args.validation_prompt}." - ) - # create pipeline (note: unet and vae are loaded again in float32) - pipeline = DiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - text_encoder=accelerator.unwrap_model(text_encoder), - tokenizer=tokenizer, - unet=unet, - vae=vae, - revision=args.revision, - torch_dtype=weight_dtype, - ) - pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config) - pipeline = pipeline.to(accelerator.device) - pipeline.set_progress_bar_config(disable=True) - - # run inference - generator = ( - None if args.seed is None else torch.Generator(device=accelerator.device).manual_seed(args.seed) - ) - images = [] - for _ in range(args.num_validation_images): - with torch.autocast("cuda"): - image = pipeline(args.validation_prompt, num_inference_steps=25, generator=generator).images[0] - images.append(image) - - for tracker in accelerator.trackers: - if tracker.name == "tensorboard": - np_images = np.stack([np.asarray(img) for img in images]) - tracker.writer.add_images("validation", np_images, epoch, dataformats="NHWC") - if tracker.name == "wandb": - tracker.log( - { - "validation": [ - wandb.Image(image, caption=f"{i}: {args.validation_prompt}") - for i, image in enumerate(images) - ] - } - ) - - del pipeline - torch.cuda.empty_cache() - - # Create the pipeline using using the trained modules and save it. 
- accelerator.wait_for_everyone() - if accelerator.is_main_process: - if args.push_to_hub and args.only_save_embeds: - logger.warn("Enabling full model saving because --push_to_hub=True was specified.") - save_full_model = True - else: - save_full_model = not args.only_save_embeds - if save_full_model: - pipeline = StableDiffusionPipeline.from_pretrained( - args.pretrained_model_name_or_path, - text_encoder=accelerator.unwrap_model(text_encoder), - vae=vae, - unet=unet, - tokenizer=tokenizer, - ) - pipeline.save_pretrained(args.output_dir) - # Save the newly trained embeddings - save_path = os.path.join(args.output_dir, "learned_embeds.bin") - save_progress(tokenizer, text_encoder, accelerator, save_path) - - if args.push_to_hub: - upload_folder( - repo_id=repo_id, - folder_path=args.output_dir, - commit_message="End of training", - ignore_patterns=["step_*", "epoch_*"], - ) - - accelerator.end_training() - - -if __name__ == "__main__": - main() diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_lms_discrete_flax.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_lms_discrete_flax.py deleted file mode 100644 index f96e602afe121a09876b0ff7db1d3192e441e32a..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_lms_discrete_flax.py +++ /dev/null @@ -1,283 +0,0 @@ -# Copyright 2023 Katherine Crowson and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from dataclasses import dataclass -from typing import Optional, Tuple, Union - -import flax -import jax.numpy as jnp -from scipy import integrate - -from ..configuration_utils import ConfigMixin, register_to_config -from .scheduling_utils_flax import ( - CommonSchedulerState, - FlaxKarrasDiffusionSchedulers, - FlaxSchedulerMixin, - FlaxSchedulerOutput, - broadcast_to_shape_from_left, -) - - -@flax.struct.dataclass -class LMSDiscreteSchedulerState: - common: CommonSchedulerState - - # setable values - init_noise_sigma: jnp.ndarray - timesteps: jnp.ndarray - sigmas: jnp.ndarray - num_inference_steps: Optional[int] = None - - # running values - derivatives: Optional[jnp.ndarray] = None - - @classmethod - def create( - cls, common: CommonSchedulerState, init_noise_sigma: jnp.ndarray, timesteps: jnp.ndarray, sigmas: jnp.ndarray - ): - return cls(common=common, init_noise_sigma=init_noise_sigma, timesteps=timesteps, sigmas=sigmas) - - -@dataclass -class FlaxLMSSchedulerOutput(FlaxSchedulerOutput): - state: LMSDiscreteSchedulerState - - -class FlaxLMSDiscreteScheduler(FlaxSchedulerMixin, ConfigMixin): - """ - Linear Multistep Scheduler for discrete beta schedules. 
Based on the original k-diffusion implementation by - Katherine Crowson: - https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L181 - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. - [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - Args: - num_train_timesteps (`int`): number of diffusion steps used to train the model. - beta_start (`float`): the starting `beta` value of inference. - beta_end (`float`): the final `beta` value. - beta_schedule (`str`): - the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from - `linear` or `scaled_linear`. - trained_betas (`jnp.ndarray`, optional): - option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc. - prediction_type (`str`, default `epsilon`, optional): - prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion - process), `sample` (directly predicting the noisy sample`) or `v_prediction` (see section 2.4 - https://imagen.research.google/video/paper.pdf) - dtype (`jnp.dtype`, *optional*, defaults to `jnp.float32`): - the `dtype` used for params and computation. - """ - - _compatibles = [e.name for e in FlaxKarrasDiffusionSchedulers] - - dtype: jnp.dtype - - @property - def has_state(self): - return True - - @register_to_config - def __init__( - self, - num_train_timesteps: int = 1000, - beta_start: float = 0.0001, - beta_end: float = 0.02, - beta_schedule: str = "linear", - trained_betas: Optional[jnp.ndarray] = None, - prediction_type: str = "epsilon", - dtype: jnp.dtype = jnp.float32, - ): - self.dtype = dtype - - def create_state(self, common: Optional[CommonSchedulerState] = None) -> LMSDiscreteSchedulerState: - if common is None: - common = CommonSchedulerState.create(self) - - timesteps = jnp.arange(0, self.config.num_train_timesteps).round()[::-1] - sigmas = ((1 - common.alphas_cumprod) / common.alphas_cumprod) ** 0.5 - - # standard deviation of the initial noise distribution - init_noise_sigma = sigmas.max() - - return LMSDiscreteSchedulerState.create( - common=common, - init_noise_sigma=init_noise_sigma, - timesteps=timesteps, - sigmas=sigmas, - ) - - def scale_model_input(self, state: LMSDiscreteSchedulerState, sample: jnp.ndarray, timestep: int) -> jnp.ndarray: - """ - Scales the denoising model input by `(sigma**2 + 1) ** 0.5` to match the K-LMS algorithm. - - Args: - state (`LMSDiscreteSchedulerState`): - the `FlaxLMSDiscreteScheduler` state data class instance. - sample (`jnp.ndarray`): - current instance of sample being created by diffusion process. - timestep (`int`): - current discrete timestep in the diffusion chain. - - Returns: - `jnp.ndarray`: scaled input sample - """ - (step_index,) = jnp.where(state.timesteps == timestep, size=1) - step_index = step_index[0] - - sigma = state.sigmas[step_index] - sample = sample / ((sigma**2 + 1) ** 0.5) - return sample - - def get_lms_coefficient(self, state: LMSDiscreteSchedulerState, order, t, current_order): - """ - Compute a linear multistep coefficient. 
- - Args: - order (TODO): - t (TODO): - current_order (TODO): - """ - - def lms_derivative(tau): - prod = 1.0 - for k in range(order): - if current_order == k: - continue - prod *= (tau - state.sigmas[t - k]) / (state.sigmas[t - current_order] - state.sigmas[t - k]) - return prod - - integrated_coeff = integrate.quad(lms_derivative, state.sigmas[t], state.sigmas[t + 1], epsrel=1e-4)[0] - - return integrated_coeff - - def set_timesteps( - self, state: LMSDiscreteSchedulerState, num_inference_steps: int, shape: Tuple = () - ) -> LMSDiscreteSchedulerState: - """ - Sets the timesteps used for the diffusion chain. Supporting function to be run before inference. - - Args: - state (`LMSDiscreteSchedulerState`): - the `FlaxLMSDiscreteScheduler` state data class instance. - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. - """ - - timesteps = jnp.linspace(self.config.num_train_timesteps - 1, 0, num_inference_steps, dtype=self.dtype) - - low_idx = jnp.floor(timesteps).astype(jnp.int32) - high_idx = jnp.ceil(timesteps).astype(jnp.int32) - - frac = jnp.mod(timesteps, 1.0) - - sigmas = ((1 - state.common.alphas_cumprod) / state.common.alphas_cumprod) ** 0.5 - sigmas = (1 - frac) * sigmas[low_idx] + frac * sigmas[high_idx] - sigmas = jnp.concatenate([sigmas, jnp.array([0.0], dtype=self.dtype)]) - - timesteps = timesteps.astype(jnp.int32) - - # initial running values - derivatives = jnp.zeros((0,) + shape, dtype=self.dtype) - - return state.replace( - timesteps=timesteps, - sigmas=sigmas, - num_inference_steps=num_inference_steps, - derivatives=derivatives, - ) - - def step( - self, - state: LMSDiscreteSchedulerState, - model_output: jnp.ndarray, - timestep: int, - sample: jnp.ndarray, - order: int = 4, - return_dict: bool = True, - ) -> Union[FlaxLMSSchedulerOutput, Tuple]: - """ - Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion - process from the learned model outputs (most often the predicted noise). - - Args: - state (`LMSDiscreteSchedulerState`): the `FlaxLMSDiscreteScheduler` state data class instance. - model_output (`jnp.ndarray`): direct output from learned diffusion model. - timestep (`int`): current discrete timestep in the diffusion chain. - sample (`jnp.ndarray`): - current instance of sample being created by diffusion process. - order: coefficient for multi-step inference. - return_dict (`bool`): option for returning tuple rather than FlaxLMSSchedulerOutput class - - Returns: - [`FlaxLMSSchedulerOutput`] or `tuple`: [`FlaxLMSSchedulerOutput`] if `return_dict` is True, otherwise a - `tuple`. When returning a tuple, the first element is the sample tensor. - - """ - if state.num_inference_steps is None: - raise ValueError( - "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler" - ) - - sigma = state.sigmas[timestep] - - # 1. compute predicted original sample (x_0) from sigma-scaled predicted noise - if self.config.prediction_type == "epsilon": - pred_original_sample = sample - sigma * model_output - elif self.config.prediction_type == "v_prediction": - # * c_out + input * c_skip - pred_original_sample = model_output * (-sigma / (sigma**2 + 1) ** 0.5) + (sample / (sigma**2 + 1)) - else: - raise ValueError( - f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, or `v_prediction`" - ) - - # 2. 
Convert to an ODE derivative - derivative = (sample - pred_original_sample) / sigma - state = state.replace(derivatives=jnp.append(state.derivatives, derivative)) - if len(state.derivatives) > order: - state = state.replace(derivatives=jnp.delete(state.derivatives, 0)) - - # 3. Compute linear multistep coefficients - order = min(timestep + 1, order) - lms_coeffs = [self.get_lms_coefficient(state, order, timestep, curr_order) for curr_order in range(order)] - - # 4. Compute previous sample based on the derivatives path - prev_sample = sample + sum( - coeff * derivative for coeff, derivative in zip(lms_coeffs, reversed(state.derivatives)) - ) - - if not return_dict: - return (prev_sample, state) - - return FlaxLMSSchedulerOutput(prev_sample=prev_sample, state=state) - - def add_noise( - self, - state: LMSDiscreteSchedulerState, - original_samples: jnp.ndarray, - noise: jnp.ndarray, - timesteps: jnp.ndarray, - ) -> jnp.ndarray: - sigma = state.sigmas[timesteps].flatten() - sigma = broadcast_to_shape_from_left(sigma, noise.shape) - - noisy_samples = original_samples + noise * sigma - - return noisy_samples - - def __len__(self): - return self.config.num_train_timesteps diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/repaint/__init__.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/repaint/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Andy1621/uniformer_image_detection/configs/vfnet/vfnet_x101_64x4d_fpn_mdconv_c3-c5_mstrain_2x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/vfnet/vfnet_x101_64x4d_fpn_mdconv_c3-c5_mstrain_2x_coco.py deleted file mode 100644 index 2e19078e2830a2fa6dd2d3b703b0bbf711b7e1e4..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/vfnet/vfnet_x101_64x4d_fpn_mdconv_c3-c5_mstrain_2x_coco.py +++ /dev/null @@ -1,16 +0,0 @@ -_base_ = './vfnet_r50_fpn_mdconv_c3-c5_mstrain_2x_coco.py' -model = dict( - pretrained='open-mmlab://resnext101_64x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=64, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch', - dcn=dict(type='DCNv2', deform_groups=1, fallback_on_stride=False), - stage_with_dcn=(False, True, True, True))) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/emanet/emanet_r50-d8_769x769_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/emanet/emanet_r50-d8_769x769_80k_cityscapes.py deleted file mode 100644 index 699aa212c3518901b2f84db3f062c16b023c7538..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/emanet/emanet_r50-d8_769x769_80k_cityscapes.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = [ - '../_base_/models/emanet_r50-d8.py', - '../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py', - '../_base_/schedules/schedule_80k.py' -] -model = dict( - decode_head=dict(align_corners=True), - auxiliary_head=dict(align_corners=True), - test_cfg=dict(mode='slide', crop_size=(769, 769), stride=(513, 513))) diff --git a/spaces/Andyrasika/Andyrasika-lora_diffusion/app.py b/spaces/Andyrasika/Andyrasika-lora_diffusion/app.py deleted file mode 100644 index fc8342aab797271ca55ac55b9f77b5031afdc47c..0000000000000000000000000000000000000000 --- a/spaces/Andyrasika/Andyrasika-lora_diffusion/app.py +++ /dev/null 
@@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Andyrasika/lora_diffusion").launch() \ No newline at end of file diff --git a/spaces/Arthur678/vits-uma-genshin-honkai/commons.py b/spaces/Arthur678/vits-uma-genshin-honkai/commons.py deleted file mode 100644 index 40fcc05364d4815971f5c6f9dbb8dcef8e3ec1e9..0000000000000000000000000000000000000000 --- a/spaces/Arthur678/vits-uma-genshin-honkai/commons.py +++ /dev/null @@ -1,172 +0,0 @@ -import math -import torch -from torch.nn import functional as F -import torch.jit - - -def script_method(fn, _rcb=None): - return fn - - -def script(obj, optimize=True, _frames_up=0, _rcb=None): - return obj - - -torch.jit.script_method = script_method -torch.jit.script = script - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, 
length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/__init__.py b/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/__init__.py deleted file mode 100644 index e3413961d1d184b99835eb1e919b052d70298bc6..0000000000000000000000000000000000000000 --- a/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/__init__.py +++ /dev/null @@ -1,18 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -from .GroundingDINO import build_groundingdino - - -def build_model(args): - # we use register to maintain models from catdet6 on. 
- from .registry import MODULE_BUILD_FUNCS - - assert args.modelname in MODULE_BUILD_FUNCS._module_dict - build_func = MODULE_BUILD_FUNCS.get(args.modelname) - model = build_func(args) - return model diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/connectionpool.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/connectionpool.py deleted file mode 100644 index c23d736b186f50eb723eebbd6dfce281d91c2353..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/connectionpool.py +++ /dev/null @@ -1,1110 +0,0 @@ -from __future__ import absolute_import - -import errno -import logging -import re -import socket -import sys -import warnings -from socket import error as SocketError -from socket import timeout as SocketTimeout - -from .connection import ( - BaseSSLError, - BrokenPipeError, - DummyConnection, - HTTPConnection, - HTTPException, - HTTPSConnection, - VerifiedHTTPSConnection, - port_by_scheme, -) -from .exceptions import ( - ClosedPoolError, - EmptyPoolError, - HeaderParsingError, - HostChangedError, - InsecureRequestWarning, - LocationValueError, - MaxRetryError, - NewConnectionError, - ProtocolError, - ProxyError, - ReadTimeoutError, - SSLError, - TimeoutError, -) -from .packages import six -from .packages.six.moves import queue -from .request import RequestMethods -from .response import HTTPResponse -from .util.connection import is_connection_dropped -from .util.proxy import connection_requires_http_tunnel -from .util.queue import LifoQueue -from .util.request import set_file_position -from .util.response import assert_header_parsing -from .util.retry import Retry -from .util.ssl_match_hostname import CertificateError -from .util.timeout import Timeout -from .util.url import Url, _encode_target -from .util.url import _normalize_host as normalize_host -from .util.url import get_host, parse_url - -xrange = six.moves.xrange - -log = logging.getLogger(__name__) - -_Default = object() - - -# Pool objects -class ConnectionPool(object): - """ - Base class for all connection pools, such as - :class:`.HTTPConnectionPool` and :class:`.HTTPSConnectionPool`. - - .. note:: - ConnectionPool.urlopen() does not normalize or percent-encode target URIs - which is useful if your target server doesn't support percent-encoded - target URIs. - """ - - scheme = None - QueueCls = LifoQueue - - def __init__(self, host, port=None): - if not host: - raise LocationValueError("No host specified.") - - self.host = _normalize_host(host, scheme=self.scheme) - self._proxy_host = host.lower() - self.port = port - - def __str__(self): - return "%s(host=%r, port=%r)" % (type(self).__name__, self.host, self.port) - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_val, exc_tb): - self.close() - # Return False to re-raise any potential exceptions - return False - - def close(self): - """ - Close all pooled connections and disable the pool. - """ - pass - - -# This is taken from http://hg.python.org/cpython/file/7aaba721ebc0/Lib/socket.py#l252 -_blocking_errnos = {errno.EAGAIN, errno.EWOULDBLOCK} - - -class HTTPConnectionPool(ConnectionPool, RequestMethods): - """ - Thread-safe connection pool for one host. - - :param host: - Host used for this HTTP Connection (e.g. "localhost"), passed into - :class:`http.client.HTTPConnection`. 
- - :param port: - Port used for this HTTP Connection (None is equivalent to 80), passed - into :class:`http.client.HTTPConnection`. - - :param strict: - Causes BadStatusLine to be raised if the status line can't be parsed - as a valid HTTP/1.0 or 1.1 status line, passed into - :class:`http.client.HTTPConnection`. - - .. note:: - Only works in Python 2. This parameter is ignored in Python 3. - - :param timeout: - Socket timeout in seconds for each individual connection. This can - be a float or integer, which sets the timeout for the HTTP request, - or an instance of :class:`urllib3.util.Timeout` which gives you more - fine-grained control over request timeouts. After the constructor has - been parsed, this is always a `urllib3.util.Timeout` object. - - :param maxsize: - Number of connections to save that can be reused. More than 1 is useful - in multithreaded situations. If ``block`` is set to False, more - connections will be created but they will not be saved once they've - been used. - - :param block: - If set to True, no more than ``maxsize`` connections will be used at - a time. When no free connections are available, the call will block - until a connection has been released. This is a useful side effect for - particular multithreaded situations where one does not want to use more - than maxsize connections per host to prevent flooding. - - :param headers: - Headers to include with all requests, unless other headers are given - explicitly. - - :param retries: - Retry configuration to use by default with requests in this pool. - - :param _proxy: - Parsed proxy URL, should not be used directly, instead, see - :class:`urllib3.ProxyManager` - - :param _proxy_headers: - A dictionary with proxy headers, should not be used directly, - instead, see :class:`urllib3.ProxyManager` - - :param \\**conn_kw: - Additional parameters are used to create fresh :class:`urllib3.connection.HTTPConnection`, - :class:`urllib3.connection.HTTPSConnection` instances. - """ - - scheme = "http" - ConnectionCls = HTTPConnection - ResponseCls = HTTPResponse - - def __init__( - self, - host, - port=None, - strict=False, - timeout=Timeout.DEFAULT_TIMEOUT, - maxsize=1, - block=False, - headers=None, - retries=None, - _proxy=None, - _proxy_headers=None, - _proxy_config=None, - **conn_kw - ): - ConnectionPool.__init__(self, host, port) - RequestMethods.__init__(self, headers) - - self.strict = strict - - if not isinstance(timeout, Timeout): - timeout = Timeout.from_float(timeout) - - if retries is None: - retries = Retry.DEFAULT - - self.timeout = timeout - self.retries = retries - - self.pool = self.QueueCls(maxsize) - self.block = block - - self.proxy = _proxy - self.proxy_headers = _proxy_headers or {} - self.proxy_config = _proxy_config - - # Fill the queue up so that doing get() on it will block properly - for _ in xrange(maxsize): - self.pool.put(None) - - # These are mostly for testing and debugging purposes. - self.num_connections = 0 - self.num_requests = 0 - self.conn_kw = conn_kw - - if self.proxy: - # Enable Nagle's algorithm for proxies, to avoid packet fragmentation. - # We cannot know if the user has added default socket options, so we cannot replace the - # list. - self.conn_kw.setdefault("socket_options", []) - - self.conn_kw["proxy"] = self.proxy - self.conn_kw["proxy_config"] = self.proxy_config - - def _new_conn(self): - """ - Return a fresh :class:`HTTPConnection`. 
- """ - self.num_connections += 1 - log.debug( - "Starting new HTTP connection (%d): %s:%s", - self.num_connections, - self.host, - self.port or "80", - ) - - conn = self.ConnectionCls( - host=self.host, - port=self.port, - timeout=self.timeout.connect_timeout, - strict=self.strict, - **self.conn_kw - ) - return conn - - def _get_conn(self, timeout=None): - """ - Get a connection. Will return a pooled connection if one is available. - - If no connections are available and :prop:`.block` is ``False``, then a - fresh connection is returned. - - :param timeout: - Seconds to wait before giving up and raising - :class:`urllib3.exceptions.EmptyPoolError` if the pool is empty and - :prop:`.block` is ``True``. - """ - conn = None - try: - conn = self.pool.get(block=self.block, timeout=timeout) - - except AttributeError: # self.pool is None - raise ClosedPoolError(self, "Pool is closed.") - - except queue.Empty: - if self.block: - raise EmptyPoolError( - self, - "Pool reached maximum size and no more connections are allowed.", - ) - pass # Oh well, we'll create a new connection then - - # If this is a persistent connection, check if it got disconnected - if conn and is_connection_dropped(conn): - log.debug("Resetting dropped connection: %s", self.host) - conn.close() - if getattr(conn, "auto_open", 1) == 0: - # This is a proxied connection that has been mutated by - # http.client._tunnel() and cannot be reused (since it would - # attempt to bypass the proxy) - conn = None - - return conn or self._new_conn() - - def _put_conn(self, conn): - """ - Put a connection back into the pool. - - :param conn: - Connection object for the current host and port as returned by - :meth:`._new_conn` or :meth:`._get_conn`. - - If the pool is already full, the connection is closed and discarded - because we exceeded maxsize. If connections are discarded frequently, - then maxsize should be increased. - - If the pool is closed, then the connection will be closed and discarded. - """ - try: - self.pool.put(conn, block=False) - return # Everything is dandy, done. - except AttributeError: - # self.pool is None. - pass - except queue.Full: - # This should never happen if self.block == True - log.warning( - "Connection pool is full, discarding connection: %s. Connection pool size: %s", - self.host, - self.pool.qsize(), - ) - # Connection never got put back into the pool, close it. - if conn: - conn.close() - - def _validate_conn(self, conn): - """ - Called right before a request is made, after the socket is created. - """ - pass - - def _prepare_proxy(self, conn): - # Nothing to do for HTTP connections. - pass - - def _get_timeout(self, timeout): - """Helper that always returns a :class:`urllib3.util.Timeout`""" - if timeout is _Default: - return self.timeout.clone() - - if isinstance(timeout, Timeout): - return timeout.clone() - else: - # User passed us an int/float. This is for backwards compatibility, - # can be removed later - return Timeout.from_float(timeout) - - def _raise_timeout(self, err, url, timeout_value): - """Is the error actually a timeout? Will raise a ReadTimeout or pass""" - - if isinstance(err, SocketTimeout): - raise ReadTimeoutError( - self, url, "Read timed out. (read timeout=%s)" % timeout_value - ) - - # See the above comment about EAGAIN in Python 3. In Python 2 we have - # to specifically catch it and throw the timeout error - if hasattr(err, "errno") and err.errno in _blocking_errnos: - raise ReadTimeoutError( - self, url, "Read timed out. 
(read timeout=%s)" % timeout_value - ) - - # Catch possible read timeouts thrown as SSL errors. If not the - # case, rethrow the original. We need to do this because of: - # http://bugs.python.org/issue10272 - if "timed out" in str(err) or "did not complete (read)" in str( - err - ): # Python < 2.7.4 - raise ReadTimeoutError( - self, url, "Read timed out. (read timeout=%s)" % timeout_value - ) - - def _make_request( - self, conn, method, url, timeout=_Default, chunked=False, **httplib_request_kw - ): - """ - Perform a request on a given urllib connection object taken from our - pool. - - :param conn: - a connection from one of our connection pools - - :param timeout: - Socket timeout in seconds for the request. This can be a - float or integer, which will set the same timeout value for - the socket connect and the socket read, or an instance of - :class:`urllib3.util.Timeout`, which gives you more fine-grained - control over your timeouts. - """ - self.num_requests += 1 - - timeout_obj = self._get_timeout(timeout) - timeout_obj.start_connect() - conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout) - - # Trigger any extra validation we need to do. - try: - self._validate_conn(conn) - except (SocketTimeout, BaseSSLError) as e: - # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout. - self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) - raise - - # conn.request() calls http.client.*.request, not the method in - # urllib3.request. It also calls makefile (recv) on the socket. - try: - if chunked: - conn.request_chunked(method, url, **httplib_request_kw) - else: - conn.request(method, url, **httplib_request_kw) - - # We are swallowing BrokenPipeError (errno.EPIPE) since the server is - # legitimately able to close the connection after sending a valid response. - # With this behaviour, the received response is still readable. - except BrokenPipeError: - # Python 3 - pass - except IOError as e: - # Python 2 and macOS/Linux - # EPIPE and ESHUTDOWN are BrokenPipeError on Python 2, and EPROTOTYPE is needed on macOS - # https://erickt.github.io/blog/2014/11/19/adventures-in-debugging-a-potential-osx-kernel-bug/ - if e.errno not in { - errno.EPIPE, - errno.ESHUTDOWN, - errno.EPROTOTYPE, - }: - raise - - # Reset the timeout for the recv() on the socket - read_timeout = timeout_obj.read_timeout - - # App Engine doesn't have a sock attr - if getattr(conn, "sock", None): - # In Python 3 socket.py will catch EAGAIN and return None when you - # try and read into the file pointer created by http.client, which - # instead raises a BadStatusLine exception. Instead of catching - # the exception and assuming all BadStatusLine exceptions are read - # timeouts, check for a zero timeout before making the request. - if read_timeout == 0: - raise ReadTimeoutError( - self, url, "Read timed out. (read timeout=%s)" % read_timeout - ) - if read_timeout is Timeout.DEFAULT_TIMEOUT: - conn.sock.settimeout(socket.getdefaulttimeout()) - else: # None or a value - conn.sock.settimeout(read_timeout) - - # Receive the response from the server - try: - try: - # Python 2.7, use buffering of HTTP responses - httplib_response = conn.getresponse(buffering=True) - except TypeError: - # Python 3 - try: - httplib_response = conn.getresponse() - except BaseException as e: - # Remove the TypeError from the exception chain in - # Python 3 (including for exceptions like SystemExit). - # Otherwise it looks like a bug in the code. 
- six.raise_from(e, None) - except (SocketTimeout, BaseSSLError, SocketError) as e: - self._raise_timeout(err=e, url=url, timeout_value=read_timeout) - raise - - # AppEngine doesn't have a version attr. - http_version = getattr(conn, "_http_vsn_str", "HTTP/?") - log.debug( - '%s://%s:%s "%s %s %s" %s %s', - self.scheme, - self.host, - self.port, - method, - url, - http_version, - httplib_response.status, - httplib_response.length, - ) - - try: - assert_header_parsing(httplib_response.msg) - except (HeaderParsingError, TypeError) as hpe: # Platform-specific: Python 3 - log.warning( - "Failed to parse headers (url=%s): %s", - self._absolute_url(url), - hpe, - exc_info=True, - ) - - return httplib_response - - def _absolute_url(self, path): - return Url(scheme=self.scheme, host=self.host, port=self.port, path=path).url - - def close(self): - """ - Close all pooled connections and disable the pool. - """ - if self.pool is None: - return - # Disable access to the pool - old_pool, self.pool = self.pool, None - - try: - while True: - conn = old_pool.get(block=False) - if conn: - conn.close() - - except queue.Empty: - pass # Done. - - def is_same_host(self, url): - """ - Check if the given ``url`` is a member of the same host as this - connection pool. - """ - if url.startswith("/"): - return True - - # TODO: Add optional support for socket.gethostbyname checking. - scheme, host, port = get_host(url) - if host is not None: - host = _normalize_host(host, scheme=scheme) - - # Use explicit default port for comparison when none is given - if self.port and not port: - port = port_by_scheme.get(scheme) - elif not self.port and port == port_by_scheme.get(scheme): - port = None - - return (scheme, host, port) == (self.scheme, self.host, self.port) - - def urlopen( - self, - method, - url, - body=None, - headers=None, - retries=None, - redirect=True, - assert_same_host=True, - timeout=_Default, - pool_timeout=None, - release_conn=None, - chunked=False, - body_pos=None, - **response_kw - ): - """ - Get a connection from the pool and perform an HTTP request. This is the - lowest level call for making a request, so you'll need to specify all - the raw details. - - .. note:: - - More commonly, it's appropriate to use a convenience method provided - by :class:`.RequestMethods`, such as :meth:`request`. - - .. note:: - - `release_conn` will only behave as expected if - `preload_content=False` because we want to make - `preload_content=False` the default behaviour someday soon without - breaking backwards compatibility. - - :param method: - HTTP request method (such as GET, POST, PUT, etc.) - - :param url: - The URL to perform the request on. - - :param body: - Data to send in the request body, either :class:`str`, :class:`bytes`, - an iterable of :class:`str`/:class:`bytes`, or a file-like object. - - :param headers: - Dictionary of custom headers to send, such as User-Agent, - If-None-Match, etc. If None, pool headers are used. If provided, - these headers completely replace any pool-specific headers. - - :param retries: - Configure the number of retries to allow before raising a - :class:`~urllib3.exceptions.MaxRetryError` exception. - - Pass ``None`` to retry until you receive a response. Pass a - :class:`~urllib3.util.retry.Retry` object for fine-grained control - over different types of retries. - Pass an integer number to retry connection errors that many times, - but no other types of errors. Pass zero to never retry. - - If ``False``, then retries are disabled and any exception is raised - immediately. 
Also, instead of raising a MaxRetryError on redirects, - the redirect response will be returned. - - :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. - - :param redirect: - If True, automatically handle redirects (status codes 301, 302, - 303, 307, 308). Each redirect counts as a retry. Disabling retries - will disable redirect, too. - - :param assert_same_host: - If ``True``, will make sure that the host of the pool requests is - consistent else will raise HostChangedError. When ``False``, you can - use the pool on an HTTP proxy and request foreign hosts. - - :param timeout: - If specified, overrides the default timeout for this one - request. It may be a float (in seconds) or an instance of - :class:`urllib3.util.Timeout`. - - :param pool_timeout: - If set and the pool is set to block=True, then this method will - block for ``pool_timeout`` seconds and raise EmptyPoolError if no - connection is available within the time period. - - :param release_conn: - If False, then the urlopen call will not release the connection - back into the pool once a response is received (but will release if - you read the entire contents of the response such as when - `preload_content=True`). This is useful if you're not preloading - the response's content immediately. You will need to call - ``r.release_conn()`` on the response ``r`` to return the connection - back into the pool. If None, it takes the value of - ``response_kw.get('preload_content', True)``. - - :param chunked: - If True, urllib3 will send the body using chunked transfer - encoding. Otherwise, urllib3 will send the body using the standard - content-length form. Defaults to False. - - :param int body_pos: - Position to seek to in file-like body in the event of a retry or - redirect. Typically this won't need to be set because urllib3 will - auto-populate the value when needed. - - :param \\**response_kw: - Additional parameters are passed to - :meth:`urllib3.response.HTTPResponse.from_httplib` - """ - - parsed_url = parse_url(url) - destination_scheme = parsed_url.scheme - - if headers is None: - headers = self.headers - - if not isinstance(retries, Retry): - retries = Retry.from_int(retries, redirect=redirect, default=self.retries) - - if release_conn is None: - release_conn = response_kw.get("preload_content", True) - - # Check host - if assert_same_host and not self.is_same_host(url): - raise HostChangedError(self, url, retries) - - # Ensure that the URL we're connecting to is properly encoded - if url.startswith("/"): - url = six.ensure_str(_encode_target(url)) - else: - url = six.ensure_str(parsed_url.url) - - conn = None - - # Track whether `conn` needs to be released before - # returning/raising/recursing. Update this variable if necessary, and - # leave `release_conn` constant throughout the function. That way, if - # the function recurses, the original value of `release_conn` will be - # passed down into the recursive call, and its value will be respected. - # - # See issue #651 [1] for details. - # - # [1] - release_this_conn = release_conn - - http_tunnel_required = connection_requires_http_tunnel( - self.proxy, self.proxy_config, destination_scheme - ) - - # Merge the proxy headers. Only done when not using HTTP CONNECT. We - # have to copy the headers dict so we can safely change it without those - # changes being reflected in anyone else's copy. 
- if not http_tunnel_required: - headers = headers.copy() - headers.update(self.proxy_headers) - - # Must keep the exception bound to a separate variable or else Python 3 - # complains about UnboundLocalError. - err = None - - # Keep track of whether we cleanly exited the except block. This - # ensures we do proper cleanup in finally. - clean_exit = False - - # Rewind body position, if needed. Record current position - # for future rewinds in the event of a redirect/retry. - body_pos = set_file_position(body, body_pos) - - try: - # Request a connection from the queue. - timeout_obj = self._get_timeout(timeout) - conn = self._get_conn(timeout=pool_timeout) - - conn.timeout = timeout_obj.connect_timeout - - is_new_proxy_conn = self.proxy is not None and not getattr( - conn, "sock", None - ) - if is_new_proxy_conn and http_tunnel_required: - self._prepare_proxy(conn) - - # Make the request on the httplib connection object. - httplib_response = self._make_request( - conn, - method, - url, - timeout=timeout_obj, - body=body, - headers=headers, - chunked=chunked, - ) - - # If we're going to release the connection in ``finally:``, then - # the response doesn't need to know about the connection. Otherwise - # it will also try to release it and we'll have a double-release - # mess. - response_conn = conn if not release_conn else None - - # Pass method to Response for length checking - response_kw["request_method"] = method - - # Import httplib's response into our own wrapper object - response = self.ResponseCls.from_httplib( - httplib_response, - pool=self, - connection=response_conn, - retries=retries, - **response_kw - ) - - # Everything went great! - clean_exit = True - - except EmptyPoolError: - # Didn't get a connection from the pool, no need to clean up - clean_exit = True - release_this_conn = False - raise - - except ( - TimeoutError, - HTTPException, - SocketError, - ProtocolError, - BaseSSLError, - SSLError, - CertificateError, - ) as e: - # Discard the connection for these exceptions. It will be - # replaced during the next _get_conn() call. - clean_exit = False - - def _is_ssl_error_message_from_http_proxy(ssl_error): - # We're trying to detect the message 'WRONG_VERSION_NUMBER' but - # SSLErrors are kinda all over the place when it comes to the message, - # so we try to cover our bases here! - message = " ".join(re.split("[^a-z]", str(ssl_error).lower())) - return ( - "wrong version number" in message or "unknown protocol" in message - ) - - # Try to detect a common user error with proxies which is to - # set an HTTP proxy to be HTTPS when it should be 'http://' - # (ie {'http': 'http://proxy', 'https': 'https://proxy'}) - # Instead we add a nice error message and point to a URL. - if ( - isinstance(e, BaseSSLError) - and self.proxy - and _is_ssl_error_message_from_http_proxy(e) - and conn.proxy - and conn.proxy.scheme == "https" - ): - e = ProxyError( - "Your proxy appears to only use HTTP and not HTTPS, " - "try changing your proxy URL to be HTTP. 
See: " - "https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html" - "#https-proxy-error-http-proxy", - SSLError(e), - ) - elif isinstance(e, (BaseSSLError, CertificateError)): - e = SSLError(e) - elif isinstance(e, (SocketError, NewConnectionError)) and self.proxy: - e = ProxyError("Cannot connect to proxy.", e) - elif isinstance(e, (SocketError, HTTPException)): - e = ProtocolError("Connection aborted.", e) - - retries = retries.increment( - method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] - ) - retries.sleep() - - # Keep track of the error for the retry warning. - err = e - - finally: - if not clean_exit: - # We hit some kind of exception, handled or otherwise. We need - # to throw the connection away unless explicitly told not to. - # Close the connection, set the variable to None, and make sure - # we put the None back in the pool to avoid leaking it. - conn = conn and conn.close() - release_this_conn = True - - if release_this_conn: - # Put the connection back to be reused. If the connection is - # expired then it will be None, which will get replaced with a - # fresh connection during _get_conn. - self._put_conn(conn) - - if not conn: - # Try again - log.warning( - "Retrying (%r) after connection broken by '%r': %s", retries, err, url - ) - return self.urlopen( - method, - url, - body, - headers, - retries, - redirect, - assert_same_host, - timeout=timeout, - pool_timeout=pool_timeout, - release_conn=release_conn, - chunked=chunked, - body_pos=body_pos, - **response_kw - ) - - # Handle redirect? - redirect_location = redirect and response.get_redirect_location() - if redirect_location: - if response.status == 303: - method = "GET" - - try: - retries = retries.increment(method, url, response=response, _pool=self) - except MaxRetryError: - if retries.raise_on_redirect: - response.drain_conn() - raise - return response - - response.drain_conn() - retries.sleep_for_retry(response) - log.debug("Redirecting %s -> %s", url, redirect_location) - return self.urlopen( - method, - redirect_location, - body, - headers, - retries=retries, - redirect=redirect, - assert_same_host=assert_same_host, - timeout=timeout, - pool_timeout=pool_timeout, - release_conn=release_conn, - chunked=chunked, - body_pos=body_pos, - **response_kw - ) - - # Check if we should retry the HTTP response. - has_retry_after = bool(response.headers.get("Retry-After")) - if retries.is_retry(method, response.status, has_retry_after): - try: - retries = retries.increment(method, url, response=response, _pool=self) - except MaxRetryError: - if retries.raise_on_status: - response.drain_conn() - raise - return response - - response.drain_conn() - retries.sleep(response) - log.debug("Retry: %s", url) - return self.urlopen( - method, - url, - body, - headers, - retries=retries, - redirect=redirect, - assert_same_host=assert_same_host, - timeout=timeout, - pool_timeout=pool_timeout, - release_conn=release_conn, - chunked=chunked, - body_pos=body_pos, - **response_kw - ) - - return response - - -class HTTPSConnectionPool(HTTPConnectionPool): - """ - Same as :class:`.HTTPConnectionPool`, but HTTPS. - - :class:`.HTTPSConnection` uses one of ``assert_fingerprint``, - ``assert_hostname`` and ``host`` in this order to verify connections. - If ``assert_hostname`` is False, no verification is done. 
- - The ``key_file``, ``cert_file``, ``cert_reqs``, ``ca_certs``, - ``ca_cert_dir``, ``ssl_version``, ``key_password`` are only used if :mod:`ssl` - is available and are fed into :meth:`urllib3.util.ssl_wrap_socket` to upgrade - the connection socket into an SSL socket. - """ - - scheme = "https" - ConnectionCls = HTTPSConnection - - def __init__( - self, - host, - port=None, - strict=False, - timeout=Timeout.DEFAULT_TIMEOUT, - maxsize=1, - block=False, - headers=None, - retries=None, - _proxy=None, - _proxy_headers=None, - key_file=None, - cert_file=None, - cert_reqs=None, - key_password=None, - ca_certs=None, - ssl_version=None, - assert_hostname=None, - assert_fingerprint=None, - ca_cert_dir=None, - **conn_kw - ): - - HTTPConnectionPool.__init__( - self, - host, - port, - strict, - timeout, - maxsize, - block, - headers, - retries, - _proxy, - _proxy_headers, - **conn_kw - ) - - self.key_file = key_file - self.cert_file = cert_file - self.cert_reqs = cert_reqs - self.key_password = key_password - self.ca_certs = ca_certs - self.ca_cert_dir = ca_cert_dir - self.ssl_version = ssl_version - self.assert_hostname = assert_hostname - self.assert_fingerprint = assert_fingerprint - - def _prepare_conn(self, conn): - """ - Prepare the ``connection`` for :meth:`urllib3.util.ssl_wrap_socket` - and establish the tunnel if proxy is used. - """ - - if isinstance(conn, VerifiedHTTPSConnection): - conn.set_cert( - key_file=self.key_file, - key_password=self.key_password, - cert_file=self.cert_file, - cert_reqs=self.cert_reqs, - ca_certs=self.ca_certs, - ca_cert_dir=self.ca_cert_dir, - assert_hostname=self.assert_hostname, - assert_fingerprint=self.assert_fingerprint, - ) - conn.ssl_version = self.ssl_version - return conn - - def _prepare_proxy(self, conn): - """ - Establishes a tunnel connection through HTTP CONNECT. - - Tunnel connection is established early because otherwise httplib would - improperly set Host: header to proxy's IP:port. - """ - - conn.set_tunnel(self._proxy_host, self.port, self.proxy_headers) - - if self.proxy.scheme == "https": - conn.tls_in_tls_required = True - - conn.connect() - - def _new_conn(self): - """ - Return a fresh :class:`http.client.HTTPSConnection`. - """ - self.num_connections += 1 - log.debug( - "Starting new HTTPS connection (%d): %s:%s", - self.num_connections, - self.host, - self.port or "443", - ) - - if not self.ConnectionCls or self.ConnectionCls is DummyConnection: - raise SSLError( - "Can't connect to HTTPS URL because the SSL module is not available." - ) - - actual_host = self.host - actual_port = self.port - if self.proxy is not None: - actual_host = self.proxy.host - actual_port = self.proxy.port - - conn = self.ConnectionCls( - host=actual_host, - port=actual_port, - timeout=self.timeout.connect_timeout, - strict=self.strict, - cert_file=self.cert_file, - key_file=self.key_file, - key_password=self.key_password, - **self.conn_kw - ) - - return self._prepare_conn(conn) - - def _validate_conn(self, conn): - """ - Called right before a request is made, after the socket is created. - """ - super(HTTPSConnectionPool, self)._validate_conn(conn) - - # Force connect early to allow us to validate the connection. - if not getattr(conn, "sock", None): # AppEngine might not have `.sock` - conn.connect() - - if not conn.is_verified: - warnings.warn( - ( - "Unverified HTTPS request is being made to host '%s'. " - "Adding certificate verification is strongly advised. 
See: " - "https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html" - "#ssl-warnings" % conn.host - ), - InsecureRequestWarning, - ) - - if getattr(conn, "proxy_is_verified", None) is False: - warnings.warn( - ( - "Unverified HTTPS connection done to an HTTPS proxy. " - "Adding certificate verification is strongly advised. See: " - "https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html" - "#ssl-warnings" - ), - InsecureRequestWarning, - ) - - -def connection_from_url(url, **kw): - """ - Given a url, return an :class:`.ConnectionPool` instance of its host. - - This is a shortcut for not having to parse out the scheme, host, and port - of the url before creating an :class:`.ConnectionPool` instance. - - :param url: - Absolute URL string that must include the scheme. Port is optional. - - :param \\**kw: - Passes additional parameters to the constructor of the appropriate - :class:`.ConnectionPool`. Useful for specifying things like - timeout, maxsize, headers, etc. - - Example:: - - >>> conn = connection_from_url('http://google.com/') - >>> r = conn.request('GET', '/') - """ - scheme, host, port = get_host(url) - port = port or port_by_scheme.get(scheme, 80) - if scheme == "https": - return HTTPSConnectionPool(host, port=port, **kw) - else: - return HTTPConnectionPool(host, port=port, **kw) - - -def _normalize_host(host, scheme): - """ - Normalize hosts for comparisons and use with sockets. - """ - - host = normalize_host(host, scheme) - - # httplib doesn't like it when we include brackets in IPv6 addresses - # Specifically, if we include brackets but also pass the port then - # httplib crazily doubles up the square brackets on the Host header. - # Instead, we need to make sure we never pass ``None`` as the port. - # However, for backward compatibility reasons we can't actually - # *assert* that. See http://bugs.python.org/issue28539 - if host.startswith("[") and host.endswith("]"): - host = host[1:-1] - return host diff --git a/spaces/AzumaSeren100/XuanShen-Bert-VITS2/modules.py b/spaces/AzumaSeren100/XuanShen-Bert-VITS2/modules.py deleted file mode 100644 index 92e0f32a51c472bfd1659a50a95a95d195281d2b..0000000000000000000000000000000000000000 --- a/spaces/AzumaSeren100/XuanShen-Bert-VITS2/modules.py +++ /dev/null @@ -1,452 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform -from attentions import Encoder - -LRELU_SLOPE = 0.1 - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) 
- - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = 
torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x -class TransformerCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - n_layers, - n_heads, - p_dropout=0, - filter_channels=0, - mean_only=False, - wn_sharing_parameter=None, - gin_channels = 0 - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = Encoder(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = gin_channels) if wn_sharing_parameter is None else wn_sharing_parameter - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/BasToTheMax/TTS/README.md b/spaces/BasToTheMax/TTS/README.md deleted file mode 100644 index 1d5f578fbfafbdb135491df86623fce98e0b1cf7..0000000000000000000000000000000000000000 --- a/spaces/BasToTheMax/TTS/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: TTS -emoji: 🍕 -colorFrom: blue -colorTo: red -sdk: docker -pinned: false -license: other ---- -Hi diff --git a/spaces/Benson/text-generation/Examples/Car Park.md b/spaces/Benson/text-generation/Examples/Car Park.md deleted file mode 100644 index 6f801f6d95cd7d8789f96eb85949ebe64032e35e..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Car Park.md +++ /dev/null @@ -1,148 +0,0 @@ -
    -

Car Parking: Tips, Tricks, Types and Benefits

Parking is an essential skill for any driver, but it can also be a source of frustration and stress. Whether you are looking for a spot in a crowded city, trying to fit your vehicle into a tight space, or using new technology to park your car, you need to know a few tips and tricks to make your parking experience easier and safer. In this article, we will explore the different types of parking systems, their advantages and disadvantages, and how to use them effectively. We will also share some useful advice on how to park in a car park, whether perpendicular, angled, or parallel.

Introduction

What is parking and why is it important?

Parking is the act of placing a vehicle in a designated area for a period of time. Parking can be done on the street, in a garage, in a lot, or in a parking structure. Parking matters for several reasons:

car park

DOWNLOAD: https://bltlly.com/2v6JuE

• It helps reduce traffic congestion and pollution by minimizing the number of vehicles on the road.
• It provides convenience and accessibility for drivers and passengers who need to reach their destinations.
• It keeps vehicles and their owners safe by preventing theft, vandalism, or damage.
• It generates revenue for the businesses, municipalities, or operators that charge fees for parking services.

What are the common parking challenges and solutions?

Parking can also pose challenges for drivers, especially in urban areas where space is limited and demand is high. Some of the common parking challenges are:

• Shortage of parking spaces: there may not be enough spaces available for the number of vehicles that need them.
• High parking fees: the cost of parking may be too expensive for some drivers or deter them from visiting certain areas.
• Parking violations: some drivers may park illegally or incorrectly, blocking other vehicles or pedestrians.

Fortunately, there are solutions that can help address these challenges, such as:

• Parking management: planning, regulating, and enforcing parking policies and practices to optimize the use of existing spaces and resources.
• Parking guidance: providing drivers with information and directions about the availability and location of parking spaces.
• Parking technology: using innovative systems and devices to automate, simplify, or improve the parking process.

Car Parking Tips and Tricks

How to park in a car park

Parking in a car park can be tricky if you do not know how to maneuver your vehicle properly. Here are some tips and tricks, depending on the type of space:

Perpendicular parking

This is when you park your vehicle at a right angle to the curb or wall. To do this:

1. Line up your car's bumper with the first line of the parking space. Keep your vehicle as far as possible from the opposite side so you have more room to turn.
2. Ease off the brakes and gradually turn the steering wheel toward the space.
3. Keep turning until your car is aligned with the space. Make sure you do not hit the curb or the other vehicles.
4. Straighten the wheels and move forward slowly until the car is fully inside the space. Leave enough room for you and the other drivers to open the doors.
5. Put your car in park and turn off the engine. You have parked perpendicular correctly.

Angle parking

This is when you park your vehicle at an angle to the curb or wall. To do this:

1. Ease off the brakes and gently turn the steering wheel toward the space.
2. Keep turning until your car is parallel with the lines of the space. Make sure you do not overshoot or undershoot the space.
3. Straighten the wheels and move forward slowly until the car is fully inside the space. Leave enough room for you and the other drivers to open the doors.
4. Put your car in park and turn off the engine. You have successfully parked at an angle.

Parallel parking

This is when you park your vehicle parallel to the curb or wall. To do this:

1. Find a parking space that is big enough for your car. Ideally, it should be at least one and a half times the length of your car.
2. Pull up alongside the car in front of the space. Align your rear bumper with its rear bumper and leave about two feet of space between your cars.
3. Put your car in reverse and back up slowly. Turn the steering wheel all the way to the right (or left, depending on which side you are parking on).
4. Keep backing up until your front bumper is past the rear bumper of the car in front of you. Make sure you do not hit that car or the curb.
5. Quickly turn the steering wheel all the way to the left (or right, depending on which side you are parking on).
6. Keep backing up until your car is parallel to the curb or wall. Make sure you do not hit the car behind you or go too far back.
7. Straighten your wheels and adjust your position if necessary. Leave enough room for you and the other drivers to get in and out of your cars.
8. Put your car in park and turn off the engine. You have parallel parked your car.

How to use automatic parking systems

Fully automated APS

This type of APS can park your car without any input from you. All you need to do is drive your car into a designated area, get out, and activate the system with a card, a code, or a smartphone app. The system will scan your car, lift it, and transport it to an available parking space using a conveyor, a robotic arm, or a shuttle. When you want to retrieve your car, you simply activate the system again with the same method, and it will bring your car back to you within a few minutes.

Semi-automated APS

This type of APS can help you park by providing guidance, control, or assistance. You still have to drive your car into a designated area, but then you can choose one of these options:

Lift-and-slide puzzle parking system

This system uses a hydraulic lift and a sliding platform to move and store vehicles in a grid-like arrangement. You park your car on an empty platform, get out, and activate the system with a card, a code, or a smartphone app. The system then lifts and slides your car to an available parking space. When you want to retrieve your car, you activate the system again with the same method, and it lifts and slides your car back to you.

Pit parking system

This system uses a vertical lift and a horizontal turntable to move and store vehicles in an underground pit. You park your car on an empty turntable, get out, and activate the system with a card, a code, or a smartphone app. The system then lowers your car into the pit and rotates it to an available parking space. When you want to retrieve your car, you activate the system again with the same method, and it lifts and rotates your car back to you.

Types of parking systems and their benefits

Dependent parking lifts (car stackers)

Semi-automatic parking systems

This type of parking system uses a combination of mechanical and electronic devices to move and store vehicles in a horizontal or vertical arrangement. It can increase parking capacity three to six times, depending on the number of levels and spaces. It is semi-independent, meaning some vehicles can be accessed directly while others may require moving other vehicles first. This type of system is suitable for medium-traffic areas or short-term parking.

Lift-and-slide puzzle parking system

This is a type of semi-automatic parking system that we discussed in the previous section. It has the following benefits:

• It is flexible and adaptable, as it can accommodate different sizes and shapes of vehicles and spaces.
• It is efficient and fast, as it can move vehicles within a few minutes.
• It is secure, as it prevents unauthorized access and damage to vehicles.

Pit parking system

This is another type of semi-automatic parking system that we discussed in the previous section. It has the following benefits:

• It is space-saving and aesthetic, as it hides the vehicles underground and preserves the landscape.
• It is eco-friendly and energy-saving, as it reduces emissions and noise pollution and uses less electricity.
• It is reliable and durable, as it has a low maintenance cost and a long service life.

Fully automatic parking systems

This type of parking system uses a fully automated process to move and store vehicles in a horizontal or vertical arrangement. It can increase parking capacity six to ten times, depending on the number of levels and spaces. It is independent, meaning any vehicle can be accessed at any time without moving other vehicles. This type of system is suitable for high-traffic areas or premium parking.

Rotary parking system

This type of fully automatic parking system uses a rotating platform to move and store vehicles in a circular arrangement. It has the following benefits:

• It is simple and convenient, as it requires minimal space and operation.
• It is economical and affordable, as it has a low installation cost and a high return on investment.
• It is fun and eye-catching, as it adds a unique feature to the building or the area.

Tower parking system

This type of fully automatic parking system uses a vertical lift to move and store vehicles in a tower-like structure. It has the following benefits:

• It is innovative and advanced, as it uses cutting-edge technology and design.
• It is spacious and comfortable, as it provides ample room for each vehicle and eliminates human error.
• It is smart and intelligent, as it can monitor and control the status and performance of the car park.

Conclusion

Summary of the main points

In conclusion, parking is an important skill that every driver should master. It can help you save time, money, and energy, as well as avoid accidents, fines, or stress. In this article, we have covered some tips and tricks on how to park in a car park, whether perpendicular, angled, or parallel. We have also introduced several types of parking systems, their advantages and disadvantages, and how to use them effectively. We hope this article has been informative and helpful for you.

Call to action

If you want to learn more about car parking or find the best parking solutions for your needs, visit our website or contact us today. We are experts in parking systems and can offer you professional advice, installation, maintenance, and support. We look forward to hearing from you soon!

Frequently asked questions

1. What are the benefits of using automatic parking systems?

Automatic parking systems can offer many benefits for drivers, such as:

• They can save space, time, and energy by optimizing the use of existing parking areas and reducing the need for human intervention.
• They can improve safety and security by preventing accidents, theft, vandalism, or damage to vehicles.
• They can add convenience and comfort by simplifying the parking process and eliminating the hassle of finding a spot.

2. What are the disadvantages of using automatic parking systems?

Automatic parking systems can also have some drawbacks, such as:

• They can be expensive and complex to install, operate, and maintain.
• They can be prone to technical failures or malfunctions that affect the performance or availability of the car park.
• They can be incompatible or inaccessible for some vehicles or drivers who lack the required technology or equipment.

3. How can I improve my parking skills?

Parking skills can be improved by practicing regularly and following a few tips, such as:

• Adjust your mirrors and seat so you have a clear view of your surroundings.
• Use your indicators and check your blind spots before turning or changing lanes.
• Park in a well-lit, secure area that suits the size and type of your vehicle.
• Use reference points and guide lines to align your vehicle with the parking space.
• Leave enough space between your car and the other vehicles or obstacles.

4. What parking etiquette rules should I follow?

Parking etiquette rules are unwritten norms that help you respect other drivers and avoid conflicts, such as:

• Park within the lines of the parking space and do not take up more than one space.
• Do not park in reserved, disabled, or emergency spaces unless you are authorized to do so.
• Do not honk, rev your engine, or play loud music while parking or waiting for a space.
• Do not leave valuables or litter in your car or on the ground.

5. Where can I find more information about car parking?

You can find more information about car parking by visiting our website or contacting us today. We have a wealth of resources and experts who can help with any car parking question or need. We can also provide you with the best parking solutions for your specific situation. We are happy to help in any way we can!
      -
      -
      \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Destino Final Mod Apk.md b/spaces/Benson/text-generation/Examples/Descargar Destino Final Mod Apk.md deleted file mode 100644 index 1703f8c9610763d923be4dc0eeefafd2b515bb99..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Destino Final Mod Apk.md +++ /dev/null @@ -1,58 +0,0 @@ - -

      Descargar Final Destiny Mod APK: Un juego de aventura de fantasía

      -

      Si estás buscando un emocionante e inmersivo juego de aventura de fantasía, deberías probar Final Destiny. Este juego te llevará a un mundo donde tú y una guerrera tienen que luchar contra varios enemigos y peligros después de rescatar a un bebé. También podrá disfrutar de los impresionantes gráficos, los controles suaves y la banda sonora épica de este juego. ¿Pero qué pasa si quieres divertirte más y ser más cómodo mientras juegas a Final Destiny? Bueno, se puede descargar Final Destiny Mod APK y obtener acceso a dinero ilimitado, modo dios, y no hay anuncios. En este artículo, te diremos todo lo que necesitas saber sobre Final Destiny y su versión mod apk.

      -

      ¿Qué es el destino final?

      -

      Final Destiny es un juego de acción y aventura desarrollado por YEMA y lanzado en 2020. Tiene más de 1 millón de descargas en Google Play Store y una calificación de 4.4 de 5 estrellas. El juego es compatible con dispositivos Android con la versión 4.4 o superior.

      -

      descargar destino final mod apk


      Download Ziphttps://bltlly.com/2v6MfK



      -

      La historia del destino final

      -

      El juego comienza con un evento misterioso que causa una gran explosión en el cielo. Eres una guerrera que está cerca de la escena y es testigo del desastre. También encuentras una niña que milagrosamente sobrevive a la explosión. Decides llevarla contigo y protegerla de los peligros que acechan en este mundo caótico. En el camino, te encontrarás con muchos enemigos, como monstruos, robots, zombies y alienígenas. También descubrirá los secretos detrás de la explosión y el origen de la niña.

      -

      El juego de Final Destiny

      - -

      Las características de Final Destiny

      -

      Algunas de las características que hacen de Final Destiny un juego increíble son:

      -
        -
      • Impresionantes gráficos y animaciones que crean un mundo de fantasía realista e inmersivo.
      • -
      • Controles suaves y física que le permiten realizar varias acciones y movimientos.
      • -
      • Banda sonora épica y efectos de sonido que mejoran la atmósfera y el estado de ánimo del juego.
      • -
      • Diversos enemigos y jefes que desafían tus habilidades y estrategias.
      • -
      • Múltiples armas y habilidades que se adaptan a sus preferencias y estilo de juego.
      • -
      • Varios trajes y accesorios que te permiten personalizar el aspecto de tu personaje.
      • -
      • Interfaz de usuario simple e intuitiva que hace que el juego sea fácil de navegar y jugar.
      • -
      -

      ¿Por qué descargar Final Destiny Mod APK?

      -

      Aunque Final Destiny es un juego divertido y emocionante, también tiene algunas limitaciones y desventajas que pueden afectar tu experiencia de juego. Por ejemplo, puedes quedarte sin dinero o gemas para mejorar tus armas o habilidades. También puedes encontrar algunas etapas o niveles demasiado difíciles o frustrantes para completarlos. También puedes enojarte con los anuncios que aparecen de vez en cuando. Es por eso que es posible que desee descargar Final Destiny Mod APK en lugar de la versión original. Esta versión apk mod le dará algunas ventajas y beneficios que harán que su experiencia de juego más agradable y conveniente.

      -

      Dinero ilimitado

      -

      Con Final Destiny Mod APK, usted tendrá dinero ilimitado en su cuenta. Esto significa que puede comprar cualquier arma o habilidad que desee sin preocuparse por el costo o la disponibilidad. También puede actualizar sus armas y habilidades al máximo nivel sin ningún tipo de molestia. Esto hará que tu personaje sea más poderoso y capaz de derrotar a cualquier enemigo o jefe.

      -

      Modo de Dios

      - -

      No hay anuncios

      -

      Con Final Destiny Mod APK, también se deshará de los molestos anuncios que interrumpen su juego. No verás ningún banner, pop-ups o videos que intenten venderte algo o hacerte ver algo. Tampoco tendrá que esperar a que ningún temporizador o cuenta atrás para reanudar su juego. Esto hará que tu juego sea más ininterrumpido y agradable, ya que puedes jugar sin distracciones ni retrasos.

      -

      ¿Cómo descargar e instalar Final Destiny Mod APK?

      -

      Si está interesado en descargar e instalar Final Destiny Mod APK, puede seguir estos sencillos pasos:

      -

      Paso 1: Descargar el archivo apk mod de una fuente de confianza

      -

      Lo primero que tienes que hacer es encontrar un sitio web confiable y seguro que ofrece el archivo apk mod de Final Destiny. Puede buscarlo en Google o utilizar el enlace que proporcionamos a continuación. Asegúrese de que el sitio web sea seguro y tenga comentarios positivos de otros usuarios. Evite descargar de fuentes desconocidas o sospechosas que puedan contener virus o malware.

      -

      Una vez que encuentre el sitio web, haga clic en el botón de descarga y espere a que el archivo se descargue en su dispositivo. El tamaño del archivo es de unos 100 MB, así que asegúrese de tener suficiente espacio de almacenamiento y una conexión a Internet estable.

      -

      -

      Paso 2: Habilitar fuentes desconocidas en el dispositivo

      -

      Lo siguiente que debe hacer es habilitar fuentes desconocidas en su dispositivo. Esta es una configuración de seguridad que le permite instalar aplicaciones desde fuentes distintas de Google Play Store. Para hacer esto, vaya a la configuración del dispositivo y busque la opción de seguridad o privacidad. Luego, busque la opción de fuentes desconocidas y conéctela. Puede ver un mensaje de advertencia que le informa sobre los riesgos de instalar aplicaciones desde fuentes desconocidas. Simplemente ignórelo y confirme su elección.

      -

      Paso 3: Instalar el archivo apk mod y disfrutar del juego

      - -

      Una vez realizada la instalación, puedes abrir el juego y empezar a jugar con dinero ilimitado, modo dios y sin anuncios. ¡Diviértete!

      -

      Conclusión

      -

      Final Destiny es un juego de aventura de fantasía que te llevará a un mundo donde tendrás que luchar contra varios enemigos y peligros después de rescatar a una niña. Usted podrá disfrutar de los gráficos impresionantes, los controles suaves, y la banda sonora épica de este juego. Pero si quieres tener más diversión y comodidad mientras juegas Final Destiny, puedes descargar Final Destiny Mod APK y obtener acceso a dinero ilimitado, modo dios, y sin anuncios. Esta versión apk mod hará que su experiencia de juego más agradable y conveniente.

      -

We hope this article has helped you learn everything you need to know about Final Destiny and its mod apk version. If you have any questions or feedback, feel free to leave a comment below. Thanks for reading!

      -

Frequently asked questions

      -
        -
• Is it safe to download and install Final Destiny Mod APK?
      • -

Yes, Final Destiny Mod APK is safe to download and install as long as you get it from a trusted source. We have tested it ourselves and found no viruses or malware in it. However, we still recommend scanning it with an antivirus app before installing it.

        -
• Is Final Destiny Mod APK compatible with my device?
      • -

Final Destiny Mod APK is compatible with Android devices running version 4.4 or higher. However, some devices may not support certain game features or functions due to hardware limitations or software issues.

        -
• Can I play Final Destiny Mod APK online with other players?
      • -

No, Final Destiny Mod APK is not an online game. It is a single-player game that can be played without an Internet connection. You can also run it in an emulator on your PC or laptop.

        -
• Can I update Final Destiny Mod APK to the latest version?
      • - -
• Can I use Final Destiny Mod APK together with the original version of the game?
      • -

No, you cannot use Final Destiny Mod APK together with the original version of the game. You have to uninstall the original game before installing the mod apk file; otherwise, you may run into errors or conflicts that prevent the game from running properly.

        -

      64aa2da5cf
      -
      -
      \ No newline at end of file diff --git a/spaces/BernardoOlisan/vqganclip/taming-transformers/taming/modules/misc/coord.py b/spaces/BernardoOlisan/vqganclip/taming-transformers/taming/modules/misc/coord.py deleted file mode 100644 index ee69b0c897b6b382ae673622e420f55e494f5b09..0000000000000000000000000000000000000000 --- a/spaces/BernardoOlisan/vqganclip/taming-transformers/taming/modules/misc/coord.py +++ /dev/null @@ -1,31 +0,0 @@ -import torch - -class CoordStage(object): - def __init__(self, n_embed, down_factor): - self.n_embed = n_embed - self.down_factor = down_factor - - def eval(self): - return self - - def encode(self, c): - """fake vqmodel interface""" - assert 0.0 <= c.min() and c.max() <= 1.0 - b,ch,h,w = c.shape - assert ch == 1 - - c = torch.nn.functional.interpolate(c, scale_factor=1/self.down_factor, - mode="area") - c = c.clamp(0.0, 1.0) - c = self.n_embed*c - c_quant = c.round() - c_ind = c_quant.to(dtype=torch.long) - - info = None, None, c_ind - return c_quant, None, info - - def decode(self, c): - c = c/self.n_embed - c = torch.nn.functional.interpolate(c, scale_factor=self.down_factor, - mode="nearest") - return c diff --git a/spaces/BetterAPI/BetterChat/src/styles/main.css b/spaces/BetterAPI/BetterChat/src/styles/main.css deleted file mode 100644 index 6ea57c50974dab960f23ce8440bfd576f10ddb52..0000000000000000000000000000000000000000 --- a/spaces/BetterAPI/BetterChat/src/styles/main.css +++ /dev/null @@ -1,17 +0,0 @@ -@import "./highlight-js.css"; - -@tailwind base; -@tailwind components; -@tailwind utilities; - -@layer components { - .btn { - @apply inline-flex flex-shrink-0 cursor-pointer select-none items-center justify-center whitespace-nowrap outline-none transition-all focus:ring disabled:cursor-default; - } -} - -@layer utilities { - .scrollbar-custom { - @apply scrollbar-thin scrollbar-track-transparent scrollbar-thumb-black/10 scrollbar-thumb-rounded-full scrollbar-w-1 hover:scrollbar-thumb-black/20 dark:scrollbar-thumb-white/10 dark:hover:scrollbar-thumb-white/20; - } -} diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/complex/cpow.h b/spaces/CVPR/LIVE/thrust/thrust/detail/complex/cpow.h deleted file mode 100644 index 2d6ad051eb18b47cb628a1673e64ba6584d52de8..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/complex/cpow.h +++ /dev/null @@ -1,55 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * Copyright 2013 Filipe RNC Maia - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#include -#include - -namespace thrust { - -template -__host__ __device__ -complex::type> -pow(const complex& x, const complex& y) -{ - typedef typename detail::promoted_numerical_type::type T; - return exp(log(complex(x)) * complex(y)); -} - -template -__host__ __device__ -complex::type> -pow(const complex& x, const T1& y) -{ - typedef typename detail::promoted_numerical_type::type T; - return exp(log(complex(x)) * T(y)); -} - -template -__host__ __device__ -complex::type> -pow(const T0& x, const complex& y) -{ - typedef typename detail::promoted_numerical_type::type T; - // Find `log` by ADL. - using std::log; - return exp(log(T(x)) * complex(y)); -} - -} // end namespace thrust - diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/dependencies_aware_execution_policy.h b/spaces/CVPR/LIVE/thrust/thrust/detail/dependencies_aware_execution_policy.h deleted file mode 100644 index 1806276f9d65c37d24ec7a3c83c6e9d32735117f..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/dependencies_aware_execution_policy.h +++ /dev/null @@ -1,105 +0,0 @@ -/* - * Copyright 2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include -#include - -#if THRUST_CPP_DIALECT >= 2011 - -#include - -#include - -namespace thrust -{ -namespace detail -{ - -template class ExecutionPolicyCRTPBase> -struct dependencies_aware_execution_policy -{ - template - __host__ - thrust::detail::execute_with_dependencies< - ExecutionPolicyCRTPBase, - Dependencies... - > - after(Dependencies&& ...dependencies) const - { - return { capture_as_dependency(THRUST_FWD(dependencies))... }; - } - - template - __host__ - thrust::detail::execute_with_dependencies< - ExecutionPolicyCRTPBase, - Dependencies... - > - after(std::tuple& dependencies) const - { - return { capture_as_dependency(dependencies) }; - } - template - __host__ - thrust::detail::execute_with_dependencies< - ExecutionPolicyCRTPBase, - Dependencies... - > - after(std::tuple&& dependencies) const - { - return { capture_as_dependency(std::move(dependencies)) }; - } - - template - __host__ - thrust::detail::execute_with_dependencies< - ExecutionPolicyCRTPBase, - Dependencies... - > - rebind_after(Dependencies&& ...dependencies) const - { - return { capture_as_dependency(THRUST_FWD(dependencies))... }; - } - - template - __host__ - thrust::detail::execute_with_dependencies< - ExecutionPolicyCRTPBase, - Dependencies... - > - rebind_after(std::tuple& dependencies) const - { - return { capture_as_dependency(dependencies) }; - } - template - __host__ - thrust::detail::execute_with_dependencies< - ExecutionPolicyCRTPBase, - Dependencies... 
- > - rebind_after(std::tuple&& dependencies) const - { - return { capture_as_dependency(std::move(dependencies)) }; - } -}; - -} // end detail -} // end thrust - -#endif // THRUST_CPP_DIALECT >= 2011 - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/unique.h b/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/unique.h deleted file mode 100644 index 2ff23e9d3db060d3284c743a1178b20069026575..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/unique.h +++ /dev/null @@ -1,23 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// this system inherits unique -#include - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/uninitialized_copy.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/uninitialized_copy.h deleted file mode 100644 index a13b18aa8d73dae57b450a0a53e2fb97de2165ea..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/uninitialized_copy.h +++ /dev/null @@ -1,44 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a fill of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// the purpose of this header is to #include the uninitialized_copy.h header -// of the sequential, host, and device systems. It should be #included in any -// code which uses adl to dispatch uninitialized_copy - -#include - -// SCons can't see through the #defines below to figure out what this header -// includes, so we fake it out by specifying all possible files we might end up -// including inside an #if 0. 
-#if 0 -#include -#include -#include -#include -#endif - -#define __THRUST_HOST_SYSTEM_UNINITIALIZED_COPY_HEADER <__THRUST_HOST_SYSTEM_ROOT/detail/uninitialized_copy.h> -#include __THRUST_HOST_SYSTEM_UNINITIALIZED_COPY_HEADER -#undef __THRUST_HOST_SYSTEM_UNINITIALIZED_COPY_HEADER - -#define __THRUST_DEVICE_SYSTEM_UNINITIALIZED_COPY_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/detail/uninitialized_copy.h> -#include __THRUST_DEVICE_SYSTEM_UNINITIALIZED_COPY_HEADER -#undef __THRUST_DEVICE_SYSTEM_UNINITIALIZED_COPY_HEADER - diff --git a/spaces/CVPR/regionclip-demo/detectron2/evaluation/pascal_voc_evaluation.py b/spaces/CVPR/regionclip-demo/detectron2/evaluation/pascal_voc_evaluation.py deleted file mode 100644 index 1d1abcde2f87bb5f103e73cb364aaabbecb6e619..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/evaluation/pascal_voc_evaluation.py +++ /dev/null @@ -1,300 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. - -import logging -import numpy as np -import os -import tempfile -import xml.etree.ElementTree as ET -from collections import OrderedDict, defaultdict -from functools import lru_cache -import torch - -from detectron2.data import MetadataCatalog -from detectron2.utils import comm -from detectron2.utils.file_io import PathManager - -from .evaluator import DatasetEvaluator - - -class PascalVOCDetectionEvaluator(DatasetEvaluator): - """ - Evaluate Pascal VOC style AP for Pascal VOC dataset. - It contains a synchronization, therefore has to be called from all ranks. - - Note that the concept of AP can be implemented in different ways and may not - produce identical results. This class mimics the implementation of the official - Pascal VOC Matlab API, and should produce similar but not identical results to the - official API. - """ - - def __init__(self, dataset_name): - """ - Args: - dataset_name (str): name of the dataset, e.g., "voc_2007_test" - """ - self._dataset_name = dataset_name - meta = MetadataCatalog.get(dataset_name) - - # Too many tiny files, download all to local for speed. - annotation_dir_local = PathManager.get_local_path( - os.path.join(meta.dirname, "Annotations/") - ) - self._anno_file_template = os.path.join(annotation_dir_local, "{}.xml") - self._image_set_path = os.path.join(meta.dirname, "ImageSets", "Main", meta.split + ".txt") - self._class_names = meta.thing_classes - assert meta.year in [2007, 2012], meta.year - self._is_2007 = meta.year == 2007 - self._cpu_device = torch.device("cpu") - self._logger = logging.getLogger(__name__) - - def reset(self): - self._predictions = defaultdict(list) # class name -> list of prediction strings - - def process(self, inputs, outputs): - for input, output in zip(inputs, outputs): - image_id = input["image_id"] - instances = output["instances"].to(self._cpu_device) - boxes = instances.pred_boxes.tensor.numpy() - scores = instances.scores.tolist() - classes = instances.pred_classes.tolist() - for box, score, cls in zip(boxes, scores, classes): - xmin, ymin, xmax, ymax = box - # The inverse of data loading logic in `datasets/pascal_voc.py` - xmin += 1 - ymin += 1 - self._predictions[cls].append( - f"{image_id} {score:.3f} {xmin:.1f} {ymin:.1f} {xmax:.1f} {ymax:.1f}" - ) - - def evaluate(self): - """ - Returns: - dict: has a key "segm", whose value is a dict of "AP", "AP50", and "AP75". 
- """ - all_predictions = comm.gather(self._predictions, dst=0) - if not comm.is_main_process(): - return - predictions = defaultdict(list) - for predictions_per_rank in all_predictions: - for clsid, lines in predictions_per_rank.items(): - predictions[clsid].extend(lines) - del all_predictions - - self._logger.info( - "Evaluating {} using {} metric. " - "Note that results do not use the official Matlab API.".format( - self._dataset_name, 2007 if self._is_2007 else 2012 - ) - ) - - with tempfile.TemporaryDirectory(prefix="pascal_voc_eval_") as dirname: - res_file_template = os.path.join(dirname, "{}.txt") - - aps = defaultdict(list) # iou -> ap per class - for cls_id, cls_name in enumerate(self._class_names): - lines = predictions.get(cls_id, [""]) - - with open(res_file_template.format(cls_name), "w") as f: - f.write("\n".join(lines)) - - for thresh in range(50, 100, 5): - rec, prec, ap = voc_eval( - res_file_template, - self._anno_file_template, - self._image_set_path, - cls_name, - ovthresh=thresh / 100.0, - use_07_metric=self._is_2007, - ) - aps[thresh].append(ap * 100) - - ret = OrderedDict() - mAP = {iou: np.mean(x) for iou, x in aps.items()} - ret["bbox"] = {"AP": np.mean(list(mAP.values())), "AP50": mAP[50], "AP75": mAP[75]} - return ret - - -############################################################################## -# -# Below code is modified from -# https://github.com/rbgirshick/py-faster-rcnn/blob/master/lib/datasets/voc_eval.py -# -------------------------------------------------------- -# Fast/er R-CNN -# Licensed under The MIT License [see LICENSE for details] -# Written by Bharath Hariharan -# -------------------------------------------------------- - -"""Python implementation of the PASCAL VOC devkit's AP evaluation code.""" - - -@lru_cache(maxsize=None) -def parse_rec(filename): - """Parse a PASCAL VOC xml file.""" - with PathManager.open(filename) as f: - tree = ET.parse(f) - objects = [] - for obj in tree.findall("object"): - obj_struct = {} - obj_struct["name"] = obj.find("name").text - obj_struct["pose"] = obj.find("pose").text - obj_struct["truncated"] = int(obj.find("truncated").text) - obj_struct["difficult"] = int(obj.find("difficult").text) - bbox = obj.find("bndbox") - obj_struct["bbox"] = [ - int(bbox.find("xmin").text), - int(bbox.find("ymin").text), - int(bbox.find("xmax").text), - int(bbox.find("ymax").text), - ] - objects.append(obj_struct) - - return objects - - -def voc_ap(rec, prec, use_07_metric=False): - """Compute VOC AP given precision and recall. If use_07_metric is true, uses - the VOC 07 11-point method (default:False). 
- """ - if use_07_metric: - # 11 point metric - ap = 0.0 - for t in np.arange(0.0, 1.1, 0.1): - if np.sum(rec >= t) == 0: - p = 0 - else: - p = np.max(prec[rec >= t]) - ap = ap + p / 11.0 - else: - # correct AP calculation - # first append sentinel values at the end - mrec = np.concatenate(([0.0], rec, [1.0])) - mpre = np.concatenate(([0.0], prec, [0.0])) - - # compute the precision envelope - for i in range(mpre.size - 1, 0, -1): - mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i]) - - # to calculate area under PR curve, look for points - # where X axis (recall) changes value - i = np.where(mrec[1:] != mrec[:-1])[0] - - # and sum (\Delta recall) * prec - ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]) - return ap - - -def voc_eval(detpath, annopath, imagesetfile, classname, ovthresh=0.5, use_07_metric=False): - """rec, prec, ap = voc_eval(detpath, - annopath, - imagesetfile, - classname, - [ovthresh], - [use_07_metric]) - - Top level function that does the PASCAL VOC evaluation. - - detpath: Path to detections - detpath.format(classname) should produce the detection results file. - annopath: Path to annotations - annopath.format(imagename) should be the xml annotations file. - imagesetfile: Text file containing the list of images, one image per line. - classname: Category name (duh) - [ovthresh]: Overlap threshold (default = 0.5) - [use_07_metric]: Whether to use VOC07's 11 point AP computation - (default False) - """ - # assumes detections are in detpath.format(classname) - # assumes annotations are in annopath.format(imagename) - # assumes imagesetfile is a text file with each line an image name - - # first load gt - # read list of images - with PathManager.open(imagesetfile, "r") as f: - lines = f.readlines() - imagenames = [x.strip() for x in lines] - - # load annots - recs = {} - for imagename in imagenames: - recs[imagename] = parse_rec(annopath.format(imagename)) - - # extract gt objects for this class - class_recs = {} - npos = 0 - for imagename in imagenames: - R = [obj for obj in recs[imagename] if obj["name"] == classname] - bbox = np.array([x["bbox"] for x in R]) - difficult = np.array([x["difficult"] for x in R]).astype(np.bool) - # difficult = np.array([False for x in R]).astype(np.bool) # treat all "difficult" as GT - det = [False] * len(R) - npos = npos + sum(~difficult) - class_recs[imagename] = {"bbox": bbox, "difficult": difficult, "det": det} - - # read dets - detfile = detpath.format(classname) - with open(detfile, "r") as f: - lines = f.readlines() - - splitlines = [x.strip().split(" ") for x in lines] - image_ids = [x[0] for x in splitlines] - confidence = np.array([float(x[1]) for x in splitlines]) - BB = np.array([[float(z) for z in x[2:]] for x in splitlines]).reshape(-1, 4) - - # sort by confidence - sorted_ind = np.argsort(-confidence) - BB = BB[sorted_ind, :] - image_ids = [image_ids[x] for x in sorted_ind] - - # go down dets and mark TPs and FPs - nd = len(image_ids) - tp = np.zeros(nd) - fp = np.zeros(nd) - for d in range(nd): - R = class_recs[image_ids[d]] - bb = BB[d, :].astype(float) - ovmax = -np.inf - BBGT = R["bbox"].astype(float) - - if BBGT.size > 0: - # compute overlaps - # intersection - ixmin = np.maximum(BBGT[:, 0], bb[0]) - iymin = np.maximum(BBGT[:, 1], bb[1]) - ixmax = np.minimum(BBGT[:, 2], bb[2]) - iymax = np.minimum(BBGT[:, 3], bb[3]) - iw = np.maximum(ixmax - ixmin + 1.0, 0.0) - ih = np.maximum(iymax - iymin + 1.0, 0.0) - inters = iw * ih - - # union - uni = ( - (bb[2] - bb[0] + 1.0) * (bb[3] - bb[1] + 1.0) - + (BBGT[:, 2] - BBGT[:, 0] + 
1.0) * (BBGT[:, 3] - BBGT[:, 1] + 1.0) - - inters - ) - - overlaps = inters / uni - ovmax = np.max(overlaps) - jmax = np.argmax(overlaps) - - if ovmax > ovthresh: - if not R["difficult"][jmax]: - if not R["det"][jmax]: - tp[d] = 1.0 - R["det"][jmax] = 1 - else: - fp[d] = 1.0 - else: - fp[d] = 1.0 - - # compute precision recall - fp = np.cumsum(fp) - tp = np.cumsum(tp) - rec = tp / float(npos) - # avoid divide by zero in case the first detection matches a difficult - # ground truth - prec = tp / np.maximum(tp + fp, np.finfo(np.float64).eps) - ap = voc_ap(rec, prec, use_07_metric) - - return rec, prec, ap diff --git a/spaces/Carterclear/swarm-agents/README.md b/spaces/Carterclear/swarm-agents/README.md deleted file mode 100644 index 2604e833fd0020b6f4e4178c858a4276b92669e1..0000000000000000000000000000000000000000 --- a/spaces/Carterclear/swarm-agents/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Swarm Agents -emoji: 👁 -colorFrom: red -colorTo: purple -sdk: gradio -sdk_version: 3.28.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: swarm-agents/swarm-agents ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Celestinian/Nora-Inference/README.md b/spaces/Celestinian/Nora-Inference/README.md deleted file mode 100644 index 6912a29c100691d49436a5d731dad0628bf12e6b..0000000000000000000000000000000000000000 --- a/spaces/Celestinian/Nora-Inference/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Nora Inference -emoji: 👁 -colorFrom: pink -colorTo: pink -sdk: gradio -sdk_version: 3.28.3 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Chukwuka/Dog_Breed_ImageWoof/utils.py b/spaces/Chukwuka/Dog_Breed_ImageWoof/utils.py deleted file mode 100644 index c8304c600a86e3497dc439ad16bf1cb46ff1c962..0000000000000000000000000000000000000000 --- a/spaces/Chukwuka/Dog_Breed_ImageWoof/utils.py +++ /dev/null @@ -1,120 +0,0 @@ -import torch -import matplotlib as plt -from torch.utils.data import DataLoader, TensorDataset - -def show_example(img,label): - print('Label: ', classes[label], '('+str(label)+')') - plt.imshow(img.permute(1, 2, 0)) - - -def denormalize(images, means, stds): - means = torch.tensor(means).reshape(1, 3, 1, 1) - stds = torch.tensor(stds).reshape(1, 3, 1, 1) - return images * stds + means - - -def accuracy(out,labels): - _, preds = torch.max(out,dim=1) - total = torch.sum(preds == labels).item()/len(preds) - return torch.tensor(total) - -@torch.inference_mode() -def evaluation(model,val_loader): - model.eval() - results = [model.validation_step(batch) for batch in val_loader] - outputs = model.validation_end_epoch(results) - return outputs - - -def to_device(data, device): - if isinstance(data, (tuple, list)): - return [to_device(x, device) for x in data] - return data.to(device, non_blocking=True) - -class DeviceDataLoader(DataLoader): - def __init__(self, dl, device): - self.dl = dl - self.device = device - - def __iter__(self): - """Yield a batch of data after moving it to device""" - for x in self.dl: - yield to_device(x, self.device) - - def __len__(self): - """Number of batches""" - return len(self.dl) - - -def get_lr(optimizer): - for param_group in optimizer.param_groups: - return param_group['lr'] - -def fit_one_cycle(epochs, max_lr, model, train_loader, val_loader, - weight_decay=0, grad_clip=None, opt_func=torch.optim.SGD): - 
torch.cuda.empty_cache() - history = [] - - # Set up cutom optimizer with weight decay - optimizer = opt_func(model.parameters(), max_lr, weight_decay=weight_decay) - - # Set up one-cycle learning rate scheduler - sched = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr, epochs=epochs, - steps_per_epoch=len(train_loader)) - - for epoch in range(epochs): - # Training Phase - model.train() - train_losses = [] - train_acc = [] - lrs = [] - for batch in train_loader: - loss, acc = model.training_step(batch) - train_losses.append(loss) - train_acc.append(acc) - loss.backward() - - # Gradient clipping - if grad_clip: - nn.utils.clip_grad_value_(model.parameters(), grad_clip) - - optimizer.step() - optimizer.zero_grad() - - # Record & update learning rate - lrs.append(get_lr(optimizer)) - sched.step() - - # Validation phase - result = evaluation(model, val_loader) - result['train_losses'] = torch.stack(train_losses).mean().item() - result['train_acc'] = torch.stack(train_acc).mean().item() - result['lrs'] = lrs - model.epoch_end(epoch, result) - history.append(result) - return history - - -def plot_accuracies(history): - plt.plot([x['val_acc'] for x in history], '-rx') - plt.plot([x['train_acc'] for x in history[1:]], '-bx') - plt.xlabel('epoch') - plt.ylabel('accuracy') - plt.legend(['Validation', 'Training']) - plt.title('Accuracy vs. No. of epochs'); - -def plot_losses(history): - plt.plot([x['val_loss'] for x in history], '-rx') - plt.plot([x['train_losses'] for x in history[1:]], '-bx') - plt.xlabel('epoch') - plt.ylabel('loss') - plt.legend(['Validation', 'Training']) - plt.title('Loss vs. No. of epochs'); - - -def plot_lrs(history): - lrs = np.concatenate([x.get('lrs', []) for x in history]) - plt.plot(lrs) - plt.xlabel('Batch no.') - plt.ylabel('Learning rate') - plt.title('Learning Rate vs. Batch no.'); diff --git a/spaces/Clementapa/orang-outan-image-video-detection/app.py b/spaces/Clementapa/orang-outan-image-video-detection/app.py deleted file mode 100644 index edb78fd4f78f5003feb70d7265305d4570a5f821..0000000000000000000000000000000000000000 --- a/spaces/Clementapa/orang-outan-image-video-detection/app.py +++ /dev/null @@ -1,230 +0,0 @@ -import os -import os.path as osp -from typing import List - -import cv2 -import gradio as gr -import numpy as np -import supervision as sv -import torch -from PIL import Image -from supervision import Color -from ultralytics import YOLO - -MARKDOWN = """ -

      WildGuardian: AI for Orangutan Ecosystem Surveillance 🦧🔍

      - -## About the model 👁️ -This is a demo for my YOLOv8 nano trained for orangutan detection.\\ -The model was trained using only ~1000 images of orangutan [this dataset](https://images.cv/dataset/orangutan-image-classification-dataset) and [this dataset](https://www.kaggle.com/datasets/slothkong/10-monkey-species/data) containing ~1000 images used as background images.\\ -Annotations were obtained using zero shot object detection method GroundingDino.\ - -The full pipeline can be found on my github repository: https://github.com/clementapa/orangutan-image-video-detection. - -## About the orangutans 🦧 -Because to habitat destruction, illicit poaching, and the pet trade, orangutans are in danger of going extinct. Their natural habitat has been significantly reduced by deforestation and the growth of palm oil plantations. Adult orangutans are occasionally sought for their body parts, and they are frequently captured and sold as pets. Climate change and disease are also taking a toll on their populations. Furthermore, it is concerning to note that they are limited to Borneo and Sumatra, two places on Earth. Sustainable practises and conservation initiatives are crucial to preventing the permanent extinction of these amazing animals. - -## AI for good 🌍 -Artificial Intelligence (AI) has unquestionable power in the realm of innovation and technology. Even though artificial intelligence (AI) has frequently been used for commercial advantage, it is important to stress that AI can also be used for more noble purposes, such as protecting the environment and the planet's future. We can build a more promising and sustainable future if we reorient AI's focus from business to improving our planet. -""" - -EXAMPLES = [] - -DEVICE = "cuda" if torch.cuda.is_available() else "cpu" - -YOLO_MODEL = YOLO("train_7best.pt") - -BOX_ANNOTATOR = sv.BoxAnnotator(color=Color.from_hex("#FF00E4")) - - -def annotate( - image_bgr_numpy: Image.Image, - detections: sv.Detections, - annotator: sv.BoxAnnotator, - labels: str, -) -> Image.Image: - thickness = 2 - text_thickness = 1 - text_scale = 1.0 - - height, width, _ = image_bgr_numpy.shape - - thickness_ratio = ((width + height) / 2) / 400 - text_scale_ratio = ((width + height) / 2) / 600 - text_thickness_ratio = ((width + height) / 2) / 400 - - annotator.thickness = int(thickness * thickness_ratio) - annotator.text_scale = float(text_scale * text_scale_ratio) - annotator.text_thickness = int(text_thickness * text_thickness_ratio) - - annotated_bgr_image = annotator.annotate( - scene=image_bgr_numpy, detections=detections, labels=labels - ) - return Image.fromarray(annotated_bgr_image[:, :, ::-1]) - - -def inference_image(image_rgb_pil: Image.Image, confidence: float) -> List[Image.Image]: - output = YOLO_MODEL(image_rgb_pil, imgsz=640, verbose=False)[0] - detections = sv.Detections.from_ultralytics(output) - - detections = detections[detections.confidence >= confidence] - - labels = [ - f"{output.names[class_id]} {confidence:0.2f}" - for _, _, confidence, class_id, _ in detections - ] - - return annotate( - image_bgr_numpy=output.orig_img.copy(), - detections=detections, - annotator=BOX_ANNOTATOR, - labels=labels, - ) - - -def process_frame(frame: np.ndarray, confidence: float) -> np.ndarray: - output = YOLO_MODEL(frame, imgsz=640, verbose=False)[0] - - detections = sv.Detections.from_ultralytics(output) - - detections = detections[detections.confidence >= confidence] - - labels = [ - f"{output.names[class_id]} {confidence:0.2f}" - for _, _, confidence, class_id, 
_ in detections - ] - - thickness = 2 - text_thickness = 1 - text_scale = 1.0 - - height, width, _ = output.orig_img.shape - - thickness_ratio = ((width + height) / 2) / 400 - text_scale_ratio = ((width + height) / 2) / 600 - text_thickness_ratio = ((width + height) / 2) / 400 - - BOX_ANNOTATOR.thickness = int(thickness * thickness_ratio) - BOX_ANNOTATOR.text_scale = float(text_scale * text_scale_ratio) - BOX_ANNOTATOR.text_thickness = int(text_thickness * text_thickness_ratio) - - annotated_frame = BOX_ANNOTATOR.annotate( - scene=output.orig_img.copy(), detections=detections, labels=labels - ) - return annotated_frame - - -def inference_video(path_video, confidence): - path_output_video = "temp.mp4" - video_capture = cv2.VideoCapture(path_video) - - # Check if the video file was successfully opened - if not video_capture.isOpened(): - print("Error: Could not open video file.") - exit() - - frame_width = int(video_capture.get(3)) - frame_height = int(video_capture.get(4)) - frame_rate = int(video_capture.get(5)) - - fourcc = cv2.VideoWriter_fourcc(*"mp4v") # You can change the codec as needed - out = cv2.VideoWriter( - path_output_video, fourcc, frame_rate, (frame_width, frame_height) - ) - - while True: - # Read a frame from the video - ret, frame = video_capture.read() - - # Check if the video has ended - if not ret: - break - - # Do something with the frame (e.g., display it or process it) - # For example, you can display the frame in a window - annotated_frame = process_frame(frame, confidence=confidence) - - out.write(annotated_frame) - - # Release the video capture object and close any open windows - video_capture.release() - out.release() - cv2.destroyAllWindows() - - return path_output_video - - -custom_theme = gr.themes.Soft(primary_hue="green") -with gr.Blocks(theme=custom_theme, css="style.css") as demo: - gr.Markdown(MARKDOWN) - - with gr.Tab("Detect on an image 🖼️"): - with gr.Row(): - with gr.Column(): - input_image = gr.Image( - image_mode="RGB", - sources=["upload", "clipboard"], - type="pil", - ) - example_folder = osp.join( - osp.dirname(__file__), "resources/examples_images" - ) - example_fns = [ - osp.join(example_folder, example) - for example in os.listdir(example_folder) - ] - gr.Examples( - examples=example_fns, - inputs=[input_image], - outputs=[input_image], - cache_examples=False, - label="Examples (click one of the images below to start)", - examples_per_page=10, - ) - confidence_image_slider = gr.Slider( - label="Confidence", minimum=0.1, maximum=1.0, step=0.05, value=0.6 - ) - submit_button_image = gr.Button("Let's find orangutans 🦧 !") - output_image = gr.Image(label="Results", type="pil") - - with gr.Tab("Detect on a video 📹"): - with gr.Row(): - with gr.Column(): - input_video = gr.Video(sources=["upload"]) - example_folder = osp.join( - osp.dirname(__file__), "resources/examples_videos" - ) - example_fns = [ - osp.join(example_folder, example) - for example in os.listdir(example_folder) - ] - gr.Examples( - examples=example_fns, - inputs=[input_video], - outputs=[input_video], - cache_examples=False, - label="Examples (click one of the videos below to start)", - examples_per_page=10, - ) - confidence_video_slider = gr.Slider( - label="Confidence", minimum=0.1, maximum=1.0, step=0.05, value=0.6 - ) - submit_button_video = gr.Button("Let's find orangutans 🦧 !") - output_video = gr.Video(label="Results") - - submit_button_image.click( - inference_image, - inputs=[input_image, confidence_image_slider], - outputs=output_image, - queue=True, - ) - - 
submit_button_video.click( - inference_video, - inputs=[input_video, confidence_video_slider], - outputs=output_video, - queue=True, - ) - -if __name__ == "__main__": - demo.queue(max_size=20, api_open=False).launch() diff --git a/spaces/CofAI/chat.b4/client/css/hljs.css b/spaces/CofAI/chat.b4/client/css/hljs.css deleted file mode 100644 index 1fcf16ba358a7c5d287b1c6e33c3afbfff38f623..0000000000000000000000000000000000000000 --- a/spaces/CofAI/chat.b4/client/css/hljs.css +++ /dev/null @@ -1,68 +0,0 @@ -.hljs { - color: #e9e9f4; - background: #28293629; - border-radius: var(--border-radius-1); - border: 1px solid var(--blur-border); - font-size: 15px; - word-wrap: break-word; - white-space: pre-wrap; -} - -/* style for hljs copy */ -.hljs-copy-wrapper { - position: relative; - overflow: hidden; -} - -.hljs-copy-wrapper:hover .hljs-copy-button, -.hljs-copy-button:focus { - transform: translateX(0); -} - -.hljs-copy-button { - position: absolute; - transform: translateX(calc(100% + 1.125em)); - top: 1em; - right: 1em; - width: 2rem; - height: 2rem; - text-indent: -9999px; - color: #fff; - border-radius: 0.25rem; - border: 1px solid #ffffff22; - background-color: #2d2b57; - background-image: url('data:image/svg+xml;utf-8,'); - background-repeat: no-repeat; - background-position: center; - transition: background-color 200ms ease, transform 200ms ease-out; -} - -.hljs-copy-button:hover { - border-color: #ffffff44; -} - -.hljs-copy-button:active { - border-color: #ffffff66; -} - -.hljs-copy-button[data-copied="true"] { - text-indent: 0; - width: auto; - background-image: none; -} - -.hljs-copy-alert { - clip: rect(0 0 0 0); - clip-path: inset(50%); - height: 1px; - overflow: hidden; - position: absolute; - white-space: nowrap; - width: 1px; -} - -@media (prefers-reduced-motion) { - .hljs-copy-button { - transition: none; - } -} diff --git a/spaces/CofAI/chat/g4f/Provider/Providers/Bard.py b/spaces/CofAI/chat/g4f/Provider/Providers/Bard.py deleted file mode 100644 index 4c37c4b719430031fce41ce49946f0e6ac93d155..0000000000000000000000000000000000000000 --- a/spaces/CofAI/chat/g4f/Provider/Providers/Bard.py +++ /dev/null @@ -1,74 +0,0 @@ -import os, requests, json, browser_cookie3, re, random -from ...typing import sha256, Dict, get_type_hints - -url = 'https://bard.google.com' -model = ['Palm2'] -supports_stream = False -needs_auth = True - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - psid = {cookie.name: cookie.value for cookie in browser_cookie3.chrome( - domain_name='.google.com')}['__Secure-1PSID'] - - formatted = '\n'.join([ - '%s: %s' % (message['role'], message['content']) for message in messages - ]) - prompt = f'{formatted}\nAssistant:' - - proxy = kwargs.get('proxy', False) - if proxy == False: - print('warning!, you did not give a proxy, a lot of countries are banned from Google Bard, so it may not work') - - snlm0e = None - conversation_id = None - response_id = None - choice_id = None - - client = requests.Session() - client.proxies = { - 'http': f'http://{proxy}', - 'https': f'http://{proxy}'} if proxy else None - - client.headers = { - 'authority': 'bard.google.com', - 'content-type': 'application/x-www-form-urlencoded;charset=UTF-8', - 'origin': 'https://bard.google.com', - 'referer': 'https://bard.google.com/', - 'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36', - 'x-same-domain': '1', - 'cookie': f'__Secure-1PSID={psid}' - } - - snlm0e = 
re.search(r'SNlM0e\":\"(.*?)\"', - client.get('https://bard.google.com/').text).group(1) if not snlm0e else snlm0e - - params = { - 'bl': 'boq_assistant-bard-web-server_20230326.21_p0', - '_reqid': random.randint(1111, 9999), - 'rt': 'c' - } - - data = { - 'at': snlm0e, - 'f.req': json.dumps([None, json.dumps([[prompt], None, [conversation_id, response_id, choice_id]])])} - - intents = '.'.join([ - 'assistant', - 'lamda', - 'BardFrontendService' - ]) - - response = client.post(f'https://bard.google.com/_/BardChatUi/data/{intents}/StreamGenerate', - data=data, params=params) - - chat_data = json.loads(response.content.splitlines()[3])[0][2] - if chat_data: - json_chat_data = json.loads(chat_data) - - yield json_chat_data[0][0] - - else: - yield 'error' - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/event.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/event.py deleted file mode 100644 index 5612e818f66fa1fa633b8995b97701700c560b62..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/event.py +++ /dev/null @@ -1,12 +0,0 @@ -import cv2 -import logging -def wait_key(target = None): - key = cv2.waitKey()& 0xFF - if target == None: - return key - if type(target) == str: - target = ord(target) - while key != target: - key = cv2.waitKey()& 0xFF - - logging.debug('Key Pression caught:%s'%(target)) diff --git a/spaces/DDD2222/webui/README.md b/spaces/DDD2222/webui/README.md deleted file mode 100644 index 013d12c9f3a56698056ae1bdbbfb0ec009805237..0000000000000000000000000000000000000000 --- a/spaces/DDD2222/webui/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: Stable Diffusion Web UI -emoji: 🚧 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.9 -app_file: app.py -pinned: false -duplicated_from: camenduru/webui ---- - -## Stable Diffusion Web UI -[https://github.com/AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) - -## Documentation -[https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki) - -## Models License -https://huggingface.co/spaces/CompVis/stable-diffusion-license \ No newline at end of file diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/checkbox.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/checkbox.py deleted file mode 100644 index 08b6478f4b1809cd3de77a174e4bd9534912f090..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/checkbox.py +++ /dev/null @@ -1,134 +0,0 @@ -"""gr.Checkbox() component.""" - -from __future__ import annotations - -from typing import Callable, Literal - -from gradio_client.documentation import document, set_documentation_group -from gradio_client.serializing import BooleanSerializable - -from gradio.components.base import FormComponent, IOComponent, _Keywords -from gradio.events import Changeable, EventListenerMethod, Inputable, Selectable -from gradio.interpretation import NeighborInterpretable - -set_documentation_group("component") - - 
-@document() -class Checkbox( - FormComponent, - Changeable, - Inputable, - Selectable, - IOComponent, - BooleanSerializable, - NeighborInterpretable, -): - """ - Creates a checkbox that can be set to `True` or `False`. - - Preprocessing: passes the status of the checkbox as a {bool} into the function. - Postprocessing: expects a {bool} returned from the function and, if it is True, checks the checkbox. - Examples-format: a {bool} representing whether the box is checked. - Demos: sentence_builder, titanic_survival - """ - - def __init__( - self, - value: bool | Callable = False, - *, - label: str | None = None, - info: str | None = None, - every: float | None = None, - show_label: bool | None = None, - container: bool = True, - scale: int | None = None, - min_width: int = 160, - interactive: bool | None = None, - visible: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - **kwargs, - ): - """ - Parameters: - value: if True, checked by default. If callable, the function will be called whenever the app loads to set the initial value of the component. - label: component name in interface. - info: additional component description. - every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute. - show_label: if True, will display label. - container: If True, will place the component in a container - providing some extra padding around the border. - scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer. - min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first. - interactive: if True, this checkbox can be checked; if False, checking will be disabled. If not provided, this is inferred based on whether the component is used as an input or output. - visible: If False, component will be hidden. - elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles. - elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles. - """ - self.select: EventListenerMethod - """ - Event listener for when the user selects or deselects Checkbox. - Uses event data gradio.SelectData to carry `value` referring to label of checkbox, and `selected` to refer to state of checkbox. - See EventData documentation on how to use this event data. 
- """ - IOComponent.__init__( - self, - label=label, - info=info, - every=every, - show_label=show_label, - container=container, - scale=scale, - min_width=min_width, - interactive=interactive, - visible=visible, - elem_id=elem_id, - elem_classes=elem_classes, - value=value, - **kwargs, - ) - NeighborInterpretable.__init__(self) - - def get_config(self): - return { - "value": self.value, - **IOComponent.get_config(self), - } - - @staticmethod - def update( - value: bool | Literal[_Keywords.NO_VALUE] | None = _Keywords.NO_VALUE, - label: str | None = None, - info: str | None = None, - show_label: bool | None = None, - container: bool | None = None, - scale: int | None = None, - min_width: int | None = None, - interactive: bool | None = None, - visible: bool | None = None, - ): - return { - "label": label, - "info": info, - "show_label": show_label, - "container": container, - "scale": scale, - "min_width": min_width, - "interactive": interactive, - "visible": visible, - "value": value, - "__type__": "update", - } - - def get_interpretation_neighbors(self, x): - return [not x], {} - - def get_interpretation_scores(self, x, neighbors, scores, **kwargs): - """ - Returns: - The first value represents the interpretation score if the input is False, and the second if the input is True. - """ - if x: - return scores[0], None - else: - return None, scores[0] diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/_tensorboard_logger.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/_tensorboard_logger.py deleted file mode 100644 index 87c5e7a53cc6d966936f8bafa84e6c0b1ff476ee..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/_tensorboard_logger.py +++ /dev/null @@ -1,157 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -"""Contains a logger to push training logs to the Hub, using Tensorboard.""" -from pathlib import Path -from typing import TYPE_CHECKING, List, Optional, Union - -from huggingface_hub._commit_scheduler import CommitScheduler - -from .utils import experimental, is_tensorboard_available - - -if is_tensorboard_available(): - from tensorboardX import SummaryWriter - - # TODO: clarify: should we import from torch.utils.tensorboard ? -else: - SummaryWriter = object # Dummy class to avoid failing at import. Will raise on instance creation. - -if TYPE_CHECKING: - from tensorboardX import SummaryWriter - - -class HFSummaryWriter(SummaryWriter): - """ - Wrapper around the tensorboard's `SummaryWriter` to push training logs to the Hub. - - Data is logged locally and then pushed to the Hub asynchronously. Pushing data to the Hub is done in a separate - thread to avoid blocking the training script. In particular, if the upload fails for any reason (e.g. a connection - issue), the main script will not be interrupted. 
Data is automatically pushed to the Hub every `commit_every` - minutes (default to every 5 minutes). - - - - `HFSummaryWriter` is experimental. Its API is subject to change in the future without prior notice. - - - - Args: - repo_id (`str`): - The id of the repo to which the logs will be pushed. - logdir (`str`, *optional*): - The directory where the logs will be written. If not specified, a local directory will be created by the - underlying `SummaryWriter` object. - commit_every (`int` or `float`, *optional*): - The frequency (in minutes) at which the logs will be pushed to the Hub. Defaults to 5 minutes. - repo_type (`str`, *optional*): - The type of the repo to which the logs will be pushed. Defaults to "model". - repo_revision (`str`, *optional*): - The revision of the repo to which the logs will be pushed. Defaults to "main". - repo_private (`bool`, *optional*): - Whether to create a private repo or not. Defaults to False. This argument is ignored if the repo already - exists. - path_in_repo (`str`, *optional*): - The path to the folder in the repo where the logs will be pushed. Defaults to "tensorboard/". - repo_allow_patterns (`List[str]` or `str`, *optional*): - A list of patterns to include in the upload. Defaults to `"*.tfevents.*"`. Check out the - [upload guide](https://huggingface.co/docs/huggingface_hub/guides/upload#upload-a-folder) for more details. - repo_ignore_patterns (`List[str]` or `str`, *optional*): - A list of patterns to exclude in the upload. Check out the - [upload guide](https://huggingface.co/docs/huggingface_hub/guides/upload#upload-a-folder) for more details. - token (`str`, *optional*): - Authentication token. Will default to the stored token. See https://huggingface.co/settings/token for more - details - kwargs: - Additional keyword arguments passed to `SummaryWriter`. - - Examples: - ```py - >>> from huggingface_hub import HFSummaryWriter - - # Logs are automatically pushed every 15 minutes - >>> logger = HFSummaryWriter(repo_id="test_hf_logger", commit_every=15) - >>> logger.add_scalar("a", 1) - >>> logger.add_scalar("b", 2) - ... - - # You can also trigger a push manually - >>> logger.scheduler.trigger() - ``` - - ```py - >>> from huggingface_hub import HFSummaryWriter - - # Logs are automatically pushed every 5 minutes (default) + when exiting the context manager - >>> with HFSummaryWriter(repo_id="test_hf_logger") as logger: - ... logger.add_scalar("a", 1) - ... logger.add_scalar("b", 2) - ``` - """ - - @experimental - def __new__(cls, *args, **kwargs) -> "HFSummaryWriter": - if not is_tensorboard_available(): - raise ImportError( - "You must have `tensorboard` installed to use `HFSummaryWriter`. Please run `pip install --upgrade" - " tensorboardX` first." - ) - return super().__new__(cls) - - def __init__( - self, - repo_id: str, - *, - logdir: Optional[str] = None, - commit_every: Union[int, float] = 5, - repo_type: Optional[str] = None, - repo_revision: Optional[str] = None, - repo_private: bool = False, - path_in_repo: Optional[str] = "tensorboard", - repo_allow_patterns: Optional[Union[List[str], str]] = "*.tfevents.*", - repo_ignore_patterns: Optional[Union[List[str], str]] = None, - token: Optional[str] = None, - **kwargs, - ): - # Initialize SummaryWriter - super().__init__(logdir=logdir, **kwargs) - - # Check logdir has been correctly initialized and fail early otherwise. In practice, SummaryWriter takes care of it. - if not isinstance(self.logdir, str): - raise ValueError(f"`self.logdir` must be a string. 
Got '{self.logdir}' of type {type(self.logdir)}.") - - # Append logdir name to `path_in_repo` - if path_in_repo is None or path_in_repo == "": - path_in_repo = Path(self.logdir).name - else: - path_in_repo = path_in_repo.strip("/") + "/" + Path(self.logdir).name - - # Initialize scheduler - self.scheduler = CommitScheduler( - folder_path=self.logdir, - path_in_repo=path_in_repo, - repo_id=repo_id, - repo_type=repo_type, - revision=repo_revision, - private=repo_private, - token=token, - allow_patterns=repo_allow_patterns, - ignore_patterns=repo_ignore_patterns, - every=commit_every, - ) - - def __exit__(self, exc_type, exc_val, exc_tb): - """Push to hub in a non-blocking way when exiting the logger's context manager.""" - super().__exit__(exc_type, exc_val, exc_tb) - future = self.scheduler.trigger() - future.result() diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/commands/lfs.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/commands/lfs.py deleted file mode 100644 index 77a38d8df0a364ac3472011c127661f157c973a2..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/commands/lfs.py +++ /dev/null @@ -1,202 +0,0 @@ -""" -Implementation of a custom transfer agent for the transfer type "multipart" for -git-lfs. - -Inspired by: -github.com/cbartz/git-lfs-swift-transfer-agent/blob/master/git_lfs_swift_transfer.py - -Spec is: github.com/git-lfs/git-lfs/blob/master/docs/custom-transfers.md - - -To launch debugger while developing: - -``` [lfs "customtransfer.multipart"] -path = /path/to/huggingface_hub/.env/bin/python args = -m debugpy --listen 5678 ---wait-for-client -/path/to/huggingface_hub/src/huggingface_hub/commands/huggingface_cli.py -lfs-multipart-upload ```""" - -import json -import os -import subprocess -import sys -from argparse import _SubParsersAction -from typing import Dict, List, Optional - -from huggingface_hub.commands import BaseHuggingfaceCLICommand -from huggingface_hub.lfs import LFS_MULTIPART_UPLOAD_COMMAND, SliceFileObj - -from ..utils import get_session, hf_raise_for_status, logging - - -logger = logging.get_logger(__name__) - - -class LfsCommands(BaseHuggingfaceCLICommand): - """ - Implementation of a custom transfer agent for the transfer type "multipart" - for git-lfs. This lets users upload large files >5GB 🔥. Spec for LFS custom - transfer agent is: - https://github.com/git-lfs/git-lfs/blob/master/docs/custom-transfers.md - - This introduces two commands to the CLI: - - 1. $ huggingface-cli lfs-enable-largefiles - - This should be executed once for each model repo that contains a model file - >5GB. It's documented in the error message you get if you just try to git - push a 5GB file without having enabled it before. - - 2. $ huggingface-cli lfs-multipart-upload - - This command is called by lfs directly and is not meant to be called by the - user. 
- """ - - @staticmethod - def register_subcommand(parser: _SubParsersAction): - enable_parser = parser.add_parser( - "lfs-enable-largefiles", - help="Configure your repository to enable upload of files > 5GB.", - ) - enable_parser.add_argument("path", type=str, help="Local path to repository you want to configure.") - enable_parser.set_defaults(func=lambda args: LfsEnableCommand(args)) - - upload_parser = parser.add_parser( - LFS_MULTIPART_UPLOAD_COMMAND, - help="Command will get called by git-lfs, do not call it directly.", - ) - upload_parser.set_defaults(func=lambda args: LfsUploadCommand(args)) - - -class LfsEnableCommand: - def __init__(self, args): - self.args = args - - def run(self): - local_path = os.path.abspath(self.args.path) - if not os.path.isdir(local_path): - print("This does not look like a valid git repo.") - exit(1) - subprocess.run( - "git config lfs.customtransfer.multipart.path huggingface-cli".split(), - check=True, - cwd=local_path, - ) - subprocess.run( - f"git config lfs.customtransfer.multipart.args {LFS_MULTIPART_UPLOAD_COMMAND}".split(), - check=True, - cwd=local_path, - ) - print("Local repo set up for largefiles") - - -def write_msg(msg: Dict): - """Write out the message in Line delimited JSON.""" - msg_str = json.dumps(msg) + "\n" - sys.stdout.write(msg_str) - sys.stdout.flush() - - -def read_msg() -> Optional[Dict]: - """Read Line delimited JSON from stdin.""" - msg = json.loads(sys.stdin.readline().strip()) - - if "terminate" in (msg.get("type"), msg.get("event")): - # terminate message received - return None - - if msg.get("event") not in ("download", "upload"): - logger.critical("Received unexpected message") - sys.exit(1) - - return msg - - -class LfsUploadCommand: - def __init__(self, args): - self.args = args - - def run(self): - # Immediately after invoking a custom transfer process, git-lfs - # sends initiation data to the process over stdin. - # This tells the process useful information about the configuration. - init_msg = json.loads(sys.stdin.readline().strip()) - if not (init_msg.get("event") == "init" and init_msg.get("operation") == "upload"): - write_msg({"error": {"code": 32, "message": "Wrong lfs init operation"}}) - sys.exit(1) - - # The transfer process should use the information it needs from the - # initiation structure, and also perform any one-off setup tasks it - # needs to do. It should then respond on stdout with a simple empty - # confirmation structure, as follows: - write_msg({}) - - # After the initiation exchange, git-lfs will send any number of - # transfer requests to the stdin of the transfer process, in a serial sequence. - while True: - msg = read_msg() - if msg is None: - # When all transfers have been processed, git-lfs will send - # a terminate event to the stdin of the transfer process. - # On receiving this message the transfer process should - # clean up and terminate. No response is expected. - sys.exit(0) - - oid = msg["oid"] - filepath = msg["path"] - completion_url = msg["action"]["href"] - header = msg["action"]["header"] - chunk_size = int(header.pop("chunk_size")) - presigned_urls: List[str] = list(header.values()) - - # Send a "started" progress event to allow other workers to start. - # Otherwise they're delayed until first "progress" event is reported, - # i.e. after the first 5GB by default (!) 
- write_msg( - { - "event": "progress", - "oid": oid, - "bytesSoFar": 1, - "bytesSinceLast": 0, - } - ) - - parts = [] - with open(filepath, "rb") as file: - for i, presigned_url in enumerate(presigned_urls): - with SliceFileObj( - file, - seek_from=i * chunk_size, - read_limit=chunk_size, - ) as data: - r = get_session().put(presigned_url, data=data) - hf_raise_for_status(r) - parts.append( - { - "etag": r.headers.get("etag"), - "partNumber": i + 1, - } - ) - # In order to support progress reporting while data is uploading / downloading, - # the transfer process should post messages to stdout - write_msg( - { - "event": "progress", - "oid": oid, - "bytesSoFar": (i + 1) * chunk_size, - "bytesSinceLast": chunk_size, - } - ) - # Not precise but that's ok. - - r = get_session().post( - completion_url, - json={ - "oid": oid, - "parts": parts, - }, - ) - hf_raise_for_status(r) - - write_msg({"event": "complete", "oid": oid}) diff --git a/spaces/DaleChen/AutoGPT/main.py b/spaces/DaleChen/AutoGPT/main.py deleted file mode 100644 index 160addc390b94a8b143a3a2e18991a560f9b032e..0000000000000000000000000000000000000000 --- a/spaces/DaleChen/AutoGPT/main.py +++ /dev/null @@ -1 +0,0 @@ -from autogpt import main diff --git a/spaces/Dao3/Text-To-image-AllModels/app.py b/spaces/Dao3/Text-To-image-AllModels/app.py deleted file mode 100644 index 4c9be36af58c47d288dae6872445bec71e3afa04..0000000000000000000000000000000000000000 --- a/spaces/Dao3/Text-To-image-AllModels/app.py +++ /dev/null @@ -1,44 +0,0 @@ -from diffusers import StableDiffusionPipeline -import torch - -modelieo=['nitrosocke/Arcane-Diffusion', - 'dreamlike-art/dreamlike-diffusion-1.0', - 'nitrosocke/archer-diffusion', - 'Linaqruf/anything-v3.0', - 'nitrosocke/mo-di-diffusion', - 'nitrosocke/classic-anim-diffusion', - 'dallinmackay/Van-Gogh-diffusion', - 'wavymulder/wavyfusion', - 'wavymulder/Analog-Diffusion', - 'nitrosocke/redshift-diffusion', - 'prompthero/midjourney-v4-diffusion', - 'hakurei/waifu-diffusion', - 'DGSpitzer/Cyberpunk-Anime-Diffusion', - 'nitrosocke/elden-ring-diffusion', - 'naclbit/trinart_stable_diffusion_v2', - 'nitrosocke/spider-verse-diffusion', - 'Fictiverse/Stable_Diffusion_BalloonArt_Model', - 'dallinmackay/Tron-Legacy-diffusion', - 'lambdalabs/sd-pokemon-diffusers', - 'AstraliteHeart/pony-diffusion', - 'nousr/robo-diffusion'] - - -def TextToImage(Prompt,model): - model_id = model - pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) - pipe = pipe.to("cpu") - - prompt = Prompt - image = pipe(prompt).images[0] - - return image - - -import gradio as gr -interface = gr.Interface(fn=TextToImage, - inputs=["text", gr.Dropdown(modelieo)], - outputs="image", - title='Text to Image') - -interface.launch() \ No newline at end of file diff --git a/spaces/Datasculptor/DescriptionGPT/tools/preprocess_imagenet22k.py b/spaces/Datasculptor/DescriptionGPT/tools/preprocess_imagenet22k.py deleted file mode 100644 index 6dda56c222a30c7be23fafbdab4be3fe611597e2..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/DescriptionGPT/tools/preprocess_imagenet22k.py +++ /dev/null @@ -1,148 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. 
- -import os -import numpy as np -import sys - -sys.path.insert(0, 'third_party/CenterNet2/projects/CenterNet2/') -sys.path.insert(0, 'third_party/Deformable-DETR') -from detic.data.tar_dataset import _TarDataset, DiskTarDataset -import pickle -import io -import gzip -import time - - -class _RawTarDataset(object): - - def __init__(self, filename, indexname, preload=False): - self.filename = filename - self.names = [] - self.offsets = [] - - for l in open(indexname): - ll = l.split() - a, b, c = ll[:3] - offset = int(b[:-1]) - if l.endswith('** Block of NULs **\n'): - self.offsets.append(offset) - break - else: - if c.endswith('JPEG'): - self.names.append(c) - self.offsets.append(offset) - else: - # ignore directories - pass - if preload: - self.data = np.memmap(filename, mode='r', dtype='uint8') - else: - self.data = None - - def __len__(self): - return len(self.names) - - def __getitem__(self, idx): - if self.data is None: - self.data = np.memmap(self.filename, mode='r', dtype='uint8') - ofs = self.offsets[idx] * 512 - fsize = 512 * (self.offsets[idx + 1] - self.offsets[idx]) - data = self.data[ofs:ofs + fsize] - - if data[:13].tostring() == '././@LongLink': - data = data[3 * 512:] - else: - data = data[512:] - - # just to make it more fun a few JPEGs are GZIP compressed... - # catch this case - if tuple(data[:2]) == (0x1f, 0x8b): - s = io.StringIO(data.tostring()) - g = gzip.GzipFile(None, 'r', 0, s) - sdata = g.read() - else: - sdata = data.tostring() - return sdata - - - -def preprocess(): - # Follow https://github.com/Alibaba-MIIL/ImageNet21K/blob/main/dataset_preprocessing/processing_script.sh - # Expect 12358684 samples with 11221 classes - # ImageNet folder has 21841 classes (synsets) - - i22kdir = '/datasets01/imagenet-22k/062717/' - i22ktarlogs = '/checkpoint/imisra/datasets/imagenet-22k/tarindex' - class_names_file = '/checkpoint/imisra/datasets/imagenet-22k/words.txt' - - output_dir = '/checkpoint/zhouxy/Datasets/ImageNet/metadata-22k/' - i22knpytarlogs = '/checkpoint/zhouxy/Datasets/ImageNet/metadata-22k/tarindex_npy' - print('Listing dir') - log_files = os.listdir(i22ktarlogs) - log_files = [x for x in log_files if x.endswith(".tarlog")] - log_files.sort() - chunk_datasets = [] - dataset_lens = [] - min_count = 0 - create_npy_tarlogs = True - print('Creating folders') - if create_npy_tarlogs: - os.makedirs(i22knpytarlogs, exist_ok=True) - for log_file in log_files: - syn = log_file.replace(".tarlog", "") - dataset = _RawTarDataset(os.path.join(i22kdir, syn + ".tar"), - os.path.join(i22ktarlogs, syn + ".tarlog"), - preload=False) - names = np.array(dataset.names) - offsets = np.array(dataset.offsets, dtype=np.int64) - np.save(os.path.join(i22knpytarlogs, f"{syn}_names.npy"), names) - np.save(os.path.join(i22knpytarlogs, f"{syn}_offsets.npy"), offsets) - - os.makedirs(output_dir, exist_ok=True) - - start_time = time.time() - for log_file in log_files: - syn = log_file.replace(".tarlog", "") - dataset = _TarDataset(os.path.join(i22kdir, syn + ".tar"), i22knpytarlogs) - # dataset = _RawTarDataset(os.path.join(i22kdir, syn + ".tar"), - # os.path.join(i22ktarlogs, syn + ".tarlog"), - # preload=False) - dataset_lens.append(len(dataset)) - end_time = time.time() - print(f"Time {end_time - start_time}") - - - dataset_lens = np.array(dataset_lens) - dataset_valid = dataset_lens > min_count - - syn2class = {} - with open(class_names_file) as fh: - for line in fh: - line = line.strip().split("\t") - syn2class[line[0]] = line[1] - - tarlog_files = [] - class_names = [] - tar_files = [] - 
for k in range(len(dataset_valid)): - if not dataset_valid[k]: - continue - syn = log_files[k].replace(".tarlog", "") - tarlog_files.append(os.path.join(i22ktarlogs, syn + ".tarlog")) - tar_files.append(os.path.join(i22kdir, syn + ".tar")) - class_names.append(syn2class[syn]) - - tarlog_files = np.array(tarlog_files) - tar_files = np.array(tar_files) - class_names = np.array(class_names) - print(f"Have {len(class_names)} classes and {dataset_lens[dataset_valid].sum()} samples") - - np.save(os.path.join(output_dir, "tarlog_files.npy"), tarlog_files) - np.save(os.path.join(output_dir, "tar_files.npy"), tar_files) - np.save(os.path.join(output_dir, "class_names.npy"), class_names) - np.save(os.path.join(output_dir, "tar_files.npy"), tar_files) - - -if __name__ == "__main__": - preprocess() diff --git a/spaces/Dipl0/Dipl0-pepe-diffuser-bot/README.md b/spaces/Dipl0/Dipl0-pepe-diffuser-bot/README.md deleted file mode 100644 index 1c004af580250d0c3261641425c3a7682239a15c..0000000000000000000000000000000000000000 --- a/spaces/Dipl0/Dipl0-pepe-diffuser-bot/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Dipl0 Pepe Diffuser Bot -emoji: 🐨 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DragGan/DragGan-Inversion/PTI/models/e4e/encoders/model_irse.py b/spaces/DragGan/DragGan-Inversion/PTI/models/e4e/encoders/model_irse.py deleted file mode 100644 index 976ce2c61104efdc6b0015d895830346dd01bc10..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/PTI/models/e4e/encoders/model_irse.py +++ /dev/null @@ -1,84 +0,0 @@ -from torch.nn import Linear, Conv2d, BatchNorm1d, BatchNorm2d, PReLU, Dropout, Sequential, Module -from encoder4editing.models.encoders.helpers import get_blocks, Flatten, bottleneck_IR, bottleneck_IR_SE, l2_norm - -""" -Modified Backbone implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch) -""" - - -class Backbone(Module): - def __init__(self, input_size, num_layers, mode='ir', drop_ratio=0.4, affine=True): - super(Backbone, self).__init__() - assert input_size in [112, 224], "input_size should be 112 or 224" - assert num_layers in [50, 100, 152], "num_layers should be 50, 100 or 152" - assert mode in ['ir', 'ir_se'], "mode should be ir or ir_se" - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - if input_size == 112: - self.output_layer = Sequential(BatchNorm2d(512), - Dropout(drop_ratio), - Flatten(), - Linear(512 * 7 * 7, 512), - BatchNorm1d(512, affine=affine)) - else: - self.output_layer = Sequential(BatchNorm2d(512), - Dropout(drop_ratio), - Flatten(), - Linear(512 * 14 * 14, 512), - BatchNorm1d(512, affine=affine)) - - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - def forward(self, x): - x = self.input_layer(x) - x = self.body(x) - x = self.output_layer(x) - return l2_norm(x) - - -def IR_50(input_size): - """Constructs a ir-50 model.""" - model = Backbone(input_size, num_layers=50, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_101(input_size): - """Constructs a ir-101 model.""" - model = 
Backbone(input_size, num_layers=100, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_152(input_size): - """Constructs a ir-152 model.""" - model = Backbone(input_size, num_layers=152, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_50(input_size): - """Constructs a ir_se-50 model.""" - model = Backbone(input_size, num_layers=50, mode='ir_se', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_101(input_size): - """Constructs a ir_se-101 model.""" - model = Backbone(input_size, num_layers=100, mode='ir_se', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_152(input_size): - """Constructs a ir_se-152 model.""" - model = Backbone(input_size, num_layers=152, mode='ir_se', drop_ratio=0.4, affine=False) - return model diff --git a/spaces/Duskfallcrew/Free-Illustration-Mix/README.md b/spaces/Duskfallcrew/Free-Illustration-Mix/README.md deleted file mode 100644 index 15b1b8d2d7d4940d57f061d52bcb277dd617cc78..0000000000000000000000000000000000000000 --- a/spaces/Duskfallcrew/Free-Illustration-Mix/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Free Illustration Mix -emoji: 👁 -colorFrom: indigo -colorTo: gray -sdk: gradio -sdk_version: 3.28.0 -app_file: app.py -pinned: false -license: creativeml-openrail-m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/EDGAhab/VITS-Aatrox-AI/models.py b/spaces/EDGAhab/VITS-Aatrox-AI/models.py deleted file mode 100644 index f5acdeb2bedd47897348407c0ae55c9a160da881..0000000000000000000000000000000000000000 --- a/spaces/EDGAhab/VITS-Aatrox-AI/models.py +++ /dev/null @@ -1,534 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. 
- self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, 
in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, 
resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - 
norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = 
torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." - g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) - diff --git a/spaces/EPFL-VILAB/MultiMAE/mask2former/maskformer_model.py b/spaces/EPFL-VILAB/MultiMAE/mask2former/maskformer_model.py deleted file mode 100644 index 88ce76d37adc678ed8c9c7df17271120c75512d3..0000000000000000000000000000000000000000 --- a/spaces/EPFL-VILAB/MultiMAE/mask2former/maskformer_model.py +++ /dev/null @@ -1,380 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-from typing import Tuple - -import torch -from torch import nn -from torch.nn import functional as F - -from detectron2.config import configurable -from detectron2.data import MetadataCatalog -from detectron2.modeling import META_ARCH_REGISTRY, build_backbone, build_sem_seg_head -from detectron2.modeling.backbone import Backbone -from detectron2.modeling.postprocessing import sem_seg_postprocess -from detectron2.structures import Boxes, ImageList, Instances, BitMasks -from detectron2.utils.memory import retry_if_cuda_oom - -from .modeling.criterion import SetCriterion -from .modeling.matcher import HungarianMatcher - - -@META_ARCH_REGISTRY.register() -class MaskFormer(nn.Module): - """ - Main class for mask classification semantic segmentation architectures. - """ - - @configurable - def __init__( - self, - *, - backbone: Backbone, - sem_seg_head: nn.Module, - criterion: nn.Module, - num_queries: int, - object_mask_threshold: float, - overlap_threshold: float, - metadata, - size_divisibility: int, - sem_seg_postprocess_before_inference: bool, - pixel_mean: Tuple[float], - pixel_std: Tuple[float], - # inference - semantic_on: bool, - panoptic_on: bool, - instance_on: bool, - test_topk_per_image: int, - ): - """ - Args: - backbone: a backbone module, must follow detectron2's backbone interface - sem_seg_head: a module that predicts semantic segmentation from backbone features - criterion: a module that defines the loss - num_queries: int, number of queries - object_mask_threshold: float, threshold to filter query based on classification score - for panoptic segmentation inference - overlap_threshold: overlap threshold used in general inference for panoptic segmentation - metadata: dataset meta, get `thing` and `stuff` category names for panoptic - segmentation inference - size_divisibility: Some backbones require the input height and width to be divisible by a - specific integer. We can use this to override such requirement. - sem_seg_postprocess_before_inference: whether to resize the prediction back - to original input size before semantic segmentation inference or after. - For high-resolution dataset like Mapillary, resizing predictions before - inference will cause OOM error. 
- pixel_mean, pixel_std: list or tuple with #channels element, representing - the per-channel mean and std to be used to normalize the input image - semantic_on: bool, whether to output semantic segmentation prediction - instance_on: bool, whether to output instance segmentation prediction - panoptic_on: bool, whether to output panoptic segmentation prediction - test_topk_per_image: int, instance segmentation parameter, keep topk instances per image - """ - super().__init__() - self.backbone = backbone - self.sem_seg_head = sem_seg_head - self.criterion = criterion - self.num_queries = num_queries - self.overlap_threshold = overlap_threshold - self.object_mask_threshold = object_mask_threshold - self.metadata = metadata - if size_divisibility < 0: - # use backbone size_divisibility if not set - size_divisibility = self.backbone.size_divisibility - self.size_divisibility = size_divisibility - self.sem_seg_postprocess_before_inference = sem_seg_postprocess_before_inference - self.register_buffer("pixel_mean", torch.Tensor(pixel_mean).view(-1, 1, 1), False) - self.register_buffer("pixel_std", torch.Tensor(pixel_std).view(-1, 1, 1), False) - - # additional args - self.semantic_on = semantic_on - self.instance_on = instance_on - self.panoptic_on = panoptic_on - self.test_topk_per_image = test_topk_per_image - - if not self.semantic_on: - assert self.sem_seg_postprocess_before_inference - - @classmethod - def from_config(cls, cfg): - backbone = build_backbone(cfg) - sem_seg_head = build_sem_seg_head(cfg, backbone.output_shape()) - - # Loss parameters: - deep_supervision = cfg.MODEL.MASK_FORMER.DEEP_SUPERVISION - no_object_weight = cfg.MODEL.MASK_FORMER.NO_OBJECT_WEIGHT - - # loss weights - class_weight = cfg.MODEL.MASK_FORMER.CLASS_WEIGHT - dice_weight = cfg.MODEL.MASK_FORMER.DICE_WEIGHT - mask_weight = cfg.MODEL.MASK_FORMER.MASK_WEIGHT - - # building criterion - matcher = HungarianMatcher( - cost_class=class_weight, - cost_mask=mask_weight, - cost_dice=dice_weight, - num_points=cfg.MODEL.MASK_FORMER.TRAIN_NUM_POINTS, - ) - - weight_dict = {"loss_ce": class_weight, "loss_mask": mask_weight, "loss_dice": dice_weight} - - if deep_supervision: - dec_layers = cfg.MODEL.MASK_FORMER.DEC_LAYERS - aux_weight_dict = {} - for i in range(dec_layers - 1): - aux_weight_dict.update({k + f"_{i}": v for k, v in weight_dict.items()}) - weight_dict.update(aux_weight_dict) - - losses = ["labels", "masks"] - - criterion = SetCriterion( - sem_seg_head.num_classes, - matcher=matcher, - weight_dict=weight_dict, - eos_coef=no_object_weight, - losses=losses, - num_points=cfg.MODEL.MASK_FORMER.TRAIN_NUM_POINTS, - oversample_ratio=cfg.MODEL.MASK_FORMER.OVERSAMPLE_RATIO, - importance_sample_ratio=cfg.MODEL.MASK_FORMER.IMPORTANCE_SAMPLE_RATIO, - ) - - return { - "backbone": backbone, - "sem_seg_head": sem_seg_head, - "criterion": criterion, - "num_queries": cfg.MODEL.MASK_FORMER.NUM_OBJECT_QUERIES, - "object_mask_threshold": cfg.MODEL.MASK_FORMER.TEST.OBJECT_MASK_THRESHOLD, - "overlap_threshold": cfg.MODEL.MASK_FORMER.TEST.OVERLAP_THRESHOLD, - "metadata": MetadataCatalog.get(cfg.DATASETS.TRAIN[0]), - "size_divisibility": cfg.MODEL.MASK_FORMER.SIZE_DIVISIBILITY, - "sem_seg_postprocess_before_inference": ( - cfg.MODEL.MASK_FORMER.TEST.SEM_SEG_POSTPROCESSING_BEFORE_INFERENCE - or cfg.MODEL.MASK_FORMER.TEST.PANOPTIC_ON - or cfg.MODEL.MASK_FORMER.TEST.INSTANCE_ON - ), - "pixel_mean": cfg.MODEL.PIXEL_MEAN, - "pixel_std": cfg.MODEL.PIXEL_STD, - # inference - "semantic_on": cfg.MODEL.MASK_FORMER.TEST.SEMANTIC_ON, - "instance_on": 
cfg.MODEL.MASK_FORMER.TEST.INSTANCE_ON, - "panoptic_on": cfg.MODEL.MASK_FORMER.TEST.PANOPTIC_ON, - "test_topk_per_image": cfg.TEST.DETECTIONS_PER_IMAGE, - } - - @property - def device(self): - return self.pixel_mean.device - - def forward(self, batched_inputs): - """ - Args: - batched_inputs: a list, batched outputs of :class:`DatasetMapper`. - Each item in the list contains the inputs for one image. - For now, each item in the list is a dict that contains: - * "image": Tensor, image in (C, H, W) format. - * "instances": per-region ground truth - * Other information that's included in the original dicts, such as: - "height", "width" (int): the output resolution of the model (may be different - from input resolution), used in inference. - Returns: - list[dict]: - each dict has the results for one image. The dict contains the following keys: - - * "sem_seg": - A Tensor that represents the - per-pixel segmentation prediced by the head. - The prediction has shape KxHxW that represents the logits of - each class for each pixel. - * "panoptic_seg": - A tuple that represent panoptic output - panoptic_seg (Tensor): of shape (height, width) where the values are ids for each segment. - segments_info (list[dict]): Describe each segment in `panoptic_seg`. - Each dict contains keys "id", "category_id", "isthing". - """ - images = [x["image"].to(self.device) for x in batched_inputs] - images = [(x - self.pixel_mean) / self.pixel_std for x in images] - images = ImageList.from_tensors(images, self.size_divisibility) - - features = self.backbone(images.tensor) - outputs = self.sem_seg_head(features) - - if self.training: - # mask classification target - if "instances" in batched_inputs[0]: - gt_instances = [x["instances"].to(self.device) for x in batched_inputs] - targets = self.prepare_targets(gt_instances, images) - else: - targets = None - - # bipartite matching-based loss - losses = self.criterion(outputs, targets) - - for k in list(losses.keys()): - if k in self.criterion.weight_dict: - losses[k] *= self.criterion.weight_dict[k] - else: - # remove this loss if not specified in `weight_dict` - losses.pop(k) - return losses - else: - mask_cls_results = outputs["pred_logits"] - mask_pred_results = outputs["pred_masks"] - # upsample masks - mask_pred_results = F.interpolate( - mask_pred_results, - size=(images.tensor.shape[-2], images.tensor.shape[-1]), - mode="bilinear", - align_corners=False, - ) - - del outputs - - processed_results = [] - for mask_cls_result, mask_pred_result, input_per_image, image_size in zip( - mask_cls_results, mask_pred_results, batched_inputs, images.image_sizes - ): - height = input_per_image.get("height", image_size[0]) - width = input_per_image.get("width", image_size[1]) - processed_results.append({}) - - if self.sem_seg_postprocess_before_inference: - mask_pred_result = retry_if_cuda_oom(sem_seg_postprocess)( - mask_pred_result, image_size, height, width - ) - mask_cls_result = mask_cls_result.to(mask_pred_result) - - # semantic segmentation inference - if self.semantic_on: - r = retry_if_cuda_oom(self.semantic_inference)(mask_cls_result, mask_pred_result) - if not self.sem_seg_postprocess_before_inference: - r = retry_if_cuda_oom(sem_seg_postprocess)(r, image_size, height, width) - processed_results[-1]["sem_seg"] = r - - # panoptic segmentation inference - if self.panoptic_on: - panoptic_r = retry_if_cuda_oom(self.panoptic_inference)(mask_cls_result, mask_pred_result) - processed_results[-1]["panoptic_seg"] = panoptic_r - - # instance segmentation inference - if 
self.instance_on: - instance_r = retry_if_cuda_oom(self.instance_inference)(mask_cls_result, mask_pred_result) - processed_results[-1]["instances"] = instance_r - - return processed_results - - def prepare_targets(self, targets, images): - h_pad, w_pad = images.tensor.shape[-2:] - new_targets = [] - for targets_per_image in targets: - # pad gt - gt_masks = targets_per_image.gt_masks - padded_masks = torch.zeros((gt_masks.shape[0], h_pad, w_pad), dtype=gt_masks.dtype, device=gt_masks.device) - padded_masks[:, : gt_masks.shape[1], : gt_masks.shape[2]] = gt_masks - new_targets.append( - { - "labels": targets_per_image.gt_classes, - "masks": padded_masks, - } - ) - return new_targets - - def semantic_inference(self, mask_cls, mask_pred): - mask_cls = F.softmax(mask_cls, dim=-1)[..., :-1] - mask_pred = mask_pred.sigmoid() - semseg = torch.einsum("qc,qhw->chw", mask_cls, mask_pred) - return semseg - - def panoptic_inference(self, mask_cls, mask_pred): - scores, labels = F.softmax(mask_cls, dim=-1).max(-1) - mask_pred = mask_pred.sigmoid() - - keep = labels.ne(self.sem_seg_head.num_classes) & (scores > self.object_mask_threshold) - cur_scores = scores[keep] - cur_classes = labels[keep] - cur_masks = mask_pred[keep] - cur_mask_cls = mask_cls[keep] - cur_mask_cls = cur_mask_cls[:, :-1] - - cur_prob_masks = cur_scores.view(-1, 1, 1) * cur_masks - - h, w = cur_masks.shape[-2:] - panoptic_seg = torch.zeros((h, w), dtype=torch.int32, device=cur_masks.device) - segments_info = [] - - current_segment_id = 0 - - if cur_masks.shape[0] == 0: - # We didn't detect any mask :( - return panoptic_seg, segments_info - else: - # take argmax - cur_mask_ids = cur_prob_masks.argmax(0) - stuff_memory_list = {} - for k in range(cur_classes.shape[0]): - pred_class = cur_classes[k].item() - isthing = pred_class in self.metadata.thing_dataset_id_to_contiguous_id.values() - mask_area = (cur_mask_ids == k).sum().item() - original_area = (cur_masks[k] >= 0.5).sum().item() - mask = (cur_mask_ids == k) & (cur_masks[k] >= 0.5) - - if mask_area > 0 and original_area > 0 and mask.sum().item() > 0: - if mask_area / original_area < self.overlap_threshold: - continue - - # merge stuff regions - if not isthing: - if int(pred_class) in stuff_memory_list.keys(): - panoptic_seg[mask] = stuff_memory_list[int(pred_class)] - continue - else: - stuff_memory_list[int(pred_class)] = current_segment_id + 1 - - current_segment_id += 1 - panoptic_seg[mask] = current_segment_id - - segments_info.append( - { - "id": current_segment_id, - "isthing": bool(isthing), - "category_id": int(pred_class), - } - ) - - return panoptic_seg, segments_info - - def instance_inference(self, mask_cls, mask_pred): - # mask_pred is already processed to have the same shape as original input - image_size = mask_pred.shape[-2:] - - # [Q, K] - scores = F.softmax(mask_cls, dim=-1)[:, :-1] - labels = torch.arange(self.sem_seg_head.num_classes, device=self.device).unsqueeze(0).repeat(self.num_queries, 1).flatten(0, 1) - # scores_per_image, topk_indices = scores.flatten(0, 1).topk(self.num_queries, sorted=False) - scores_per_image, topk_indices = scores.flatten(0, 1).topk(self.test_topk_per_image, sorted=False) - labels_per_image = labels[topk_indices] - - topk_indices = topk_indices // self.sem_seg_head.num_classes - # mask_pred = mask_pred.unsqueeze(1).repeat(1, self.sem_seg_head.num_classes, 1).flatten(0, 1) - mask_pred = mask_pred[topk_indices] - - # if this is panoptic segmentation, we only keep the "thing" classes - if self.panoptic_on: - keep = 
torch.zeros_like(scores_per_image).bool() - for i, lab in enumerate(labels_per_image): - keep[i] = lab in self.metadata.thing_dataset_id_to_contiguous_id.values() - - scores_per_image = scores_per_image[keep] - labels_per_image = labels_per_image[keep] - mask_pred = mask_pred[keep] - - result = Instances(image_size) - # mask (before sigmoid) - result.pred_masks = (mask_pred > 0).float() - result.pred_boxes = Boxes(torch.zeros(mask_pred.size(0), 4)) - # Uncomment the following to get boxes from masks (this is slow) - # result.pred_boxes = BitMasks(mask_pred > 0).get_bounding_boxes() - - # calculate average mask prob - mask_scores_per_image = (mask_pred.sigmoid().flatten(1) * result.pred_masks.flatten(1)).sum(1) / (result.pred_masks.flatten(1).sum(1) + 1e-6) - result.scores = scores_per_image * mask_scores_per_image - result.pred_classes = labels_per_image - return result diff --git a/spaces/EXPOSUREEE/Ai-Image-Enhancer/scripts/extract_subimages.py b/spaces/EXPOSUREEE/Ai-Image-Enhancer/scripts/extract_subimages.py deleted file mode 100644 index 9b969ae0d4adff403f2ad362b9afaaaee58e2cef..0000000000000000000000000000000000000000 --- a/spaces/EXPOSUREEE/Ai-Image-Enhancer/scripts/extract_subimages.py +++ /dev/null @@ -1,135 +0,0 @@ -import argparse -import cv2 -import numpy as np -import os -import sys -from basicsr.utils import scandir -from multiprocessing import Pool -from os import path as osp -from tqdm import tqdm - - -def main(args): - """A multi-thread tool to crop large images to sub-images for faster IO. - - opt (dict): Configuration dict. It contains: - n_thread (int): Thread number. - compression_level (int): CV_IMWRITE_PNG_COMPRESSION from 0 to 9. A higher value means a smaller size - and longer compression time. Use 0 for faster CPU decompression. Default: 3, same in cv2. - input_folder (str): Path to the input folder. - save_folder (str): Path to save folder. - crop_size (int): Crop size. - step (int): Step for overlapped sliding window. - thresh_size (int): Threshold size. Patches whose size is lower than thresh_size will be dropped. - - Usage: - For each folder, run this script. - Typically, there are GT folder and LQ folder to be processed for DIV2K dataset. - After process, each sub_folder should have the same number of subimages. - Remember to modify opt configurations according to your settings. - """ - - opt = {} - opt['n_thread'] = args.n_thread - opt['compression_level'] = args.compression_level - opt['input_folder'] = args.input - opt['save_folder'] = args.output - opt['crop_size'] = args.crop_size - opt['step'] = args.step - opt['thresh_size'] = args.thresh_size - extract_subimages(opt) - - -def extract_subimages(opt): - """Crop images to subimages. - - Args: - opt (dict): Configuration dict. It contains: - input_folder (str): Path to the input folder. - save_folder (str): Path to save folder. - n_thread (int): Thread number. - """ - input_folder = opt['input_folder'] - save_folder = opt['save_folder'] - if not osp.exists(save_folder): - os.makedirs(save_folder) - print(f'mkdir {save_folder} ...') - else: - print(f'Folder {save_folder} already exists. 
Exit.') - sys.exit(1) - - # scan all images - img_list = list(scandir(input_folder, full_path=True)) - - pbar = tqdm(total=len(img_list), unit='image', desc='Extract') - pool = Pool(opt['n_thread']) - for path in img_list: - pool.apply_async(worker, args=(path, opt), callback=lambda arg: pbar.update(1)) - pool.close() - pool.join() - pbar.close() - print('All processes done.') - - -def worker(path, opt): - """Worker for each process. - - Args: - path (str): Image path. - opt (dict): Configuration dict. It contains: - crop_size (int): Crop size. - step (int): Step for overlapped sliding window. - thresh_size (int): Threshold size. Patches whose size is lower than thresh_size will be dropped. - save_folder (str): Path to save folder. - compression_level (int): for cv2.IMWRITE_PNG_COMPRESSION. - - Returns: - process_info (str): Process information displayed in progress bar. - """ - crop_size = opt['crop_size'] - step = opt['step'] - thresh_size = opt['thresh_size'] - img_name, extension = osp.splitext(osp.basename(path)) - - # remove the x2, x3, x4 and x8 in the filename for DIV2K - img_name = img_name.replace('x2', '').replace('x3', '').replace('x4', '').replace('x8', '') - - img = cv2.imread(path, cv2.IMREAD_UNCHANGED) - - h, w = img.shape[0:2] - h_space = np.arange(0, h - crop_size + 1, step) - if h - (h_space[-1] + crop_size) > thresh_size: - h_space = np.append(h_space, h - crop_size) - w_space = np.arange(0, w - crop_size + 1, step) - if w - (w_space[-1] + crop_size) > thresh_size: - w_space = np.append(w_space, w - crop_size) - - index = 0 - for x in h_space: - for y in w_space: - index += 1 - cropped_img = img[x:x + crop_size, y:y + crop_size, ...] - cropped_img = np.ascontiguousarray(cropped_img) - cv2.imwrite( - osp.join(opt['save_folder'], f'{img_name}_s{index:03d}{extension}'), cropped_img, - [cv2.IMWRITE_PNG_COMPRESSION, opt['compression_level']]) - process_info = f'Processing {img_name} ...' - return process_info - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--input', type=str, default='datasets/DF2K/DF2K_HR', help='Input folder') - parser.add_argument('--output', type=str, default='datasets/DF2K/DF2K_HR_sub', help='Output folder') - parser.add_argument('--crop_size', type=int, default=480, help='Crop size') - parser.add_argument('--step', type=int, default=240, help='Step for overlapped sliding window') - parser.add_argument( - '--thresh_size', - type=int, - default=0, - help='Threshold size. Patches whose size is lower than thresh_size will be dropped.') - parser.add_argument('--n_thread', type=int, default=20, help='Thread number.') - parser.add_argument('--compression_level', type=int, default=3, help='Compression level') - args = parser.parse_args() - - main(args) diff --git a/spaces/Eddycrack864/Applio-Inference/demucs/tasnet.py b/spaces/Eddycrack864/Applio-Inference/demucs/tasnet.py deleted file mode 100644 index ecc1257925ea8f4fbe389ddd6d73ce9fdf45f6d4..0000000000000000000000000000000000000000 --- a/spaces/Eddycrack864/Applio-Inference/demucs/tasnet.py +++ /dev/null @@ -1,452 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
-# -# Created on 2018/12 -# Author: Kaituo XU -# Modified on 2019/11 by Alexandre Defossez, added support for multiple output channels -# Here is the original license: -# The MIT License (MIT) -# -# Copyright (c) 2018 Kaituo XU -# -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: -# -# The above copyright notice and this permission notice shall be included in all -# copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. - -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .utils import capture_init - -EPS = 1e-8 - - -def overlap_and_add(signal, frame_step): - outer_dimensions = signal.size()[:-2] - frames, frame_length = signal.size()[-2:] - - subframe_length = math.gcd(frame_length, frame_step) # gcd=Greatest Common Divisor - subframe_step = frame_step // subframe_length - subframes_per_frame = frame_length // subframe_length - output_size = frame_step * (frames - 1) + frame_length - output_subframes = output_size // subframe_length - - subframe_signal = signal.view(*outer_dimensions, -1, subframe_length) - - frame = torch.arange(0, output_subframes, - device=signal.device).unfold(0, subframes_per_frame, subframe_step) - frame = frame.long() # signal may in GPU or CPU - frame = frame.contiguous().view(-1) - - result = signal.new_zeros(*outer_dimensions, output_subframes, subframe_length) - result.index_add_(-2, frame, subframe_signal) - result = result.view(*outer_dimensions, -1) - return result - - -class ConvTasNet(nn.Module): - @capture_init - def __init__(self, - sources, - N=256, - L=20, - B=256, - H=512, - P=3, - X=8, - R=4, - audio_channels=2, - norm_type="gLN", - causal=False, - mask_nonlinear='relu', - samplerate=44100, - segment_length=44100 * 2 * 4): - """ - Args: - sources: list of sources - N: Number of filters in autoencoder - L: Length of the filters (in samples) - B: Number of channels in bottleneck 1 × 1-conv block - H: Number of channels in convolutional blocks - P: Kernel size in convolutional blocks - X: Number of convolutional blocks in each repeat - R: Number of repeats - norm_type: BN, gLN, cLN - causal: causal or non-causal - mask_nonlinear: use which non-linear function to generate mask - """ - super(ConvTasNet, self).__init__() - # Hyper-parameter - self.sources = sources - self.C = len(sources) - self.N, self.L, self.B, self.H, self.P, self.X, self.R = N, L, B, H, P, X, R - self.norm_type = norm_type - self.causal = causal - self.mask_nonlinear = mask_nonlinear - self.audio_channels = audio_channels - self.samplerate = samplerate - self.segment_length = segment_length - # Components - self.encoder = Encoder(L, N, audio_channels) - self.separator = 
TemporalConvNet( - N, B, H, P, X, R, self.C, norm_type, causal, mask_nonlinear) - self.decoder = Decoder(N, L, audio_channels) - # init - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_normal_(p) - - def valid_length(self, length): - return length - - def forward(self, mixture): - """ - Args: - mixture: [M, T], M is batch size, T is #samples - Returns: - est_source: [M, C, T] - """ - mixture_w = self.encoder(mixture) - est_mask = self.separator(mixture_w) - est_source = self.decoder(mixture_w, est_mask) - - # T changed after conv1d in encoder, fix it here - T_origin = mixture.size(-1) - T_conv = est_source.size(-1) - est_source = F.pad(est_source, (0, T_origin - T_conv)) - return est_source - - -class Encoder(nn.Module): - """Estimation of the nonnegative mixture weight by a 1-D conv layer. - """ - def __init__(self, L, N, audio_channels): - super(Encoder, self).__init__() - # Hyper-parameter - self.L, self.N = L, N - # Components - # 50% overlap - self.conv1d_U = nn.Conv1d(audio_channels, N, kernel_size=L, stride=L // 2, bias=False) - - def forward(self, mixture): - """ - Args: - mixture: [M, T], M is batch size, T is #samples - Returns: - mixture_w: [M, N, K], where K = (T-L)/(L/2)+1 = 2T/L-1 - """ - mixture_w = F.relu(self.conv1d_U(mixture)) # [M, N, K] - return mixture_w - - -class Decoder(nn.Module): - def __init__(self, N, L, audio_channels): - super(Decoder, self).__init__() - # Hyper-parameter - self.N, self.L = N, L - self.audio_channels = audio_channels - # Components - self.basis_signals = nn.Linear(N, audio_channels * L, bias=False) - - def forward(self, mixture_w, est_mask): - """ - Args: - mixture_w: [M, N, K] - est_mask: [M, C, N, K] - Returns: - est_source: [M, C, T] - """ - # D = W * M - source_w = torch.unsqueeze(mixture_w, 1) * est_mask # [M, C, N, K] - source_w = torch.transpose(source_w, 2, 3) # [M, C, K, N] - # S = DV - est_source = self.basis_signals(source_w) # [M, C, K, ac * L] - m, c, k, _ = est_source.size() - est_source = est_source.view(m, c, k, self.audio_channels, -1).transpose(2, 3).contiguous() - est_source = overlap_and_add(est_source, self.L // 2) # M x C x ac x T - return est_source - - -class TemporalConvNet(nn.Module): - def __init__(self, N, B, H, P, X, R, C, norm_type="gLN", causal=False, mask_nonlinear='relu'): - """ - Args: - N: Number of filters in autoencoder - B: Number of channels in bottleneck 1 × 1-conv block - H: Number of channels in convolutional blocks - P: Kernel size in convolutional blocks - X: Number of convolutional blocks in each repeat - R: Number of repeats - C: Number of speakers - norm_type: BN, gLN, cLN - causal: causal or non-causal - mask_nonlinear: use which non-linear function to generate mask - """ - super(TemporalConvNet, self).__init__() - # Hyper-parameter - self.C = C - self.mask_nonlinear = mask_nonlinear - # Components - # [M, N, K] -> [M, N, K] - layer_norm = ChannelwiseLayerNorm(N) - # [M, N, K] -> [M, B, K] - bottleneck_conv1x1 = nn.Conv1d(N, B, 1, bias=False) - # [M, B, K] -> [M, B, K] - repeats = [] - for r in range(R): - blocks = [] - for x in range(X): - dilation = 2**x - padding = (P - 1) * dilation if causal else (P - 1) * dilation // 2 - blocks += [ - TemporalBlock(B, - H, - P, - stride=1, - padding=padding, - dilation=dilation, - norm_type=norm_type, - causal=causal) - ] - repeats += [nn.Sequential(*blocks)] - temporal_conv_net = nn.Sequential(*repeats) - # [M, B, K] -> [M, C*N, K] - mask_conv1x1 = nn.Conv1d(B, C * N, 1, bias=False) - # Put together - self.network = 
nn.Sequential(layer_norm, bottleneck_conv1x1, temporal_conv_net, - mask_conv1x1) - - def forward(self, mixture_w): - """ - Keep this API same with TasNet - Args: - mixture_w: [M, N, K], M is batch size - returns: - est_mask: [M, C, N, K] - """ - M, N, K = mixture_w.size() - score = self.network(mixture_w) # [M, N, K] -> [M, C*N, K] - score = score.view(M, self.C, N, K) # [M, C*N, K] -> [M, C, N, K] - if self.mask_nonlinear == 'softmax': - est_mask = F.softmax(score, dim=1) - elif self.mask_nonlinear == 'relu': - est_mask = F.relu(score) - else: - raise ValueError("Unsupported mask non-linear function") - return est_mask - - -class TemporalBlock(nn.Module): - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride, - padding, - dilation, - norm_type="gLN", - causal=False): - super(TemporalBlock, self).__init__() - # [M, B, K] -> [M, H, K] - conv1x1 = nn.Conv1d(in_channels, out_channels, 1, bias=False) - prelu = nn.PReLU() - norm = chose_norm(norm_type, out_channels) - # [M, H, K] -> [M, B, K] - dsconv = DepthwiseSeparableConv(out_channels, in_channels, kernel_size, stride, padding, - dilation, norm_type, causal) - # Put together - self.net = nn.Sequential(conv1x1, prelu, norm, dsconv) - - def forward(self, x): - """ - Args: - x: [M, B, K] - Returns: - [M, B, K] - """ - residual = x - out = self.net(x) - # TODO: when P = 3 here works fine, but when P = 2 maybe need to pad? - return out + residual # look like w/o F.relu is better than w/ F.relu - # return F.relu(out + residual) - - -class DepthwiseSeparableConv(nn.Module): - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride, - padding, - dilation, - norm_type="gLN", - causal=False): - super(DepthwiseSeparableConv, self).__init__() - # Use `groups` option to implement depthwise convolution - # [M, H, K] -> [M, H, K] - depthwise_conv = nn.Conv1d(in_channels, - in_channels, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=in_channels, - bias=False) - if causal: - chomp = Chomp1d(padding) - prelu = nn.PReLU() - norm = chose_norm(norm_type, in_channels) - # [M, H, K] -> [M, B, K] - pointwise_conv = nn.Conv1d(in_channels, out_channels, 1, bias=False) - # Put together - if causal: - self.net = nn.Sequential(depthwise_conv, chomp, prelu, norm, pointwise_conv) - else: - self.net = nn.Sequential(depthwise_conv, prelu, norm, pointwise_conv) - - def forward(self, x): - """ - Args: - x: [M, H, K] - Returns: - result: [M, B, K] - """ - return self.net(x) - - -class Chomp1d(nn.Module): - """To ensure the output length is the same as the input. - """ - def __init__(self, chomp_size): - super(Chomp1d, self).__init__() - self.chomp_size = chomp_size - - def forward(self, x): - """ - Args: - x: [M, H, Kpad] - Returns: - [M, H, K] - """ - return x[:, :, :-self.chomp_size].contiguous() - - -def chose_norm(norm_type, channel_size): - """The input of normlization will be (M, C, K), where M is batch size, - C is channel size and K is sequence length. - """ - if norm_type == "gLN": - return GlobalLayerNorm(channel_size) - elif norm_type == "cLN": - return ChannelwiseLayerNorm(channel_size) - elif norm_type == "id": - return nn.Identity() - else: # norm_type == "BN": - # Given input (M, C, K), nn.BatchNorm1d(C) will accumulate statics - # along M and K, so this BN usage is right. 
- return nn.BatchNorm1d(channel_size) - - -# TODO: Use nn.LayerNorm to impl cLN to speed up -class ChannelwiseLayerNorm(nn.Module): - """Channel-wise Layer Normalization (cLN)""" - def __init__(self, channel_size): - super(ChannelwiseLayerNorm, self).__init__() - self.gamma = nn.Parameter(torch.Tensor(1, channel_size, 1)) # [1, N, 1] - self.beta = nn.Parameter(torch.Tensor(1, channel_size, 1)) # [1, N, 1] - self.reset_parameters() - - def reset_parameters(self): - self.gamma.data.fill_(1) - self.beta.data.zero_() - - def forward(self, y): - """ - Args: - y: [M, N, K], M is batch size, N is channel size, K is length - Returns: - cLN_y: [M, N, K] - """ - mean = torch.mean(y, dim=1, keepdim=True) # [M, 1, K] - var = torch.var(y, dim=1, keepdim=True, unbiased=False) # [M, 1, K] - cLN_y = self.gamma * (y - mean) / torch.pow(var + EPS, 0.5) + self.beta - return cLN_y - - -class GlobalLayerNorm(nn.Module): - """Global Layer Normalization (gLN)""" - def __init__(self, channel_size): - super(GlobalLayerNorm, self).__init__() - self.gamma = nn.Parameter(torch.Tensor(1, channel_size, 1)) # [1, N, 1] - self.beta = nn.Parameter(torch.Tensor(1, channel_size, 1)) # [1, N, 1] - self.reset_parameters() - - def reset_parameters(self): - self.gamma.data.fill_(1) - self.beta.data.zero_() - - def forward(self, y): - """ - Args: - y: [M, N, K], M is batch size, N is channel size, K is length - Returns: - gLN_y: [M, N, K] - """ - # TODO: in torch 1.0, torch.mean() support dim list - mean = y.mean(dim=1, keepdim=True).mean(dim=2, keepdim=True) # [M, 1, 1] - var = (torch.pow(y - mean, 2)).mean(dim=1, keepdim=True).mean(dim=2, keepdim=True) - gLN_y = self.gamma * (y - mean) / torch.pow(var + EPS, 0.5) + self.beta - return gLN_y - - -if __name__ == "__main__": - torch.manual_seed(123) - M, N, L, T = 2, 3, 4, 12 - K = 2 * T // L - 1 - B, H, P, X, R, C, norm_type, causal = 2, 3, 3, 3, 2, 2, "gLN", False - mixture = torch.randint(3, (M, T)) - # test Encoder - encoder = Encoder(L, N) - encoder.conv1d_U.weight.data = torch.randint(2, encoder.conv1d_U.weight.size()) - mixture_w = encoder(mixture) - print('mixture', mixture) - print('U', encoder.conv1d_U.weight) - print('mixture_w', mixture_w) - print('mixture_w size', mixture_w.size()) - - # test TemporalConvNet - separator = TemporalConvNet(N, B, H, P, X, R, C, norm_type=norm_type, causal=causal) - est_mask = separator(mixture_w) - print('est_mask', est_mask) - - # test Decoder - decoder = Decoder(N, L) - est_mask = torch.randint(2, (B, K, C, N)) - est_source = decoder(mixture_w, est_mask) - print('est_source', est_source) - - # test Conv-TasNet - conv_tasnet = ConvTasNet(N, L, B, H, P, X, R, C, norm_type=norm_type) - est_source = conv_tasnet(mixture) - print('est_source', est_source) - print('est_source size', est_source.size()) diff --git a/spaces/Eddycrack864/Applio-Inference/infer/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py b/spaces/Eddycrack864/Applio-Inference/infer/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py deleted file mode 100644 index 55abcfdb87636a9ee85b8df5cdc1bec64098b5da..0000000000000000000000000000000000000000 --- a/spaces/Eddycrack864/Applio-Inference/infer/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py +++ /dev/null @@ -1,91 +0,0 @@ -import numpy as np -import pyworld - -from infer.lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor - - -class DioF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - 
self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self, f0): - """ - 对F0进行插值处理 - """ - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # 这里可能存在一个没有必要的拷贝 - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def resize_f0(self, x, target_len): - source = np.array(x) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * target_len, len(source)) / target_len, - np.arange(0, len(source)), - source, - ) - res = np.nan_to_num(target) - return res - - def compute_f0(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.dio( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return self.interpolate_f0(self.resize_f0(f0, p_len))[0] - - def compute_f0_uv(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.dio( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return self.interpolate_f0(self.resize_f0(f0, p_len)) diff --git a/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_pipelines/drrg_pipeline.py b/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_pipelines/drrg_pipeline.py deleted file mode 100644 index 09189b51cda03d4557d58f5193366caeaf71bcc9..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/mmocr-demo/configs/_base_/det_pipelines/drrg_pipeline.py +++ /dev/null @@ -1,60 +0,0 @@ -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) - -train_pipeline = [ - dict(type='LoadImageFromFile', color_type='color_ignore_orientation'), - dict( - type='LoadTextAnnotations', - with_bbox=True, - with_mask=True, - poly2mask=False), - dict(type='ColorJitter', brightness=32.0 / 255, saturation=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='RandomScaling', size=800, scale=(0.75, 2.5)), - dict( - type='RandomCropFlip', crop_ratio=0.5, iter_num=1, min_area_ratio=0.2), - dict( - type='RandomCropPolyInstances', - instance_key='gt_masks', - crop_ratio=0.8, - min_side_ratio=0.3), - dict( - type='RandomRotatePolyInstances', - rotate_ratio=0.5, - max_angle=60, - pad_with_fixed_color=False), - dict(type='SquareResizePad', target_size=800, pad_ratio=0.6), - dict(type='RandomFlip', flip_ratio=0.5, direction='horizontal'), - dict(type='DRRGTargets'), - dict(type='Pad', size_divisor=32), - dict( - type='CustomFormatBundle', - keys=[ - 
'gt_text_mask', 'gt_center_region_mask', 'gt_mask', - 'gt_top_height_map', 'gt_bot_height_map', 'gt_sin_map', - 'gt_cos_map', 'gt_comp_attribs' - ], - visualize=dict(flag=False, boundary_key='gt_text_mask')), - dict( - type='Collect', - keys=[ - 'img', 'gt_text_mask', 'gt_center_region_mask', 'gt_mask', - 'gt_top_height_map', 'gt_bot_height_map', 'gt_sin_map', - 'gt_cos_map', 'gt_comp_attribs' - ]) -] - -test_pipeline = [ - dict(type='LoadImageFromFile', color_type='color_ignore_orientation'), - dict( - type='MultiScaleFlipAug', - img_scale=(1024, 640), # used by Resize - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] diff --git a/spaces/Feraxin/chatGPT/app.py b/spaces/Feraxin/chatGPT/app.py deleted file mode 100644 index 65fd6e87079f9fdf0d45398b70cd8ad70a74a867..0000000000000000000000000000000000000000 --- a/spaces/Feraxin/chatGPT/app.py +++ /dev/null @@ -1,520 +0,0 @@ -import os, sys, json -os.system("pip install gradio==3.19.1") -import openai -import gradio as gr - -from loguru import logger -import paddlehub as hub -import random -from encoder import get_encoder - -openai.api_key = os.getenv("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") - -from utils import get_tmt_client, getTextTrans_tmt -tmt_client = get_tmt_client() - -def getTextTrans(text, source='zh', target='en'): - def is_chinese(string): - for ch in string: - if u'\u4e00' <= ch <= u'\u9fff': - return True - return False - - if not is_chinese(text) and target == 'en': - return text - - try: - text_translation = getTextTrans_tmt(tmt_client, text, source, target) - return text_translation - except Exception as e: - return text - -start_work = """async() => { - function isMobile() { - try { - document.createEvent("TouchEvent"); return true; - } catch(e) { - return false; - } - } - function getClientHeight() - { - var clientHeight=0; - if(document.body.clientHeight&&document.documentElement.clientHeight) { - var clientHeight = (document.body.clientHeightdocument.documentElement.clientHeight)?document.body.clientHeight:document.documentElement.clientHeight; - } - return clientHeight; - } - - function setNativeValue(element, value) { - const valueSetter = Object.getOwnPropertyDescriptor(element.__proto__, 'value').set; - const prototype = Object.getPrototypeOf(element); - const prototypeValueSetter = Object.getOwnPropertyDescriptor(prototype, 'value').set; - - if (valueSetter && valueSetter !== prototypeValueSetter) { - prototypeValueSetter.call(element, value); - } else { - valueSetter.call(element, value); - } - element.dispatchEvent(new Event('input', { bubbles: true })); - } - function get_clear_innerHTML(innerHTML) { - innerHTML = innerHTML.replace(/
      <p>
      |<\\/p>|\\n/g, ''); - regexp = /\\★☆(.*?)\\☆★/; - match = innerHTML.match(regexp); - if (match) { - innerHTML = match[1]; - } - return innerHTML; - } - function save_conversation(chatbot) { - var conversations = new Array(); - var conversations_clear = new Array(); - for (var i = 0; i < chatbot.children.length; i++) { - testid_icon = '☟:'; //'user' - if (chatbot.children[i].dataset['testid'] == 'bot') { - testid_icon = '☝:'; //'bot' - } - innerHTML = chatbot.children[i].innerHTML; - conversations.push(testid_icon + innerHTML); - if (innerHTML.indexOf(" 100) { - this_width = 20; - } - img.style.width = this_width + "%"; - img.style.height = img.offsetWidth + 'px'; - } - function load_conversation(chatbot) { - var json_str = localStorage.getItem('chatgpt_conversations'); - if (json_str) { - var conversations_clear = new Array(); - conversations = JSON.parse(json_str); - for (var i = 0; i < conversations.length; i++) { - innerHTML = conversations[i]; - if (innerHTML.indexOf("☝:") == -1) { - className = "message user svelte-134zwfa"; - bgcolor = "#16a34a"; - testid = "user"; - testid_icon = '☟:'; //'user' - } else { - className = "message bot svelte-134zwfa"; - bgcolor = "#2563eb"; - testid = "bot"; - testid_icon = '☝:'; //'bot' - } - var new_div = document.createElement("div"); - new_div.className = className; - new_div.style.backgroundColor = bgcolor; - new_div.dataset.testid = testid; - if (innerHTML.indexOf("data:image/jpeg") >= 0) { - new_div.style.width = "20%"; - new_div.style.padding = "0.2rem"; - new_div.onclick = function(e) { - img_click(this); - } - setTimeout(function(){ - new_div.style.height = new_div.offsetWidth + 'px'; - new_div.children[0].setAttribute('style', 'max-width: none; width:100%'); - }, 10); - } - innerHTML = innerHTML.replace("☝:", ""); - innerHTML = innerHTML.replace("☟:", ""); - new_div.innerHTML = innerHTML; - if (innerHTML.indexOf("null_") != -1) { - new_div.style.display = 'none'; - } - chatbot.appendChild(new_div); - - if (innerHTML.indexOf(" gradio-app').shadowRoot; - if (!gradioEl) { - gradioEl = document.querySelector('body > gradio-app'); - } - - if (typeof window['gradioEl'] === 'undefined') { - window['gradioEl'] = gradioEl; - - const page1 = window['gradioEl'].querySelectorAll('#page_1')[0]; - const page2 = window['gradioEl'].querySelectorAll('#page_2')[0]; - - page1.style.display = "none"; - page2.style.display = "block"; - window['div_count'] = 0; - window['chat_radio_0'] = window['gradioEl'].querySelectorAll('#chat_radio')[0].querySelectorAll('input[name=radio-chat_radio]')[0]; - window['chat_radio_1'] = window['gradioEl'].querySelectorAll('#chat_radio')[0].querySelectorAll('input[name=radio-chat_radio]')[1]; - window['chat_bot'] = window['gradioEl'].querySelectorAll('#chat_bot')[0]; - window['chat_bot1'] = window['gradioEl'].querySelectorAll('#chat_bot1')[0]; - window['my_prompt'] = window['gradioEl'].querySelectorAll('#my_prompt')[0].querySelectorAll('textarea')[0]; - window['my_prompt_en'] = window['gradioEl'].querySelectorAll('#my_prompt_en')[0].querySelectorAll('textarea')[0]; - window['chat_his'] = window['gradioEl'].querySelectorAll('#chat_history')[0].querySelectorAll('textarea')[0]; - chat_row = window['gradioEl'].querySelectorAll('#chat_row')[0]; - prompt_row = window['gradioEl'].querySelectorAll('#prompt_row')[0]; - window['chat_bot1'].children[1].children[0].textContent = ''; - - clientHeight = getClientHeight(); - if (isMobile()) { - output_htmls = window['gradioEl'].querySelectorAll('.output-html'); - for (var i = 0; i < 
output_htmls.length; i++) { - output_htmls[i].style.display = "none"; - } - new_height = (clientHeight - 250) + 'px'; - } else { - new_height = (clientHeight - 350) + 'px'; - } - chat_row.style.height = new_height; - window['chat_bot'].style.height = new_height; - window['chat_bot'].children[1].style.height = new_height; - window['chat_bot1'].style.height = new_height; - window['chat_bot1'].children[1].style.height = new_height; - window['chat_bot1'].children[0].style.top = (parseInt(window['chat_bot1'].style.height)-window['chat_bot1'].children[0].offsetHeight-2) + 'px'; - - prompt_row.children[0].style.flex = 'auto'; - prompt_row.children[0].style.width = '100%'; - window['gradioEl'].querySelectorAll('#chat_radio')[0].style.flex = 'auto'; - window['gradioEl'].querySelectorAll('#chat_radio')[0].style.width = '100%'; - prompt_row.children[0].setAttribute('style','flex-direction: inherit; flex: 1 1 auto; width: 100%;border-color: green;border-width: 1px !important;') - window['chat_bot1'].children[1].setAttribute('style', 'border-bottom-right-radius:0;top:unset;bottom:0;padding-left:0.1rem'); - window['gradioEl'].querySelectorAll('#btns_row')[0].children[0].setAttribute('style', 'min-width: min(10px, 100%); flex-grow: 1'); - window['gradioEl'].querySelectorAll('#btns_row')[0].children[1].setAttribute('style', 'min-width: min(10px, 100%); flex-grow: 1'); - - load_conversation(window['chat_bot1'].children[1].children[0]); - window['chat_bot1'].children[1].scrollTop = window['chat_bot1'].children[1].scrollHeight; - - window['gradioEl'].querySelectorAll('#clear-btn')[0].onclick = function(e){ - if (confirm('Clear all outputs?')==true) { - for (var i = window['chat_bot'].children[1].children[0].children.length-1; i >= 0; i--) { - window['chat_bot'].children[1].children[0].removeChild(window['chat_bot'].children[1].children[0].children[i]); - } - for (var i = window['chat_bot1'].children[1].children[0].children.length-1; i >= 0; i--) { - window['chat_bot1'].children[1].children[0].removeChild(window['chat_bot1'].children[1].children[0].children[i]); - } - window['div_count'] = 0; - save_conversation(window['chat_bot1'].children[1].children[0]); - } - } - - function set_buttons(action) { - window['submit-btn'].disabled = action; - window['clear-btn'].disabled = action; - window['chat_radio_0'].disabled = action; - window['chat_radio_1'].disabled = action; - btn_color = 'color:#000'; - if (action) { - btn_color = 'color:#ccc'; - } - window['submit-btn'].setAttribute('style', btn_color); - window['clear-btn'].setAttribute('style', btn_color); - window['chat_radio_0'].setAttribute('style', btn_color); - window['chat_radio_1'].setAttribute('style', btn_color); - } - window['prevPrompt'] = ''; - window['doCheckPrompt'] = 0; - window['prevImgSrc'] = ''; - window['checkChange'] = function checkChange() { - try { - if (window['chat_radio_0'].checked) { - dot_flashing = window['chat_bot'].children[1].children[0].querySelectorAll('.dot-flashing'); - - if (window['chat_bot'].children[1].children[0].children.length > window['div_count'] && dot_flashing.length == 0) { - new_len = window['chat_bot'].children[1].children[0].children.length - window['div_count']; - for (var i = 0; i < new_len; i++) { - new_div = window['chat_bot'].children[1].children[0].children[window['div_count'] + i].cloneNode(true); - window['chat_bot1'].children[1].children[0].appendChild(new_div); - } - window['div_count'] = window['chat_bot'].children[1].children[0].children.length; - window['chat_bot1'].children[1].scrollTop = 
window['chat_bot1'].children[1].scrollHeight; - save_conversation(window['chat_bot1'].children[1].children[0]); - } - if (window['chat_bot'].children[0].children.length > 1) { - set_buttons(true); - window['chat_bot1'].children[0].textContent = window['chat_bot'].children[0].children[1].textContent; - } else { - set_buttons(false); - window['chat_bot1'].children[0].textContent = ''; - } - } else { - img_index = 0; - draw_prompt_en = window['my_prompt_en'].value; - if (window['doCheckPrompt'] == 0 && window['prevPrompt'] != draw_prompt_en) { - - console.log('_____draw_prompt_en___[' + draw_prompt_en + ']_'); - window['doCheckPrompt'] = 1; - window['prevPrompt'] = draw_prompt_en; - - tabitems = window['gradioEl'].querySelectorAll('.tabitem'); - for (var i = 0; i < tabitems.length; i++) { - inputText = tabitems[i].children[0].children[1].children[0].querySelectorAll('input')[0]; - setNativeValue(inputText, draw_prompt_en); - } - setTimeout(function() { - window['draw_prompt'] = window['my_prompt'].value; - btns = window['gradioEl'].querySelectorAll('button'); - for (var i = 0; i < btns.length; i++) { - if (['Generate image','Run'].includes(btns[i].innerText)) { - btns[i].click(); - } - } - window['doCheckPrompt'] = 0; - }, 10); - } - tabitems = window['gradioEl'].querySelectorAll('.tabitem'); - imgs = tabitems[img_index].children[0].children[1].children[1].querySelectorAll("img"); - if (imgs.length > 0) { - if (window['prevImgSrc'] !== imgs[0].src) { - var user_div = document.createElement("div"); - user_div.className = "message user svelte-134zwfa"; - user_div.style.backgroundColor = "#16a34a"; - user_div.dataset.testid = 'user'; - user_div.innerHTML = "

      <p>作画: " + window['draw_prompt'] + "</p>
      "; - window['chat_bot1'].children[1].children[0].appendChild(user_div); - - var bot_div = document.createElement("div"); - bot_div.className = "message bot svelte-134zwfa"; - bot_div.style.backgroundColor = "#2563eb"; - bot_div.style.width = "20%"; - bot_div.dataset.testid = 'bot'; - bot_div.onclick = function(e){ - img_click(this); - } - setTimeout(function(){ - bot_div.style.height = bot_div.offsetWidth + 'px'; - bot_div.children[0].setAttribute('style', 'max-width:none; width:100%'); - }, 10); - bot_div.style.padding = "0.2rem"; - bot_div.appendChild(imgs[0].cloneNode(true)); - window['chat_bot1'].children[1].children[0].appendChild(bot_div); - - window['chat_bot1'].children[1].scrollTop = window['chat_bot1'].children[1].scrollHeight; - window['prevImgSrc'] = imgs[0].src; - save_conversation(window['chat_bot1'].children[1].children[0]); - } - } - if (tabitems[img_index].children[0].children[1].children[1].children[0].children.length > 1) { - tips = tabitems[img_index].children[0].children[1].children[1].children[0].textContent; - if (tips.indexOf("Error") == -1) { - set_buttons(true); - } else { - set_buttons(false); - } - window['chat_bot1'].children[0].textContent = '作画中 ' + tips; - } else { - set_buttons(false); - window['chat_bot1'].children[0].textContent = ''; - } - } - - } catch(e) { - } - } - window['checkChange_interval'] = window.setInterval("window.checkChange()", 500); - } - - return false; -}""" - -space_ids = { - "spaces/stabilityai/stable-diffusion":"Stable Diffusion 2.1", - # "spaces/runwayml/stable-diffusion-v1-5":"Stable Diffusion 1.5", - # "spaces/stabilityai/stable-diffusion-1":"Stable Diffusion 1.0", - } - -tab_actions = [] -tab_titles = [] - -for space_id in space_ids.keys(): - print(space_id, space_ids[space_id]) - try: - tab = gr.Interface.load(space_id) - tab_actions.append(tab) - tab_titles.append(space_ids[space_id]) - except Exception as e: - logger.info(f"load_fail__{space_id}_{e}") - -token_encoder = get_encoder() -total_tokens = 4096 -max_output_tokens = 1024 -max_input_tokens = total_tokens - max_output_tokens - -def set_openai_api_key(api_key): - if api_key and api_key.startswith("sk-") and len(api_key) > 50: - openai.api_key = api_key - -def get_response_from_openai(input, chat_history, model_radio): - error_1 = 'You exceeded your current quota, please check your plan and billing details.' - def openai_create(input_list, model_radio): - try: - # print(f'input_list={input_list}') - input_list_len = len(input_list) - out_prompt = '' - messages = [] - if model_radio == 'GPT-3.0': - out_prompt = 'AI:' - for i in range(input_list_len): - input = input_list[input_list_len-i-1].replace("
      ", '\n\n') - if input.startswith("Openai said:"): - input = "☝:" - - if input.startswith("☝:"): - if model_radio == 'GPT-3.0': - out_prompt = input.replace("☝:", "AI:") + '\n' + out_prompt - else: - out_prompt = input.replace("☝:", "") + out_prompt - messages.insert(0, {"role": "assistant", "content": input.replace("☝:", "")}) - elif input.startswith("☟:"): - if model_radio == 'GPT-3.0': - out_prompt = input.replace("☟:", "Human:") + '\n' + out_prompt - else: - out_prompt = input.replace("☟:", "") + out_prompt - messages.insert(0, {"role": "user", "content": input.replace("☟:", "")}) - tokens = token_encoder.encode(out_prompt) - if len(tokens) > max_input_tokens: - break - - if model_radio == 'GPT-3.0': - # print(out_prompt) - response = openai.Completion.create( - model="text-davinci-003", - prompt=out_prompt, - temperature=0.7, - max_tokens=max_output_tokens, - top_p=1, - frequency_penalty=0, - presence_penalty=0, - stop=[" Human:", " AI:"] - ) - # print(f'response_3.0__:{response}') - ret = response.choices[0].text - else: - # print(messages) - response = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=messages, - temperature=0.7, - max_tokens=max_output_tokens, - top_p=1, - frequency_penalty=0, - presence_penalty=0, - stop=[" Human:", " AI:"] - ) - # print(f'response_3.5__:{response}') - ret = response.choices[0].message['content'] - if ret.startswith("\n\n"): - ret = ret.replace("\n\n", '') - ret = ret.replace('\n', '
      ') - if ret == '': - ret = f"Openai said: I'm too tired." - return ret, response.usage - except Exception as e: - logger.info(f"openai_create_error__{e}") - ret = f"Openai said: {e} Perhaps enter your OpenAI API key." - return ret, {"completion_tokens": -1, "prompt_tokens": -1, "total_tokens": -1} - - print(f'chat_history = {chat_history}') - chat_history_list = [] - chat_history = chat_history.replace("
      <p>", "").replace("</p>
      ", "") - if chat_history != '': - chat_history_list = json.loads(chat_history) - chat_history_list.append(f'☟:{input}') - - output, response_usage = openai_create(chat_history_list, model_radio) - print(f'response_usage={response_usage}') - return output - -def chat(input0, input1, chat_radio, model_radio, all_chat_history, chat_history): - all_chat = [] - if all_chat_history != '': - all_chat = json.loads(all_chat_history) - - if len(input0) == 0: - return all_chat, json.dumps(all_chat), input0, input1 - - if chat_radio == "Talk to chatGPT": - response = get_response_from_openai(input0, chat_history, model_radio) - all_chat.append((input0, response)) - return all_chat, json.dumps(all_chat), '', input1 - else: - prompt_en = getTextTrans(input0, source='zh', target='en') + f',{random.randint(0,sys.maxsize)}' - return all_chat, json.dumps(all_chat), input0, prompt_en - -def chat_radio_change(chat_radio): - if chat_radio == "Talk to chatGPT": - return gr.Radio.update(visible=True), gr.Text.update(visible=True) - else: - return gr.Radio.update(visible=False), gr.Text.update(visible=False) - -with gr.Blocks(title='Talk to chatGPT') as demo: - with gr.Row(elem_id="page_0", visible=False) as page_0: - gr.HTML("

      You can duplicate this space and use your own session token: Duplicate Space

      ") - with gr.Group(elem_id="page_1", visible=True) as page_1: - with gr.Box(): - with gr.Row(): - start_button = gr.Button("Let's talk to chatGPT!", elem_id="start-btn", visible=True) - start_button.click(fn=None, inputs=[], outputs=[], _js=start_work) - - with gr.Row(elem_id="page_2", visible=False) as page_2: - with gr.Row(elem_id="chat_row"): - chatbot = gr.Chatbot(elem_id="chat_bot", visible=False).style(color_map=("green", "blue")) - chatbot1 = gr.Chatbot(elem_id="chat_bot1").style(color_map=("green", "blue")) - with gr.Row(elem_id="prompt_row"): - prompt_input0 = gr.Textbox(lines=2, label="input", elem_id="my_prompt", show_label=True) - prompt_input1 = gr.Textbox(lines=4, label="prompt", elem_id="my_prompt_en", visible=False) - chat_history = gr.Textbox(lines=4, label="chat_history", elem_id="chat_history", visible=False) - all_chat_history = gr.Textbox(lines=4, label="会话上下文:", elem_id="all_chat_history", visible=False) - - chat_radio = gr.Radio(["Talk to chatGPT", "Text to Image"], elem_id="chat_radio",value="Talk to chatGPT", show_label=False, visible=True) - model_radio = gr.Radio(["GPT-3.0", "GPT-3.5"], elem_id="model_radio", value="GPT-3.5", - label='GPT model: ', show_label=True,interactive=True, visible=True) - openai_api_key_textbox = gr.Textbox(placeholder="Paste your OpenAI API key (sk-...) and hit Enter", - show_label=False, lines=1, type='password') - with gr.Row(elem_id="btns_row"): - with gr.Column(id="submit_col"): - submit_btn = gr.Button(value = "submit",elem_id="submit-btn").style( - margin=True, - rounded=(True, True, True, True), - width=100 - ) - with gr.Column(id="clear_col"): - clear_btn = gr.Button(value = "clear outputs", elem_id="clear-btn").style( - margin=True, - rounded=(True, True, True, True), - width=100 - ) - submit_btn.click(fn=chat, - inputs=[prompt_input0, prompt_input1, chat_radio, model_radio, all_chat_history, chat_history], - outputs=[chatbot, all_chat_history, prompt_input0, prompt_input1], - ) - with gr.Row(elem_id='tab_img', visible=False).style(height=5): - tab_img = gr.TabbedInterface(tab_actions, tab_titles) - - openai_api_key_textbox.change(set_openai_api_key, - inputs=[openai_api_key_textbox], - outputs=[]) - openai_api_key_textbox.submit(set_openai_api_key, - inputs=[openai_api_key_textbox], - outputs=[]) - chat_radio.change(fn=chat_radio_change, - inputs=[chat_radio], - outputs=[model_radio, openai_api_key_textbox], - ) - -demo.launch(debug = True) diff --git a/spaces/Flux9665/IMS-Toucan/Layers/Convolution.py b/spaces/Flux9665/IMS-Toucan/Layers/Convolution.py deleted file mode 100644 index e6e56e85d5908b0db5fceaea1e701d197a824d4b..0000000000000000000000000000000000000000 --- a/spaces/Flux9665/IMS-Toucan/Layers/Convolution.py +++ /dev/null @@ -1,55 +0,0 @@ -# Copyright 2020 Johns Hopkins University (Shinji Watanabe) -# Northwestern Polytechnical University (Pengcheng Guo) -# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) -# Adapted by Florian Lux 2021 - - -from torch import nn - - -class ConvolutionModule(nn.Module): - """ - ConvolutionModule in Conformer model. - - Args: - channels (int): The number of channels of conv layers. - kernel_size (int): Kernel size of conv layers. 
- - """ - - def __init__(self, channels, kernel_size, activation=nn.ReLU(), bias=True): - super(ConvolutionModule, self).__init__() - # kernel_size should be an odd number for 'SAME' padding - assert (kernel_size - 1) % 2 == 0 - - self.pointwise_conv1 = nn.Conv1d(channels, 2 * channels, kernel_size=1, stride=1, padding=0, bias=bias, ) - self.depthwise_conv = nn.Conv1d(channels, channels, kernel_size, stride=1, padding=(kernel_size - 1) // 2, groups=channels, bias=bias, ) - self.norm = nn.GroupNorm(num_groups=32, num_channels=channels) - self.pointwise_conv2 = nn.Conv1d(channels, channels, kernel_size=1, stride=1, padding=0, bias=bias, ) - self.activation = activation - - def forward(self, x): - """ - Compute convolution module. - - Args: - x (torch.Tensor): Input tensor (#batch, time, channels). - - Returns: - torch.Tensor: Output tensor (#batch, time, channels). - - """ - # exchange the temporal dimension and the feature dimension - x = x.transpose(1, 2) - - # GLU mechanism - x = self.pointwise_conv1(x) # (batch, 2*channel, dim) - x = nn.functional.glu(x, dim=1) # (batch, channel, dim) - - # 1D Depthwise Conv - x = self.depthwise_conv(x) - x = self.activation(self.norm(x)) - - x = self.pointwise_conv2(x) - - return x.transpose(1, 2) diff --git a/spaces/FrankZxShen/vits-fast-fineturning-models-ba/commons.py b/spaces/FrankZxShen/vits-fast-fineturning-models-ba/commons.py deleted file mode 100644 index db17cf0914ba6e445fe613e3ec3411b3a74b28aa..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/vits-fast-fineturning-models-ba/commons.py +++ /dev/null @@ -1,164 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. 
* logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - try: - ret[i] = x[i, :, idx_str:idx_end] - except RuntimeError: - print("?") - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): 
- if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/Giuvyz/rvc-genshin/infer_pack/modules.py b/spaces/Giuvyz/rvc-genshin/infer_pack/modules.py deleted file mode 100644 index 960481cedad9a6106f2bf0b9e86e82b120f7b33f..0000000000000000000000000000000000000000 --- a/spaces/Giuvyz/rvc-genshin/infer_pack/modules.py +++ /dev/null @@ -1,522 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from infer_pack.transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - 
n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = 
torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/relax/utils.py b/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/relax/utils.py deleted file mode 100644 index 4bd4acad4e5e7f071fb0ad98e7711b4856843464..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/protGPT2_gradioFold/alphafold/alphafold/relax/utils.py +++ /dev/null @@ -1,80 +0,0 @@ -# Copyright 2021 DeepMind Technologies Limited -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Utils for minimization.""" -import io -from alphafold.common import residue_constants -from Bio import PDB -import numpy as np -from simtk.openmm import app as openmm_app -from simtk.openmm.app.internal.pdbstructure import PdbStructure - - -def overwrite_pdb_coordinates(pdb_str: str, pos) -> str: - pdb_file = io.StringIO(pdb_str) - structure = PdbStructure(pdb_file) - topology = openmm_app.PDBFile(structure).getTopology() - with io.StringIO() as f: - openmm_app.PDBFile.writeFile(topology, pos, f) - return f.getvalue() - - -def overwrite_b_factors(pdb_str: str, bfactors: np.ndarray) -> str: - """Overwrites the B-factors in pdb_str with contents of bfactors array. - - Args: - pdb_str: An input PDB string. - bfactors: A numpy array with shape [1, n_residues, 37]. We assume that the - B-factors are per residue; i.e. that the nonzero entries are identical in - [0, i, :]. - - Returns: - A new PDB string with the B-factors replaced. - """ - if bfactors.shape[-1] != residue_constants.atom_type_num: - raise ValueError( - f'Invalid final dimension size for bfactors: {bfactors.shape[-1]}.') - - parser = PDB.PDBParser(QUIET=True) - handle = io.StringIO(pdb_str) - structure = parser.get_structure('', handle) - - curr_resid = ('', '', '') - idx = -1 - for atom in structure.get_atoms(): - atom_resid = atom.parent.get_id() - if atom_resid != curr_resid: - idx += 1 - if idx >= bfactors.shape[0]: - raise ValueError('Index into bfactors exceeds number of residues. 
' - 'B-factors shape: {shape}, idx: {idx}.') - curr_resid = atom_resid - atom.bfactor = bfactors[idx, residue_constants.atom_order['CA']] - - new_pdb = io.StringIO() - pdb_io = PDB.PDBIO() - pdb_io.set_structure(structure) - pdb_io.save(new_pdb) - return new_pdb.getvalue() - - -def assert_equal_nonterminal_atom_types( - atom_mask: np.ndarray, ref_atom_mask: np.ndarray): - """Checks that pre- and post-minimized proteins have same atom set.""" - # Ignore any terminal OXT atoms which may have been added by minimization. - oxt = residue_constants.atom_order['OXT'] - no_oxt_mask = np.ones(shape=atom_mask.shape, dtype=np.bool) - no_oxt_mask[..., oxt] = False - np.testing.assert_almost_equal(ref_atom_mask[no_oxt_mask], - atom_mask[no_oxt_mask]) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/paa/paa_r50_fpn_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/paa/paa_r50_fpn_1x_coco.py deleted file mode 100644 index cd844108216c16801c0875723d589c5b11fb7b8d..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/paa/paa_r50_fpn_1x_coco.py +++ /dev/null @@ -1,70 +0,0 @@ -_base_ = [ - '../_base_/datasets/coco_detection.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] -model = dict( - type='PAA', - pretrained='torchvision://resnet50', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - start_level=1, - add_extra_convs='on_output', - num_outs=5), - bbox_head=dict( - type='PAAHead', - reg_decoded_bbox=True, - score_voting=True, - topk=9, - num_classes=80, - in_channels=256, - stacked_convs=4, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - ratios=[1.0], - octave_base_scale=8, - scales_per_octave=1, - strides=[8, 16, 32, 64, 128]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[0.1, 0.1, 0.2, 0.2]), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='GIoULoss', loss_weight=1.3), - loss_centerness=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=0.5)), - # training and testing settings - train_cfg=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.1, - neg_iou_thr=0.1, - min_pos_iou=0, - ignore_iof_thr=-1), - allowed_border=-1, - pos_weight=-1, - debug=False), - test_cfg=dict( - nms_pre=1000, - min_bbox_size=0, - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.6), - max_per_img=100)) -# optimizer -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/nonlocal_net/nonlocal_r50-d8_769x769_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/nonlocal_net/nonlocal_r50-d8_769x769_40k_cityscapes.py deleted file mode 100644 index 75adef324877d56c157b457eecbf8446aa6b192f..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/nonlocal_net/nonlocal_r50-d8_769x769_40k_cityscapes.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = [ - '../_base_/models/nonlocal_r50-d8.py', - '../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py', - '../_base_/schedules/schedule_40k.py' -] -model = dict( - decode_head=dict(align_corners=True), 
- auxiliary_head=dict(align_corners=True), - test_cfg=dict(mode='slide', crop_size=(769, 769), stride=(513, 513))) diff --git a/spaces/HaloMaster/chinesesummary/fengshen/examples/FastDemo/YuyuanQA.py b/spaces/HaloMaster/chinesesummary/fengshen/examples/FastDemo/YuyuanQA.py deleted file mode 100644 index fed2d19bc61e0735f3868e1a30a532bd19fbb4b0..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/examples/FastDemo/YuyuanQA.py +++ /dev/null @@ -1,71 +0,0 @@ -import requests -import langid -import streamlit as st -from translate import baiduTranslatorMedical -from translate import baiduTranslator - -langid.set_languages(['en', 'zh']) -lang_dic = {'zh': 'en', 'en': 'zh'} - -st.set_page_config( - page_title="余元医疗问答", - page_icon=":shark:", - # layout="wide", - initial_sidebar_state="expanded", - menu_items={ - 'Get Help': 'https://www.extremelycoolapp.com/help', - 'Report a bug': "https://www.extremelycoolapp.com/bug", - 'About': "# This is a header. This is an *extremely* cool app!" - } -) -st.title('Demo for MedicalQA') - - -st.sidebar.header("参数配置") -sbform = st.sidebar.form("固定参数设置") -n_sample = sbform.slider("设置返回条数", min_value=1, max_value=10, value=3) -text_length = sbform.slider('生成长度:', min_value=32, max_value=512, value=64, step=32) -text_level = sbform.slider('文本多样性:', min_value=0.1, max_value=1.0, value=0.9, step=0.1) -model_id = sbform.number_input('选择模型号:', min_value=0, max_value=13, value=13, step=1) -trans = sbform.selectbox('选择翻译内核', ['百度通用', '医疗生物']) -sbform.form_submit_button("配置") - - -form = st.form("参数设置") -input_text = form.text_input('请输入你的问题:', value='', placeholder='例如:糖尿病的症状有哪些?') -if trans == '百度通用': - translator = 'baidu_common' -else: - translator = 'baidu' -if input_text: - lang = langid.classify(input_text)[0] - if translator == 'baidu': - st.write('**你的问题是:**', baiduTranslatorMedical(input_text, src=lang, dest=lang_dic[lang]).text) - else: - st.write('**你的问题是:**', baiduTranslator(input_text, src=lang, dest=lang_dic[lang]).text) - -form.form_submit_button("提交") - -# @st.cache(suppress_st_warning=True) - - -def generate_qa(input_text, n_sample, model_id='7', length=64, translator='baidu', level=0.7): - # st.write('调用了generate函数') - URL = 'http://192.168.190.63:6605/qa' - data = {"text": input_text, "n_sample": n_sample, "model_id": model_id, - "length": length, 'translator': translator, 'level': level} - r = requests.get(URL, params=data) - return r.text -# my_bar = st.progress(80) - - -with st.spinner('老夫正在思考中🤔...'): - if input_text: - results = generate_qa(input_text, n_sample, model_id=str(model_id), - translator=translator, length=text_length, level=text_level) - for idx, item in enumerate(eval(results), start=1): - st.markdown(f""" - **候选回答「{idx}」:**\n - """) - st.info('中文:%s' % item['fy_next_sentence']) - st.info('英文:%s' % item['next_sentence']) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/benchmark/dummy_model.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/benchmark/dummy_model.py deleted file mode 100644 index ff26e4fe655d8e8d7f9942c4bd3df7cd267405fb..0000000000000000000000000000000000000000 --- 
a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/benchmark/dummy_model.py +++ /dev/null @@ -1,96 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch.nn as nn -import torch.nn.functional as F -from fairseq.data import Dictionary -from fairseq.models import ( - FairseqDecoder, - FairseqLanguageModel, - register_model, - register_model_architecture, -) - - -@register_model("dummy_model") -class DummyModel(FairseqLanguageModel): - def __init__(self, args, encoder): - super().__init__(encoder) - self.args = args - - @staticmethod - def add_args(parser): - parser.add_argument("--num-layers", type=int, default=24) - parser.add_argument("--embed-dim", type=int, default=1024) - - @classmethod - def build_model(cls, args, task): - encoder = DummyEncoder( - num_embed=len(task.target_dictionary), - embed_dim=args.embed_dim, - num_layers=args.num_layers, - ) - return cls(args, encoder) - - def forward(self, src_tokens, masked_tokens=None, **kwargs): - return self.decoder(src_tokens, masked_tokens=masked_tokens) - - -class DummyEncoder(FairseqDecoder): - def __init__(self, num_embed=50000, embed_dim=1024, num_layers=24): - super().__init__(Dictionary()) - self.embed = nn.Embedding( - num_embeddings=num_embed, embedding_dim=embed_dim, padding_idx=0 - ) - self.layers_a = nn.ModuleList( - [ - nn.Sequential( - nn.LayerNorm(embed_dim), - nn.Linear(embed_dim, 3 * embed_dim), # q, k, v input projection - nn.Linear(3 * embed_dim, embed_dim), # skip self-attention - nn.Linear(embed_dim, embed_dim), # output projection - nn.Dropout(), - ) - for i in range(num_layers) - ] - ) - self.layers_b = nn.ModuleList( - [ - nn.Sequential( - nn.LayerNorm(embed_dim), - nn.Linear(embed_dim, 4 * embed_dim), # FFN - nn.ReLU(), - nn.Linear(4 * embed_dim, embed_dim), # FFN - nn.Dropout(0.1), - ) - for i in range(num_layers) - ] - ) - self.out_proj = nn.Linear(embed_dim, num_embed) - - def forward(self, tokens, masked_tokens=None): - x = self.embed(tokens) - for layer_a, layer_b in zip(self.layers_a, self.layers_b): - x = x + layer_a(x) - x = x + layer_b(x) - x = self.out_proj(x) - if masked_tokens is not None: - x = x[masked_tokens] - return (x,) - - def max_positions(self): - return 1024 - - def get_normalized_probs(self, net_output, log_probs, sample=None): - logits = net_output[0].float() - if log_probs: - return F.log_softmax(logits, dim=-1) - else: - return F.softmax(logits, dim=-1) - - -@register_model_architecture("dummy_model", "dummy_model") -def base_architecture(args): - pass diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/nat/nat_crf_transformer.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/nat/nat_crf_transformer.py deleted file mode 100644 index d4b3cd931ceb077eb30db73df1d5d6cd714a86c2..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/models/nat/nat_crf_transformer.py +++ /dev/null @@ -1,121 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- - -from fairseq.models import register_model, register_model_architecture -from fairseq.models.nat import NATransformerModel, base_architecture -from fairseq.modules import DynamicCRF - - -@register_model("nacrf_transformer") -class NACRFTransformerModel(NATransformerModel): - def __init__(self, args, encoder, decoder): - super().__init__(args, encoder, decoder) - self.crf_layer = DynamicCRF( - num_embedding=len(self.tgt_dict), - low_rank=args.crf_lowrank_approx, - beam_size=args.crf_beam_approx, - ) - - @property - def allow_ensemble(self): - return False - - @staticmethod - def add_args(parser): - NATransformerModel.add_args(parser) - parser.add_argument( - "--crf-lowrank-approx", - type=int, - help="the dimension of low-rank approximation of transition", - ) - parser.add_argument( - "--crf-beam-approx", - type=int, - help="the beam size for apporixmating the normalizing factor", - ) - parser.add_argument( - "--word-ins-loss-factor", - type=float, - help="weights on NAT loss used to co-training with CRF loss.", - ) - - def forward( - self, src_tokens, src_lengths, prev_output_tokens, tgt_tokens, **kwargs - ): - # encoding - encoder_out = self.encoder(src_tokens, src_lengths=src_lengths, **kwargs) - - # length prediction - length_out = self.decoder.forward_length( - normalize=False, encoder_out=encoder_out - ) - length_tgt = self.decoder.forward_length_prediction( - length_out, encoder_out, tgt_tokens - ) - - # decoding - word_ins_out = self.decoder( - normalize=False, - prev_output_tokens=prev_output_tokens, - encoder_out=encoder_out, - ) - word_ins_tgt, word_ins_mask = tgt_tokens, tgt_tokens.ne(self.pad) - - # compute the log-likelihood of CRF - crf_nll = -self.crf_layer(word_ins_out, word_ins_tgt, word_ins_mask) - crf_nll = (crf_nll / word_ins_mask.type_as(crf_nll).sum(-1)).mean() - - return { - "word_ins": { - "out": word_ins_out, - "tgt": word_ins_tgt, - "mask": word_ins_mask, - "ls": self.args.label_smoothing, - "nll_loss": True, - "factor": self.args.word_ins_loss_factor, - }, - "word_crf": {"loss": crf_nll}, - "length": { - "out": length_out, - "tgt": length_tgt, - "factor": self.decoder.length_loss_factor, - }, - } - - def forward_decoder(self, decoder_out, encoder_out, decoding_format=None, **kwargs): - output_tokens = decoder_out.output_tokens - output_scores = decoder_out.output_scores - history = decoder_out.history - - # execute the decoder and get emission scores - output_masks = output_tokens.ne(self.pad) - word_ins_out = self.decoder( - normalize=False, prev_output_tokens=output_tokens, encoder_out=encoder_out - ) - - # run viterbi decoding through CRF - _scores, _tokens = self.crf_layer.forward_decoder(word_ins_out, output_masks) - output_tokens.masked_scatter_(output_masks, _tokens[output_masks]) - output_scores.masked_scatter_(output_masks, _scores[output_masks]) - if history is not None: - history.append(output_tokens.clone()) - - return decoder_out._replace( - output_tokens=output_tokens, - output_scores=output_scores, - attn=None, - history=history, - ) - - -@register_model_architecture("nacrf_transformer", "nacrf_transformer") -def nacrf_base_architecture(args): - args.crf_lowrank_approx = getattr(args, "crf_lowrank_approx", 32) - args.crf_beam_approx = getattr(args, "crf_beam_approx", 64) - args.word_ins_loss_factor = getattr(args, "word_ins_loss_factor", 0.5) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", True) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", True) - base_architecture(args) diff --git 
a/spaces/Hexamind/GDOC/src/tools/list_tool.py b/spaces/Hexamind/GDOC/src/tools/list_tool.py deleted file mode 100644 index 98b4fc59005ac97c1c628e97540ec54d13306a79..0000000000000000000000000000000000000000 --- a/spaces/Hexamind/GDOC/src/tools/list_tool.py +++ /dev/null @@ -1,13 +0,0 @@ -def keep_last_occurrences(lst): - last_occurrences = {} - result = [] - - for index, string in lst: - last_occurrences[index] = string - - for index, string in lst: - if last_occurrences[index] == string: - result.append((index, string)) - last_occurrences[index] = None - - return result \ No newline at end of file diff --git a/spaces/HuggingFaceH4/open_llm_leaderboard/src/get_model_info/hardocded_metadata/types.py b/spaces/HuggingFaceH4/open_llm_leaderboard/src/get_model_info/hardocded_metadata/types.py deleted file mode 100644 index 75eeafe901ef44c68dbaa96207e52eed490d080d..0000000000000000000000000000000000000000 --- a/spaces/HuggingFaceH4/open_llm_leaderboard/src/get_model_info/hardocded_metadata/types.py +++ /dev/null @@ -1,555 +0,0 @@ -from dataclasses import dataclass -from enum import Enum -from typing import Dict - - -@dataclass -class ModelInfo: - name: str - symbol: str # emoji - - -class ModelType(Enum): - PT = ModelInfo(name="pretrained", symbol="🟢") - FT = ModelInfo(name="fine-tuned", symbol="🔶") - IFT = ModelInfo(name="instruction-tuned", symbol="⭕") - RL = ModelInfo(name="RL-tuned", symbol="🟦") - Unknown = ModelInfo(name="", symbol="?") - - def to_str(self, separator=" "): - return f"{self.value.symbol}{separator}{self.value.name}" - - -MODEL_TYPE_METADATA: Dict[str, ModelType] = { - "tiiuae/falcon-180B": ModelType.PT, - "tiiuae/falcon-180B-chat": ModelType.RL, - "microsoft/phi-1_5": ModelType.PT, - "Qwen/Qwen-7B": ModelType.PT, - "Qwen/Qwen-7B-Chat": ModelType.RL, - "notstoic/PygmalionCoT-7b": ModelType.IFT, - "aisquared/dlite-v1-355m": ModelType.IFT, - "aisquared/dlite-v1-1_5b": ModelType.IFT, - "aisquared/dlite-v1-774m": ModelType.IFT, - "aisquared/dlite-v1-124m": ModelType.IFT, - "aisquared/chopt-2_7b": ModelType.IFT, - "aisquared/dlite-v2-124m": ModelType.IFT, - "aisquared/dlite-v2-774m": ModelType.IFT, - "aisquared/dlite-v2-1_5b": ModelType.IFT, - "aisquared/chopt-1_3b": ModelType.IFT, - "aisquared/dlite-v2-355m": ModelType.IFT, - "augtoma/qCammel-13": ModelType.IFT, - "Aspik101/Llama-2-7b-hf-instruct-pl-lora_unload": ModelType.IFT, - "Aspik101/vicuna-7b-v1.3-instruct-pl-lora_unload": ModelType.IFT, - "TheBloke/alpaca-lora-65B-HF": ModelType.FT, - "TheBloke/tulu-7B-fp16": ModelType.IFT, - "TheBloke/guanaco-7B-HF": ModelType.FT, - "TheBloke/koala-7B-HF": ModelType.FT, - "TheBloke/wizardLM-7B-HF": ModelType.IFT, - "TheBloke/airoboros-13B-HF": ModelType.IFT, - "TheBloke/koala-13B-HF": ModelType.FT, - "TheBloke/Wizard-Vicuna-7B-Uncensored-HF": ModelType.FT, - "TheBloke/dromedary-65b-lora-HF": ModelType.IFT, - "TheBloke/wizardLM-13B-1.0-fp16": ModelType.IFT, - "TheBloke/WizardLM-13B-V1-1-SuperHOT-8K-fp16": ModelType.FT, - "TheBloke/Wizard-Vicuna-30B-Uncensored-fp16": ModelType.FT, - "TheBloke/wizard-vicuna-13B-HF": ModelType.IFT, - "TheBloke/UltraLM-13B-fp16": ModelType.IFT, - "TheBloke/OpenAssistant-FT-7-Llama-30B-HF": ModelType.FT, - "TheBloke/vicuna-13B-1.1-HF": ModelType.IFT, - "TheBloke/guanaco-13B-HF": ModelType.FT, - "TheBloke/guanaco-65B-HF": ModelType.FT, - "TheBloke/airoboros-7b-gpt4-fp16": ModelType.IFT, - "TheBloke/llama-30b-supercot-SuperHOT-8K-fp16": ModelType.IFT, - "TheBloke/Llama-2-13B-fp16": ModelType.PT, - "TheBloke/llama-2-70b-Guanaco-QLoRA-fp16": ModelType.FT, - 
"TheBloke/landmark-attention-llama7b-fp16": ModelType.IFT, - "TheBloke/Planner-7B-fp16": ModelType.IFT, - "TheBloke/Wizard-Vicuna-13B-Uncensored-HF": ModelType.FT, - "TheBloke/gpt4-alpaca-lora-13B-HF": ModelType.IFT, - "TheBloke/gpt4-x-vicuna-13B-HF": ModelType.IFT, - "TheBloke/gpt4-alpaca-lora_mlp-65B-HF": ModelType.IFT, - "TheBloke/tulu-13B-fp16": ModelType.IFT, - "TheBloke/VicUnlocked-alpaca-65B-QLoRA-fp16": ModelType.IFT, - "TheBloke/Llama-2-70B-fp16": ModelType.IFT, - "TheBloke/WizardLM-30B-fp16": ModelType.IFT, - "TheBloke/robin-13B-v2-fp16": ModelType.FT, - "TheBloke/robin-33B-v2-fp16": ModelType.FT, - "TheBloke/Vicuna-13B-CoT-fp16": ModelType.IFT, - "TheBloke/Vicuna-33B-1-3-SuperHOT-8K-fp16": ModelType.IFT, - "TheBloke/Wizard-Vicuna-30B-Superhot-8K-fp16": ModelType.FT, - "TheBloke/Nous-Hermes-13B-SuperHOT-8K-fp16": ModelType.IFT, - "TheBloke/GPlatty-30B-SuperHOT-8K-fp16": ModelType.FT, - "TheBloke/CAMEL-33B-Combined-Data-SuperHOT-8K-fp16": ModelType.IFT, - "TheBloke/Chinese-Alpaca-33B-SuperHOT-8K-fp16": ModelType.IFT, - "jphme/orca_mini_v2_ger_7b": ModelType.IFT, - "Ejafa/vicuna_7B_vanilla_1.1": ModelType.FT, - "kevinpro/Vicuna-13B-CoT": ModelType.IFT, - "AlekseyKorshuk/pygmalion-6b-vicuna-chatml": ModelType.FT, - "AlekseyKorshuk/chatml-pyg-v1": ModelType.FT, - "concedo/Vicuzard-30B-Uncensored": ModelType.FT, - "concedo/OPT-19M-ChatSalad": ModelType.FT, - "concedo/Pythia-70M-ChatSalad": ModelType.FT, - "digitous/13B-HyperMantis": ModelType.IFT, - "digitous/Adventien-GPTJ": ModelType.FT, - "digitous/Alpacino13b": ModelType.IFT, - "digitous/GPT-R": ModelType.IFT, - "digitous/Javelin-R": ModelType.IFT, - "digitous/Javalion-GPTJ": ModelType.IFT, - "digitous/Javalion-R": ModelType.IFT, - "digitous/Skegma-GPTJ": ModelType.FT, - "digitous/Alpacino30b": ModelType.IFT, - "digitous/Janin-GPTJ": ModelType.FT, - "digitous/Janin-R": ModelType.FT, - "digitous/Javelin-GPTJ": ModelType.FT, - "SaylorTwift/gpt2_test": ModelType.PT, - "anton-l/gpt-j-tiny-random": ModelType.FT, - "Andron00e/YetAnother_Open-Llama-3B-LoRA-OpenOrca": ModelType.FT, - "Lazycuber/pyg-instruct-wizardlm": ModelType.FT, - "Lazycuber/Janemalion-6B": ModelType.FT, - "IDEA-CCNL/Ziya-LLaMA-13B-Pretrain-v1": ModelType.FT, - "IDEA-CCNL/Ziya-LLaMA-13B-v1": ModelType.IFT, - "dsvv-cair/alpaca-cleaned-llama-30b-bf16": ModelType.FT, - "gpt2-medium": ModelType.PT, - "camel-ai/CAMEL-13B-Combined-Data": ModelType.IFT, - "camel-ai/CAMEL-13B-Role-Playing-Data": ModelType.FT, - "camel-ai/CAMEL-33B-Combined-Data": ModelType.IFT, - "PygmalionAI/pygmalion-6b": ModelType.FT, - "PygmalionAI/metharme-1.3b": ModelType.IFT, - "PygmalionAI/pygmalion-1.3b": ModelType.FT, - "PygmalionAI/pygmalion-350m": ModelType.FT, - "PygmalionAI/pygmalion-2.7b": ModelType.FT, - "medalpaca/medalpaca-7b": ModelType.FT, - "lilloukas/Platypus-30B": ModelType.IFT, - "lilloukas/GPlatty-30B": ModelType.FT, - "mncai/chatdoctor": ModelType.FT, - "chaoyi-wu/MedLLaMA_13B": ModelType.FT, - "LoupGarou/WizardCoder-Guanaco-15B-V1.0": ModelType.IFT, - "LoupGarou/WizardCoder-Guanaco-15B-V1.1": ModelType.FT, - "hakurei/instruct-12b": ModelType.IFT, - "hakurei/lotus-12B": ModelType.FT, - "shibing624/chinese-llama-plus-13b-hf": ModelType.IFT, - "shibing624/chinese-alpaca-plus-7b-hf": ModelType.IFT, - "shibing624/chinese-alpaca-plus-13b-hf": ModelType.IFT, - "mosaicml/mpt-7b-instruct": ModelType.IFT, - "mosaicml/mpt-30b-chat": ModelType.IFT, - "mosaicml/mpt-7b-storywriter": ModelType.FT, - "mosaicml/mpt-30b-instruct": ModelType.IFT, - "mosaicml/mpt-7b-chat": ModelType.IFT, - 
"mosaicml/mpt-30b": ModelType.PT, - "Corianas/111m": ModelType.IFT, - "Corianas/Quokka_1.3b": ModelType.IFT, - "Corianas/256_5epoch": ModelType.FT, - "Corianas/Quokka_256m": ModelType.IFT, - "Corianas/Quokka_590m": ModelType.IFT, - "Corianas/gpt-j-6B-Dolly": ModelType.FT, - "Corianas/Quokka_2.7b": ModelType.IFT, - "cyberagent/open-calm-7b": ModelType.FT, - "Aspik101/Nous-Hermes-13b-pl-lora_unload": ModelType.IFT, - "THUDM/chatglm2-6b": ModelType.IFT, - "MetaIX/GPT4-X-Alpasta-30b": ModelType.IFT, - "NYTK/PULI-GPTrio": ModelType.PT, - "EleutherAI/pythia-1.3b": ModelType.PT, - "EleutherAI/pythia-2.8b-deduped": ModelType.PT, - "EleutherAI/gpt-neo-125m": ModelType.PT, - "EleutherAI/pythia-160m": ModelType.PT, - "EleutherAI/gpt-neo-2.7B": ModelType.PT, - "EleutherAI/pythia-1b-deduped": ModelType.PT, - "EleutherAI/pythia-6.7b": ModelType.PT, - "EleutherAI/pythia-70m-deduped": ModelType.PT, - "EleutherAI/gpt-neox-20b": ModelType.PT, - "EleutherAI/pythia-1.4b-deduped": ModelType.PT, - "EleutherAI/pythia-2.7b": ModelType.PT, - "EleutherAI/pythia-6.9b-deduped": ModelType.PT, - "EleutherAI/pythia-70m": ModelType.PT, - "EleutherAI/gpt-j-6b": ModelType.PT, - "EleutherAI/pythia-12b-deduped": ModelType.PT, - "EleutherAI/gpt-neo-1.3B": ModelType.PT, - "EleutherAI/pythia-410m-deduped": ModelType.PT, - "EleutherAI/pythia-160m-deduped": ModelType.PT, - "EleutherAI/polyglot-ko-12.8b": ModelType.PT, - "EleutherAI/pythia-12b": ModelType.PT, - "roneneldan/TinyStories-33M": ModelType.PT, - "roneneldan/TinyStories-28M": ModelType.PT, - "roneneldan/TinyStories-1M": ModelType.PT, - "roneneldan/TinyStories-8M": ModelType.PT, - "roneneldan/TinyStories-3M": ModelType.PT, - "jerryjalapeno/nart-100k-7b": ModelType.FT, - "lmsys/vicuna-13b-v1.3": ModelType.IFT, - "lmsys/vicuna-7b-v1.3": ModelType.IFT, - "lmsys/vicuna-13b-v1.1": ModelType.IFT, - "lmsys/vicuna-13b-delta-v1.1": ModelType.IFT, - "lmsys/vicuna-7b-delta-v1.1": ModelType.IFT, - "abhiramtirumala/DialoGPT-sarcastic-medium": ModelType.FT, - "haonan-li/bactrian-x-llama-13b-merged": ModelType.IFT, - "Gryphe/MythoLogic-13b": ModelType.IFT, - "Gryphe/MythoBoros-13b": ModelType.IFT, - "pillowtalks-ai/delta13b": ModelType.FT, - "wannaphong/openthaigpt-0.1.0-beta-full-model_for_open_llm_leaderboard": ModelType.FT, - "bigscience/bloom-7b1": ModelType.PT, - "bigcode/tiny_starcoder_py": ModelType.PT, - "bigcode/starcoderplus": ModelType.FT, - "bigcode/gpt_bigcode-santacoder": ModelType.PT, - "bigcode/starcoder": ModelType.PT, - "Open-Orca/OpenOrca-Preview1-13B": ModelType.IFT, - "microsoft/DialoGPT-large": ModelType.FT, - "microsoft/DialoGPT-small": ModelType.FT, - "microsoft/DialoGPT-medium": ModelType.FT, - "microsoft/CodeGPT-small-py": ModelType.FT, - "Tincando/fiction_story_generator": ModelType.FT, - "Pirr/pythia-13b-deduped-green_devil": ModelType.FT, - "Aeala/GPT4-x-AlpacaDente2-30b": ModelType.FT, - "Aeala/GPT4-x-AlpacaDente-30b": ModelType.FT, - "Aeala/GPT4-x-Alpasta-13b": ModelType.FT, - "Aeala/VicUnlocked-alpaca-30b": ModelType.IFT, - "Tap-M/Luna-AI-Llama2-Uncensored": ModelType.FT, - "illuin/test-custom-llama": ModelType.FT, - "dvruette/oasst-llama-13b-2-epochs": ModelType.FT, - "dvruette/oasst-gpt-neox-20b-1000-steps": ModelType.FT, - "dvruette/llama-13b-pretrained-dropout": ModelType.PT, - "dvruette/llama-13b-pretrained": ModelType.PT, - "dvruette/llama-13b-pretrained-sft-epoch-1": ModelType.FT, - "dvruette/llama-13b-pretrained-sft-do2": ModelType.FT, - "dvruette/oasst-gpt-neox-20b-3000-steps": ModelType.FT, - "dvruette/oasst-pythia-12b-pretrained-sft": 
ModelType.FT, - "dvruette/oasst-pythia-6.9b-4000-steps": ModelType.FT, - "dvruette/gpt-neox-20b-full-precision": ModelType.FT, - "dvruette/oasst-llama-13b-1000-steps": ModelType.FT, - "openlm-research/open_llama_7b_700bt_preview": ModelType.PT, - "openlm-research/open_llama_7b": ModelType.PT, - "openlm-research/open_llama_7b_v2": ModelType.PT, - "openlm-research/open_llama_3b": ModelType.PT, - "openlm-research/open_llama_13b": ModelType.PT, - "openlm-research/open_llama_3b_v2": ModelType.PT, - "PocketDoc/Dans-PileOfSets-Mk1-llama-13b-merged": ModelType.IFT, - "GeorgiaTechResearchInstitute/galpaca-30b": ModelType.IFT, - "GeorgiaTechResearchInstitute/starcoder-gpteacher-code-instruct": ModelType.IFT, - "databricks/dolly-v2-7b": ModelType.IFT, - "databricks/dolly-v2-3b": ModelType.IFT, - "databricks/dolly-v2-12b": ModelType.IFT, - "Rachneet/gpt2-xl-alpaca": ModelType.FT, - "Locutusque/gpt2-conversational-or-qa": ModelType.FT, - "psyche/kogpt": ModelType.FT, - "NbAiLab/nb-gpt-j-6B-alpaca": ModelType.IFT, - "Mikael110/llama-2-7b-guanaco-fp16": ModelType.FT, - "Mikael110/llama-2-13b-guanaco-fp16": ModelType.FT, - "Fredithefish/CrimsonPajama": ModelType.IFT, - "Fredithefish/RedPajama-INCITE-Chat-3B-ShareGPT-11K": ModelType.FT, - "Fredithefish/ScarletPajama-3B-HF": ModelType.FT, - "Fredithefish/RedPajama-INCITE-Chat-3B-Instruction-Tuning-with-GPT-4": ModelType.IFT, - "acrastt/RedPajama-INCITE-Chat-Instruct-3B-V1": ModelType.IFT, - "eachadea/vicuna-13b-1.1": ModelType.FT, - "eachadea/vicuna-7b-1.1": ModelType.FT, - "eachadea/vicuna-13b": ModelType.FT, - "openaccess-ai-collective/wizard-mega-13b": ModelType.IFT, - "openaccess-ai-collective/manticore-13b": ModelType.IFT, - "openaccess-ai-collective/manticore-30b-chat-pyg-alpha": ModelType.IFT, - "openaccess-ai-collective/minotaur-13b": ModelType.IFT, - "openaccess-ai-collective/minotaur-13b-fixed": ModelType.IFT, - "openaccess-ai-collective/hippogriff-30b-chat": ModelType.IFT, - "openaccess-ai-collective/manticore-13b-chat-pyg": ModelType.IFT, - "pythainlp/wangchanglm-7.5B-sft-enth": ModelType.IFT, - "pythainlp/wangchanglm-7.5B-sft-en-sharded": ModelType.IFT, - "euclaise/gpt-neox-122m-minipile-digits": ModelType.FT, - "stabilityai/StableBeluga1-Delta": ModelType.IFT, - "stabilityai/stablelm-tuned-alpha-7b": ModelType.IFT, - "stabilityai/StableBeluga2": ModelType.IFT, - "stabilityai/StableBeluga-13B": ModelType.IFT, - "stabilityai/StableBeluga-7B": ModelType.IFT, - "stabilityai/stablelm-base-alpha-7b": ModelType.PT, - "stabilityai/stablelm-base-alpha-3b": ModelType.PT, - "stabilityai/stablelm-tuned-alpha-3b": ModelType.IFT, - "alibidaran/medical_transcription_generator": ModelType.FT, - "CalderaAI/30B-Lazarus": ModelType.IFT, - "CalderaAI/13B-BlueMethod": ModelType.IFT, - "CalderaAI/13B-Ouroboros": ModelType.IFT, - "KoboldAI/OPT-13B-Erebus": ModelType.FT, - "KoboldAI/GPT-J-6B-Janeway": ModelType.FT, - "KoboldAI/GPT-J-6B-Shinen": ModelType.FT, - "KoboldAI/fairseq-dense-2.7B": ModelType.PT, - "KoboldAI/OPT-6B-nerys-v2": ModelType.FT, - "KoboldAI/GPT-NeoX-20B-Skein": ModelType.FT, - "KoboldAI/PPO_Pygway-6b-Mix": ModelType.FT, - "KoboldAI/fairseq-dense-6.7B": ModelType.PT, - "KoboldAI/fairseq-dense-125M": ModelType.PT, - "KoboldAI/OPT-13B-Nerybus-Mix": ModelType.FT, - "KoboldAI/OPT-2.7B-Erebus": ModelType.FT, - "KoboldAI/OPT-350M-Nerys-v2": ModelType.FT, - "KoboldAI/OPT-2.7B-Nerys-v2": ModelType.FT, - "KoboldAI/OPT-2.7B-Nerybus-Mix": ModelType.FT, - "KoboldAI/OPT-13B-Nerys-v2": ModelType.FT, - "KoboldAI/GPT-NeoX-20B-Erebus": ModelType.FT, - 
"KoboldAI/OPT-6.7B-Erebus": ModelType.FT, - "KoboldAI/fairseq-dense-355M": ModelType.PT, - "KoboldAI/OPT-6.7B-Nerybus-Mix": ModelType.FT, - "KoboldAI/GPT-J-6B-Adventure": ModelType.FT, - "KoboldAI/OPT-350M-Erebus": ModelType.FT, - "KoboldAI/GPT-J-6B-Skein": ModelType.FT, - "KoboldAI/OPT-30B-Erebus": ModelType.FT, - "klosax/pythia-160m-deduped-step92k-193bt": ModelType.PT, - "klosax/open_llama_3b_350bt_preview": ModelType.PT, - "klosax/openllama-3b-350bt": ModelType.PT, - "klosax/pythia-70m-deduped-step44k-92bt": ModelType.PT, - "klosax/open_llama_13b_600bt_preview": ModelType.PT, - "klosax/open_llama_7b_400bt_preview": ModelType.PT, - "kfkas/Llama-2-ko-7b-Chat": ModelType.IFT, - "WeOpenML/Alpaca-7B-v1": ModelType.IFT, - "WeOpenML/PandaLM-Alpaca-7B-v1": ModelType.IFT, - "TFLai/gpt2-turkish-uncased": ModelType.FT, - "ehartford/WizardLM-13B-Uncensored": ModelType.IFT, - "ehartford/dolphin-llama-13b": ModelType.IFT, - "ehartford/Wizard-Vicuna-30B-Uncensored": ModelType.FT, - "ehartford/WizardLM-30B-Uncensored": ModelType.IFT, - "ehartford/Wizard-Vicuna-13B-Uncensored": ModelType.FT, - "ehartford/WizardLM-7B-Uncensored": ModelType.IFT, - "ehartford/based-30b": ModelType.FT, - "ehartford/Wizard-Vicuna-7B-Uncensored": ModelType.FT, - "wahaha1987/llama_7b_sharegpt94k_fastchat": ModelType.FT, - "wahaha1987/llama_13b_sharegpt94k_fastchat": ModelType.FT, - "OpenAssistant/oasst-sft-1-pythia-12b": ModelType.FT, - "OpenAssistant/stablelm-7b-sft-v7-epoch-3": ModelType.IFT, - "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5": ModelType.FT, - "OpenAssistant/pythia-12b-sft-v8-2.5k-steps": ModelType.IFT, - "OpenAssistant/pythia-12b-sft-v8-7k-steps": ModelType.IFT, - "OpenAssistant/pythia-12b-pre-v8-12.5k-steps": ModelType.IFT, - "OpenAssistant/llama2-13b-orca-8k-3319": ModelType.IFT, - "junelee/wizard-vicuna-13b": ModelType.FT, - "BreadAi/gpt-YA-1-1_160M": ModelType.PT, - "BreadAi/MuseCan": ModelType.PT, - "BreadAi/MusePy-1-2": ModelType.PT, - "BreadAi/DiscordPy": ModelType.PT, - "BreadAi/PM_modelV2": ModelType.PT, - "BreadAi/gpt-Youtube": ModelType.PT, - "BreadAi/StoryPy": ModelType.FT, - "julianweng/Llama-2-7b-chat-orcah": ModelType.FT, - "AGI-inc/lora_moe_7b_baseline": ModelType.FT, - "AGI-inc/lora_moe_7b": ModelType.FT, - "togethercomputer/GPT-NeoXT-Chat-Base-20B": ModelType.IFT, - "togethercomputer/RedPajama-INCITE-Chat-7B-v0.1": ModelType.IFT, - "togethercomputer/RedPajama-INCITE-Instruct-7B-v0.1": ModelType.IFT, - "togethercomputer/RedPajama-INCITE-7B-Base": ModelType.PT, - "togethercomputer/RedPajama-INCITE-7B-Instruct": ModelType.IFT, - "togethercomputer/RedPajama-INCITE-Base-3B-v1": ModelType.PT, - "togethercomputer/Pythia-Chat-Base-7B": ModelType.IFT, - "togethercomputer/RedPajama-INCITE-Base-7B-v0.1": ModelType.PT, - "togethercomputer/GPT-JT-6B-v1": ModelType.IFT, - "togethercomputer/GPT-JT-6B-v0": ModelType.IFT, - "togethercomputer/RedPajama-INCITE-Chat-3B-v1": ModelType.IFT, - "togethercomputer/RedPajama-INCITE-7B-Chat": ModelType.IFT, - "togethercomputer/RedPajama-INCITE-Instruct-3B-v1": ModelType.IFT, - "Writer/camel-5b-hf": ModelType.IFT, - "Writer/palmyra-base": ModelType.PT, - "MBZUAI/LaMini-GPT-1.5B": ModelType.IFT, - "MBZUAI/lamini-cerebras-111m": ModelType.IFT, - "MBZUAI/lamini-neo-1.3b": ModelType.IFT, - "MBZUAI/lamini-cerebras-1.3b": ModelType.IFT, - "MBZUAI/lamini-cerebras-256m": ModelType.IFT, - "MBZUAI/LaMini-GPT-124M": ModelType.IFT, - "MBZUAI/lamini-neo-125m": ModelType.IFT, - "TehVenom/DiffMerge-DollyGPT-Pygmalion": ModelType.FT, - "TehVenom/PPO_Shygmalion-6b": ModelType.FT, - 
"TehVenom/Dolly_Shygmalion-6b-Dev_V8P2": ModelType.FT, - "TehVenom/Pygmalion_AlpacaLora-7b": ModelType.FT, - "TehVenom/PPO_Pygway-V8p4_Dev-6b": ModelType.FT, - "TehVenom/Dolly_Malion-6b": ModelType.FT, - "TehVenom/PPO_Shygmalion-V8p4_Dev-6b": ModelType.FT, - "TehVenom/ChanMalion": ModelType.FT, - "TehVenom/GPT-J-Pyg_PPO-6B": ModelType.IFT, - "TehVenom/Pygmalion-13b-Merged": ModelType.FT, - "TehVenom/Metharme-13b-Merged": ModelType.IFT, - "TehVenom/Dolly_Shygmalion-6b": ModelType.FT, - "TehVenom/GPT-J-Pyg_PPO-6B-Dev-V8p4": ModelType.IFT, - "georgesung/llama2_7b_chat_uncensored": ModelType.FT, - "vicgalle/gpt2-alpaca": ModelType.IFT, - "vicgalle/alpaca-7b": ModelType.FT, - "vicgalle/gpt2-alpaca-gpt4": ModelType.IFT, - "facebook/opt-350m": ModelType.PT, - "facebook/opt-125m": ModelType.PT, - "facebook/xglm-4.5B": ModelType.PT, - "facebook/opt-2.7b": ModelType.PT, - "facebook/opt-6.7b": ModelType.PT, - "facebook/galactica-30b": ModelType.PT, - "facebook/opt-13b": ModelType.PT, - "facebook/opt-66b": ModelType.PT, - "facebook/xglm-7.5B": ModelType.PT, - "facebook/xglm-564M": ModelType.PT, - "facebook/opt-30b": ModelType.PT, - "golaxy/gogpt-7b": ModelType.FT, - "golaxy/gogpt2-7b": ModelType.FT, - "golaxy/gogpt-7b-bloom": ModelType.FT, - "golaxy/gogpt-3b-bloom": ModelType.FT, - "psmathur/orca_mini_v2_7b": ModelType.IFT, - "psmathur/orca_mini_7b": ModelType.IFT, - "psmathur/orca_mini_3b": ModelType.IFT, - "psmathur/orca_mini_v2_13b": ModelType.IFT, - "gpt2-xl": ModelType.PT, - "lxe/Cerebras-GPT-2.7B-Alpaca-SP": ModelType.FT, - "Monero/Manticore-13b-Chat-Pyg-Guanaco": ModelType.FT, - "Monero/WizardLM-Uncensored-SuperCOT-StoryTelling-30b": ModelType.IFT, - "Monero/WizardLM-13b-OpenAssistant-Uncensored": ModelType.IFT, - "Monero/WizardLM-30B-Uncensored-Guanaco-SuperCOT-30b": ModelType.IFT, - "jzjiao/opt-1.3b-rlhf": ModelType.FT, - "HuggingFaceH4/starchat-beta": ModelType.IFT, - "KnutJaegersberg/gpt-2-xl-EvolInstruct": ModelType.IFT, - "KnutJaegersberg/megatron-GPT-2-345m-EvolInstruct": ModelType.IFT, - "KnutJaegersberg/galactica-orca-wizardlm-1.3b": ModelType.IFT, - "openchat/openchat_8192": ModelType.IFT, - "openchat/openchat_v2": ModelType.IFT, - "openchat/openchat_v2_w": ModelType.IFT, - "ausboss/llama-13b-supercot": ModelType.IFT, - "ausboss/llama-30b-supercot": ModelType.IFT, - "Neko-Institute-of-Science/metharme-7b": ModelType.IFT, - "Neko-Institute-of-Science/pygmalion-7b": ModelType.FT, - "SebastianSchramm/Cerebras-GPT-111M-instruction": ModelType.IFT, - "victor123/WizardLM-13B-1.0": ModelType.IFT, - "OpenBuddy/openbuddy-openllama-13b-v7-fp16": ModelType.FT, - "OpenBuddy/openbuddy-llama2-13b-v8.1-fp16": ModelType.FT, - "OpenBuddyEA/openbuddy-llama-30b-v7.1-bf16": ModelType.FT, - "baichuan-inc/Baichuan-7B": ModelType.PT, - "tiiuae/falcon-40b-instruct": ModelType.IFT, - "tiiuae/falcon-40b": ModelType.PT, - "tiiuae/falcon-7b": ModelType.PT, - "YeungNLP/firefly-llama-13b": ModelType.FT, - "YeungNLP/firefly-llama-13b-v1.2": ModelType.FT, - "YeungNLP/firefly-llama2-13b": ModelType.FT, - "YeungNLP/firefly-ziya-13b": ModelType.FT, - "shaohang/Sparse0.5_OPT-1.3": ModelType.FT, - "xzuyn/Alpacino-SuperCOT-13B": ModelType.IFT, - "xzuyn/MedicWizard-7B": ModelType.FT, - "xDAN-AI/xDAN_13b_l2_lora": ModelType.FT, - "beomi/KoAlpaca-Polyglot-5.8B": ModelType.FT, - "beomi/llama-2-ko-7b": ModelType.IFT, - "Salesforce/codegen-6B-multi": ModelType.PT, - "Salesforce/codegen-16B-nl": ModelType.PT, - "Salesforce/codegen-6B-nl": ModelType.PT, - "ai-forever/rugpt3large_based_on_gpt2": ModelType.FT, - "gpt2-large": 
ModelType.PT, - "frank098/orca_mini_3b_juniper": ModelType.FT, - "frank098/WizardLM_13B_juniper": ModelType.FT, - "FPHam/Free_Sydney_13b_HF": ModelType.FT, - "huggingface/llama-13b": ModelType.PT, - "huggingface/llama-7b": ModelType.PT, - "huggingface/llama-65b": ModelType.PT, - "huggingface/llama-30b": ModelType.PT, - "Henk717/chronoboros-33B": ModelType.IFT, - "jondurbin/airoboros-13b-gpt4-1.4": ModelType.IFT, - "jondurbin/airoboros-7b": ModelType.IFT, - "jondurbin/airoboros-7b-gpt4": ModelType.IFT, - "jondurbin/airoboros-7b-gpt4-1.1": ModelType.IFT, - "jondurbin/airoboros-7b-gpt4-1.2": ModelType.IFT, - "jondurbin/airoboros-7b-gpt4-1.3": ModelType.IFT, - "jondurbin/airoboros-7b-gpt4-1.4": ModelType.IFT, - "jondurbin/airoboros-l2-7b-gpt4-1.4.1": ModelType.IFT, - "jondurbin/airoboros-l2-13b-gpt4-1.4.1": ModelType.IFT, - "jondurbin/airoboros-l2-70b-gpt4-1.4.1": ModelType.IFT, - "jondurbin/airoboros-13b": ModelType.IFT, - "jondurbin/airoboros-33b-gpt4-1.4": ModelType.IFT, - "jondurbin/airoboros-33b-gpt4-1.2": ModelType.IFT, - "jondurbin/airoboros-65b-gpt4-1.2": ModelType.IFT, - "ariellee/SuperPlatty-30B": ModelType.IFT, - "danielhanchen/open_llama_3b_600bt_preview": ModelType.FT, - "cerebras/Cerebras-GPT-256M": ModelType.PT, - "cerebras/Cerebras-GPT-1.3B": ModelType.PT, - "cerebras/Cerebras-GPT-13B": ModelType.PT, - "cerebras/Cerebras-GPT-2.7B": ModelType.PT, - "cerebras/Cerebras-GPT-111M": ModelType.PT, - "cerebras/Cerebras-GPT-6.7B": ModelType.PT, - "Yhyu13/oasst-rlhf-2-llama-30b-7k-steps-hf": ModelType.RL, - "Yhyu13/llama-30B-hf-openassitant": ModelType.FT, - "NousResearch/Nous-Hermes-Llama2-13b": ModelType.IFT, - "NousResearch/Nous-Hermes-llama-2-7b": ModelType.IFT, - "NousResearch/Redmond-Puffin-13B": ModelType.IFT, - "NousResearch/Nous-Hermes-13b": ModelType.IFT, - "project-baize/baize-v2-7b": ModelType.IFT, - "project-baize/baize-v2-13b": ModelType.IFT, - "LLMs/WizardLM-13B-V1.0": ModelType.FT, - "LLMs/AlpacaGPT4-7B-elina": ModelType.FT, - "wenge-research/yayi-7b": ModelType.FT, - "wenge-research/yayi-7b-llama2": ModelType.FT, - "wenge-research/yayi-13b-llama2": ModelType.FT, - "yhyhy3/open_llama_7b_v2_med_instruct": ModelType.IFT, - "llama-anon/instruct-13b": ModelType.IFT, - "huggingtweets/jerma985": ModelType.FT, - "huggingtweets/gladosystem": ModelType.FT, - "huggingtweets/bladeecity-jerma985": ModelType.FT, - "huggyllama/llama-13b": ModelType.PT, - "huggyllama/llama-65b": ModelType.PT, - "FabbriSimo01/Facebook_opt_1.3b_Quantized": ModelType.PT, - "upstage/Llama-2-70b-instruct": ModelType.IFT, - "upstage/Llama-2-70b-instruct-1024": ModelType.IFT, - "upstage/llama-65b-instruct": ModelType.IFT, - "upstage/llama-30b-instruct-2048": ModelType.IFT, - "upstage/llama-30b-instruct": ModelType.IFT, - "WizardLM/WizardLM-13B-1.0": ModelType.IFT, - "WizardLM/WizardLM-13B-V1.1": ModelType.IFT, - "WizardLM/WizardLM-13B-V1.2": ModelType.IFT, - "WizardLM/WizardLM-30B-V1.0": ModelType.IFT, - "WizardLM/WizardCoder-15B-V1.0": ModelType.IFT, - "gpt2": ModelType.PT, - "keyfan/vicuna-chinese-replication-v1.1": ModelType.IFT, - "nthngdy/pythia-owt2-70m-100k": ModelType.FT, - "nthngdy/pythia-owt2-70m-50k": ModelType.FT, - "quantumaikr/KoreanLM-hf": ModelType.FT, - "quantumaikr/open_llama_7b_hf": ModelType.FT, - "quantumaikr/QuantumLM-70B-hf": ModelType.IFT, - "MayaPH/FinOPT-Lincoln": ModelType.FT, - "MayaPH/FinOPT-Franklin": ModelType.FT, - "MayaPH/GodziLLa-30B": ModelType.IFT, - "MayaPH/GodziLLa-30B-plus": ModelType.IFT, - "MayaPH/FinOPT-Washington": ModelType.FT, - 
"ogimgio/gpt-neo-125m-neurallinguisticpioneers": ModelType.FT, - "layoric/llama-2-13b-code-alpaca": ModelType.FT, - "CobraMamba/mamba-gpt-3b": ModelType.FT, - "CobraMamba/mamba-gpt-3b-v2": ModelType.FT, - "CobraMamba/mamba-gpt-3b-v3": ModelType.FT, - "timdettmers/guanaco-33b-merged": ModelType.FT, - "elinas/chronos-33b": ModelType.IFT, - "heegyu/RedTulu-Uncensored-3B-0719": ModelType.IFT, - "heegyu/WizardVicuna-Uncensored-3B-0719": ModelType.IFT, - "heegyu/WizardVicuna-3B-0719": ModelType.IFT, - "meta-llama/Llama-2-7b-chat-hf": ModelType.RL, - "meta-llama/Llama-2-7b-hf": ModelType.PT, - "meta-llama/Llama-2-13b-chat-hf": ModelType.RL, - "meta-llama/Llama-2-13b-hf": ModelType.PT, - "meta-llama/Llama-2-70b-chat-hf": ModelType.RL, - "meta-llama/Llama-2-70b-hf": ModelType.PT, - "xhyi/PT_GPTNEO350_ATG": ModelType.FT, - "h2oai/h2ogpt-gm-oasst1-en-1024-20b": ModelType.FT, - "h2oai/h2ogpt-gm-oasst1-en-1024-open-llama-7b-preview-400bt": ModelType.FT, - "h2oai/h2ogpt-oig-oasst1-512-6_9b": ModelType.IFT, - "h2oai/h2ogpt-oasst1-512-12b": ModelType.IFT, - "h2oai/h2ogpt-oig-oasst1-256-6_9b": ModelType.IFT, - "h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt": ModelType.FT, - "h2oai/h2ogpt-oasst1-512-20b": ModelType.IFT, - "h2oai/h2ogpt-gm-oasst1-en-2048-open-llama-7b-preview-300bt-v2": ModelType.FT, - "h2oai/h2ogpt-gm-oasst1-en-1024-12b": ModelType.FT, - "h2oai/h2ogpt-gm-oasst1-multilang-1024-20b": ModelType.FT, - "bofenghuang/vigogne-13b-instruct": ModelType.IFT, - "bofenghuang/vigogne-13b-chat": ModelType.FT, - "bofenghuang/vigogne-2-7b-instruct": ModelType.IFT, - "bofenghuang/vigogne-7b-instruct": ModelType.IFT, - "bofenghuang/vigogne-7b-chat": ModelType.FT, - "Vmware/open-llama-7b-v2-open-instruct": ModelType.IFT, - "VMware/open-llama-0.7T-7B-open-instruct-v1.1": ModelType.IFT, - "ewof/koishi-instruct-3b": ModelType.IFT, - "gywy/llama2-13b-chinese-v1": ModelType.FT, - "GOAT-AI/GOAT-7B-Community": ModelType.FT, - "psyche/kollama2-7b": ModelType.FT, - "TheTravellingEngineer/llama2-7b-hf-guanaco": ModelType.FT, - "beaugogh/pythia-1.4b-deduped-sharegpt": ModelType.FT, - "augtoma/qCammel-70-x": ModelType.IFT, - "Lajonbot/Llama-2-7b-chat-hf-instruct-pl-lora_unload": ModelType.IFT, - "anhnv125/pygmalion-6b-roleplay": ModelType.FT, - "64bits/LexPodLM-13B": ModelType.FT, -} - - -def model_type_from_str(type): - if "fine-tuned" in type or "🔶" in type: - return ModelType.FT - if "pretrained" in type or "🟢" in type: - return ModelType.PT - if "RL-tuned" in type or "🟦" in type: - return ModelType.RL - if "instruction-tuned" in type or "⭕" in type: - return ModelType.IFT - return ModelType.Unknown diff --git a/spaces/HugoDzz/spaceship_drift/build/game/index.audio.worklet.js b/spaces/HugoDzz/spaceship_drift/build/game/index.audio.worklet.js deleted file mode 100644 index d9330c735f3da52a20f5e54e0b463ac03b7dff70..0000000000000000000000000000000000000000 --- a/spaces/HugoDzz/spaceship_drift/build/game/index.audio.worklet.js +++ /dev/null @@ -1,211 +0,0 @@ -/**************************************************************************/ -/* audio.worklet.js */ -/**************************************************************************/ -/* This file is part of: */ -/* GODOT ENGINE */ -/* https://godotengine.org */ -/**************************************************************************/ -/* Copyright (c) 2014-present Godot Engine contributors (see AUTHORS.md). */ -/* Copyright (c) 2007-2014 Juan Linietsky, Ariel Manzur. 
*/ -/* */ -/* Permission is hereby granted, free of charge, to any person obtaining */ -/* a copy of this software and associated documentation files (the */ -/* "Software"), to deal in the Software without restriction, including */ -/* without limitation the rights to use, copy, modify, merge, publish, */ -/* distribute, sublicense, and/or sell copies of the Software, and to */ -/* permit persons to whom the Software is furnished to do so, subject to */ -/* the following conditions: */ -/* */ -/* The above copyright notice and this permission notice shall be */ -/* included in all copies or substantial portions of the Software. */ -/* */ -/* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, */ -/* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF */ -/* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. */ -/* IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY */ -/* CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, */ -/* TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE */ -/* SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */ -/**************************************************************************/ - -class RingBuffer { - constructor(p_buffer, p_state, p_threads) { - this.buffer = p_buffer; - this.avail = p_state; - this.threads = p_threads; - this.rpos = 0; - this.wpos = 0; - } - - data_left() { - return this.threads ? Atomics.load(this.avail, 0) : this.avail; - } - - space_left() { - return this.buffer.length - this.data_left(); - } - - read(output) { - const size = this.buffer.length; - let from = 0; - let to_write = output.length; - if (this.rpos + to_write > size) { - const high = size - this.rpos; - output.set(this.buffer.subarray(this.rpos, size)); - from = high; - to_write -= high; - this.rpos = 0; - } - if (to_write) { - output.set(this.buffer.subarray(this.rpos, this.rpos + to_write), from); - } - this.rpos += to_write; - if (this.threads) { - Atomics.add(this.avail, 0, -output.length); - Atomics.notify(this.avail, 0); - } else { - this.avail -= output.length; - } - } - - write(p_buffer) { - const to_write = p_buffer.length; - const mw = this.buffer.length - this.wpos; - if (mw >= to_write) { - this.buffer.set(p_buffer, this.wpos); - this.wpos += to_write; - if (mw === to_write) { - this.wpos = 0; - } - } else { - const high = p_buffer.subarray(0, mw); - const low = p_buffer.subarray(mw); - this.buffer.set(high, this.wpos); - this.buffer.set(low); - this.wpos = low.length; - } - if (this.threads) { - Atomics.add(this.avail, 0, to_write); - Atomics.notify(this.avail, 0); - } else { - this.avail += to_write; - } - } -} - -class GodotProcessor extends AudioWorkletProcessor { - constructor() { - super(); - this.threads = false; - this.running = true; - this.lock = null; - this.notifier = null; - this.output = null; - this.output_buffer = new Float32Array(); - this.input = null; - this.input_buffer = new Float32Array(); - this.port.onmessage = (event) => { - const cmd = event.data['cmd']; - const data = event.data['data']; - this.parse_message(cmd, data); - }; - } - - process_notify() { - if (this.notifier) { - Atomics.add(this.notifier, 0, 1); - Atomics.notify(this.notifier, 0); - } - } - - parse_message(p_cmd, p_data) { - if (p_cmd === 'start' && p_data) { - const state = p_data[0]; - let idx = 0; - this.threads = true; - this.lock = state.subarray(idx, ++idx); - this.notifier = state.subarray(idx, ++idx); - const avail_in = state.subarray(idx, ++idx); - const 
avail_out = state.subarray(idx, ++idx); - this.input = new RingBuffer(p_data[1], avail_in, true); - this.output = new RingBuffer(p_data[2], avail_out, true); - } else if (p_cmd === 'stop') { - this.running = false; - this.output = null; - this.input = null; - } else if (p_cmd === 'start_nothreads') { - this.output = new RingBuffer(p_data[0], p_data[0].length, false); - } else if (p_cmd === 'chunk') { - this.output.write(p_data); - } - } - - static array_has_data(arr) { - return arr.length && arr[0].length && arr[0][0].length; - } - - process(inputs, outputs, parameters) { - if (!this.running) { - return false; // Stop processing. - } - if (this.output === null) { - return true; // Not ready yet, keep processing. - } - const process_input = GodotProcessor.array_has_data(inputs); - if (process_input) { - const input = inputs[0]; - const chunk = input[0].length * input.length; - if (this.input_buffer.length !== chunk) { - this.input_buffer = new Float32Array(chunk); - } - if (!this.threads) { - GodotProcessor.write_input(this.input_buffer, input); - this.port.postMessage({ 'cmd': 'input', 'data': this.input_buffer }); - } else if (this.input.space_left() >= chunk) { - GodotProcessor.write_input(this.input_buffer, input); - this.input.write(this.input_buffer); - } else { - this.port.postMessage('Input buffer is full! Skipping input frame.'); - } - } - const process_output = GodotProcessor.array_has_data(outputs); - if (process_output) { - const output = outputs[0]; - const chunk = output[0].length * output.length; - if (this.output_buffer.length !== chunk) { - this.output_buffer = new Float32Array(chunk); - } - if (this.output.data_left() >= chunk) { - this.output.read(this.output_buffer); - GodotProcessor.write_output(output, this.output_buffer); - if (!this.threads) { - this.port.postMessage({ 'cmd': 'read', 'data': chunk }); - } - } else { - this.port.postMessage('Output buffer has not enough frames! Skipping output frame.'); - } - } - this.process_notify(); - return true; - } - - static write_output(dest, source) { - const channels = dest.length; - for (let ch = 0; ch < channels; ch++) { - for (let sample = 0; sample < dest[ch].length; sample++) { - dest[ch][sample] = source[sample * channels + ch]; - } - } - } - - static write_input(dest, source) { - const channels = source.length; - for (let ch = 0; ch < channels; ch++) { - for (let sample = 0; sample < source[ch].length; sample++) { - dest[sample * channels + ch] = source[ch][sample]; - } - } - } -} - -registerProcessor('godot-processor', GodotProcessor); diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/legacy/masked_lm_dictionary.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/legacy/masked_lm_dictionary.py deleted file mode 100644 index dee88f7a3ed72ea465ea4e8ffe7b1c01ff6f57f1..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/data/legacy/masked_lm_dictionary.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq.data import Dictionary - - -class MaskedLMDictionary(Dictionary): - """ - Dictionary for Masked Language Modelling tasks. This extends Dictionary by - adding the mask symbol. 
-    """
-
-    def __init__(
-        self,
-        pad="<pad>",
-        eos="</s>",
-        unk="<unk>",
-        mask="<mask>",
-    ):
-        super().__init__(pad=pad, eos=eos, unk=unk)
-        self.mask_word = mask
-        self.mask_index = self.add_symbol(mask)
-        self.nspecial = len(self.symbols)
-
-    def mask(self):
-        """Helper to get index of mask symbol"""
-        return self.mask_index
-
-
-class BertDictionary(MaskedLMDictionary):
-    """
-    Dictionary for BERT task. This extends MaskedLMDictionary by adding support
-    for cls and sep symbols.
-    """
-
-    def __init__(
-        self,
-        pad="<pad>",
-        eos="</s>",
-        unk="<unk>",
-        mask="<mask>",
-        cls="<cls>",
-        sep="<sep>",
-    ):
-        super().__init__(pad=pad, eos=eos, unk=unk, mask=mask)
-        self.cls_word = cls
-        self.sep_word = sep
-        self.cls_index = self.add_symbol(cls)
-        self.sep_index = self.add_symbol(sep)
-        self.nspecial = len(self.symbols)
-
-    def cls(self):
-        """Helper to get index of cls symbol"""
-        return self.cls_index
-
-    def sep(self):
-        """Helper to get index of sep symbol"""
-        return self.sep_index
diff --git a/spaces/Iceclear/StableSR/StableSR/basicsr/archs/degradat_arch.py b/spaces/Iceclear/StableSR/StableSR/basicsr/archs/degradat_arch.py
deleted file mode 100644
index ce09ad666a90f175fb6268435073b314df543813..0000000000000000000000000000000000000000
--- a/spaces/Iceclear/StableSR/StableSR/basicsr/archs/degradat_arch.py
+++ /dev/null
@@ -1,90 +0,0 @@
-from torch import nn as nn
-
-from basicsr.archs.arch_util import ResidualBlockNoBN, default_init_weights
-from basicsr.utils.registry import ARCH_REGISTRY
-
-@ARCH_REGISTRY.register()
-class DEResNet(nn.Module):
-    """Degradation Estimator with ResNetNoBN arch. v2.1, no vector anymore
-    As shown in paper 'Towards Flexible Blind JPEG Artifacts Removal',
-    resnet arch works for image quality estimation.
-    Args:
-        num_in_ch (int): channel number of inputs. Default: 3.
-        num_degradation (int): num of degradation the DE should estimate. Default: 2(blur+noise).
-        degradation_embed_size (int): embedding size of each degradation vector.
-        degradation_degree_actv (int): activation function for degradation degree scalar. Default: sigmoid.
-        num_feats (list): channel number of each stage.
-        num_blocks (list): residual block of each stage.
-        downscales (list): downscales of each stage.
- """ - - def __init__(self, - num_in_ch=3, - num_degradation=2, - degradation_degree_actv='sigmoid', - num_feats=(64, 128, 256, 512), - num_blocks=(2, 2, 2, 2), - downscales=(2, 2, 2, 1)): - super(DEResNet, self).__init__() - - assert isinstance(num_feats, list) - assert isinstance(num_blocks, list) - assert isinstance(downscales, list) - assert len(num_feats) == len(num_blocks) and len(num_feats) == len(downscales) - - num_stage = len(num_feats) - - self.conv_first = nn.ModuleList() - for _ in range(num_degradation): - self.conv_first.append(nn.Conv2d(num_in_ch, num_feats[0], 3, 1, 1)) - self.body = nn.ModuleList() - for _ in range(num_degradation): - body = list() - for stage in range(num_stage): - for _ in range(num_blocks[stage]): - body.append(ResidualBlockNoBN(num_feats[stage])) - if downscales[stage] == 1: - if stage < num_stage - 1 and num_feats[stage] != num_feats[stage + 1]: - body.append(nn.Conv2d(num_feats[stage], num_feats[stage + 1], 3, 1, 1)) - continue - elif downscales[stage] == 2: - body.append(nn.Conv2d(num_feats[stage], num_feats[min(stage + 1, num_stage - 1)], 3, 2, 1)) - else: - raise NotImplementedError - self.body.append(nn.Sequential(*body)) - - # self.body = nn.Sequential(*body) - - self.num_degradation = num_degradation - self.fc_degree = nn.ModuleList() - if degradation_degree_actv == 'sigmoid': - actv = nn.Sigmoid - elif degradation_degree_actv == 'tanh': - actv = nn.Tanh - else: - raise NotImplementedError(f'only sigmoid and tanh are supported for degradation_degree_actv, ' - f'{degradation_degree_actv} is not supported yet.') - for _ in range(num_degradation): - self.fc_degree.append( - nn.Sequential( - nn.Linear(num_feats[-1], 512), - nn.ReLU(inplace=True), - nn.Linear(512, 1), - actv(), - )) - - self.avg_pool = nn.AdaptiveAvgPool2d(1) - - default_init_weights([self.conv_first, self.body, self.fc_degree], 0.1) - - def forward(self, x): - degrees = [] - for i in range(self.num_degradation): - x_out = self.conv_first[i](x) - feat = self.body[i](x_out) - feat = self.avg_pool(feat) - feat = feat.squeeze(-1).squeeze(-1) - # for i in range(self.num_degradation): - degrees.append(self.fc_degree[i](feat).squeeze(-1)) - - return degrees diff --git a/spaces/Illumotion/Koboldcpp/.devops/full-cuda.Dockerfile b/spaces/Illumotion/Koboldcpp/.devops/full-cuda.Dockerfile deleted file mode 100644 index 360602d6567b8a835f23e7365cf222ca7299c1d7..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/.devops/full-cuda.Dockerfile +++ /dev/null @@ -1,33 +0,0 @@ -ARG UBUNTU_VERSION=22.04 - -# This needs to generally match the container host's environment. -ARG CUDA_VERSION=11.7.1 - -# Target the CUDA build image -ARG BASE_CUDA_DEV_CONTAINER=nvidia/cuda:${CUDA_VERSION}-devel-ubuntu${UBUNTU_VERSION} - -FROM ${BASE_CUDA_DEV_CONTAINER} as build - -# Unless otherwise specified, we make a fat build. -ARG CUDA_DOCKER_ARCH=all - -RUN apt-get update && \ - apt-get install -y build-essential python3 python3-pip git - -COPY requirements.txt requirements.txt - -RUN pip install --upgrade pip setuptools wheel \ - && pip install -r requirements.txt - -WORKDIR /app - -COPY . . 
- -# Set nvcc architecture -ENV CUDA_DOCKER_ARCH=${CUDA_DOCKER_ARCH} -# Enable cuBLAS -ENV LLAMA_CUBLAS=1 - -RUN make - -ENTRYPOINT ["/app/.devops/tools.sh"] diff --git a/spaces/Illumotion/Koboldcpp/expose.h b/spaces/Illumotion/Koboldcpp/expose.h deleted file mode 100644 index c2fbb22670fe13fecc7a31d21e2dae12c6fe68ad..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/expose.h +++ /dev/null @@ -1,91 +0,0 @@ -#pragma once - -const int stop_token_max = 16; -const int ban_token_max = 16; -const int tensor_split_max = 16; -// match kobold's sampler list and order -enum samplers -{ - KCPP_SAMPLER_TOP_K=0, - KCPP_SAMPLER_TOP_A=1, - KCPP_SAMPLER_TOP_P=2, - KCPP_SAMPLER_TFS=3, - KCPP_SAMPLER_TYP=4, - KCPP_SAMPLER_TEMP=5, - KCPP_SAMPLER_REP_PEN=6, - KCPP_SAMPLER_MAX -}; -enum stop_reason -{ - INVALID=-1, - OUT_OF_TOKENS=0, - EOS_TOKEN=1, - CUSTOM_STOPPER=2, -}; -struct load_model_inputs -{ - const int threads; - const int blasthreads; - const int max_context_length; - const int batch_size; - const bool f16_kv; - const bool low_vram; - const bool use_mmq; - const char * executable_path; - const char * model_filename; - const char * lora_filename; - const char * lora_base; - const bool use_mmap; - const bool use_mlock; - const bool use_smartcontext; - const int clblast_info = 0; - const int cublas_info = 0; - const int blasbatchsize = 512; - const int debugmode = 0; - const int forceversion = 0; - const int gpulayers = 0; - const float rope_freq_scale = 1.0f; - const float rope_freq_base = 10000.0f; - const char * banned_tokens[ban_token_max]; - const float tensor_split[tensor_split_max]; -}; -struct generation_inputs -{ - const int seed; - const char *prompt; - const int max_context_length; - const int max_length; - const float temperature; - const int top_k; - const float top_a = 0.0f; - const float top_p; - const float typical_p; - const float tfs; - const float rep_pen; - const int rep_pen_range; - const int mirostat = 0; - const float mirostat_eta; - const float mirostat_tau; - const samplers sampler_order[KCPP_SAMPLER_MAX]; - const int sampler_len; - const bool unban_tokens_rt; - const char * stop_sequence[stop_token_max]; - const bool stream_sse; - const char * grammar; - const bool grammar_retain_state; -}; -struct generation_outputs -{ - int status = -1; - char text[24576]; //24kb should be enough for any response -}; - -extern std::string executable_path; -extern std::string lora_filename; -extern std::string lora_base; -extern std::vector generated_tokens; -extern bool generation_finished; -extern float last_eval_time; -extern float last_process_time; -extern int last_token_count; -extern stop_reason last_stop_reason; diff --git a/spaces/Illumotion/Koboldcpp/include/CL/Utils/Detail.hpp b/spaces/Illumotion/Koboldcpp/include/CL/Utils/Detail.hpp deleted file mode 100644 index 49cccd02cff0c4e4e0ea6493613cc92d638d962c..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/include/CL/Utils/Detail.hpp +++ /dev/null @@ -1,84 +0,0 @@ -#pragma once - -// STL includes -#include -#include // std::forward, std::integer_sequence -#include // std::tuple, std::get -#include // std::initializer_list - -namespace cl { -namespace util { - namespace detail { - // Borrowed from: - // https://www.fluentcpp.com/2019/03/05/for_each_arg-applying-a-function-to-each-argument-of-a-function-in-cpp/ - template F for_each_arg(F f, Args&&... args) - { - (void)std::initializer_list{ ( - (void)f(std::forward(args)), 0)... 
}; - return f; - } - - namespace impl { - // Borrowed from: https://stackoverflow.com/a/16387374/1476661 - template - void for_each_in_tuple(T&& t, F&& f, - std::integer_sequence) - { - auto l = { - (std::forward(f)(std::get(std::forward(t))), 0)... - }; - (void)l; - } - } - template - void for_each_in_tuple(std::tuple const& t, F&& f) - { - impl::for_each_in_tuple( - t, std::forward(f), - std::make_integer_sequence()); - } - - namespace impl { - // Borrowed from - // https://codereview.stackexchange.com/questions/193420/apply-a-function-to-each-element-of-a-tuple-map-a-tuple - template - auto transform_tuple(Tuple&& t, F&& f, std::index_sequence) - { - return std::make_tuple(std::forward(f)(std::get(t))...); - } - } - template - auto transform_tuple(const std::tuple& t, F&& f) - { - return impl::transform_tuple( - t, std::forward(f), - std::make_index_sequence{}); - } - - namespace impl { - // Borrowed from - // http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n3658.html - // with modifications of Casey Carter at - // https://stackoverflow.com/a/51365112/1476661 - template - auto apply(F&& f, Tuple&& args, std::index_sequence) - -> decltype(std::forward(f)( - std::get(std::forward(args))...)) - { - return std::forward(f)( - std::get(std::forward(args))...); - } - } - template >::value>> - auto apply(F&& f, Tuple&& args) - -> decltype(impl::apply(std::forward(f), - std::forward(args), Indices())) - { - return impl::apply(std::forward(f), std::forward(args), - Indices()); - } - } -} -} diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/training/modules/base.py b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/training/modules/base.py deleted file mode 100644 index a50c3fc7753a0bba64a5ab8c1ed64ff97e62313f..0000000000000000000000000000000000000000 --- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/training/modules/base.py +++ /dev/null @@ -1,80 +0,0 @@ -import abc -from typing import Tuple, List - -import torch -import torch.nn as nn - -from saicinpainting.training.modules.depthwise_sep_conv import DepthWiseSeperableConv -from saicinpainting.training.modules.multidilated_conv import MultidilatedConv - - -class BaseDiscriminator(nn.Module): - @abc.abstractmethod - def forward(self, x: torch.Tensor) -> Tuple[torch.Tensor, List[torch.Tensor]]: - """ - Predict scores and get intermediate activations. 
Useful for feature matching loss - :return tuple (scores, list of intermediate activations) - """ - raise NotImplemented() - - -def get_conv_block_ctor(kind='default'): - if not isinstance(kind, str): - return kind - if kind == 'default': - return nn.Conv2d - if kind == 'depthwise': - return DepthWiseSeperableConv - if kind == 'multidilated': - return MultidilatedConv - raise ValueError(f'Unknown convolutional block kind {kind}') - - -def get_norm_layer(kind='bn'): - if not isinstance(kind, str): - return kind - if kind == 'bn': - return nn.BatchNorm2d - if kind == 'in': - return nn.InstanceNorm2d - raise ValueError(f'Unknown norm block kind {kind}') - - -def get_activation(kind='tanh'): - if kind == 'tanh': - return nn.Tanh() - if kind == 'sigmoid': - return nn.Sigmoid() - if kind is False: - return nn.Identity() - raise ValueError(f'Unknown activation kind {kind}') - - -class SimpleMultiStepGenerator(nn.Module): - def __init__(self, steps: List[nn.Module]): - super().__init__() - self.steps = nn.ModuleList(steps) - - def forward(self, x): - cur_in = x - outs = [] - for step in self.steps: - cur_out = step(cur_in) - outs.append(cur_out) - cur_in = torch.cat((cur_in, cur_out), dim=1) - return torch.cat(outs[::-1], dim=1) - -def deconv_factory(kind, ngf, mult, norm_layer, activation, max_features): - if kind == 'convtranspose': - return [nn.ConvTranspose2d(min(max_features, ngf * mult), - min(max_features, int(ngf * mult / 2)), - kernel_size=3, stride=2, padding=1, output_padding=1), - norm_layer(min(max_features, int(ngf * mult / 2))), activation] - elif kind == 'bilinear': - return [nn.Upsample(scale_factor=2, mode='bilinear'), - DepthWiseSeperableConv(min(max_features, ngf * mult), - min(max_features, int(ngf * mult / 2)), - kernel_size=3, stride=1, padding=1), - norm_layer(min(max_features, int(ngf * mult / 2))), activation] - else: - raise Exception(f"Invalid deconv kind: {kind}") \ No newline at end of file diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/schedulers/scheduling_ddim.py b/spaces/Jackflack09/diffuse-custom/diffusers/schedulers/scheduling_ddim.py deleted file mode 100644 index dd38bd63c27d9665d6e4d97ea5068eb40ec44c5f..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/diffusers/schedulers/scheduling_ddim.py +++ /dev/null @@ -1,380 +0,0 @@ -# Copyright 2022 Stanford University Team and The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -# DISCLAIMER: This code is strongly influenced by https://github.com/pesser/pytorch_diffusion -# and https://github.com/hojonathanho/diffusion - -import math -from dataclasses import dataclass -from typing import List, Optional, Tuple, Union - -import numpy as np -import torch - -from ..configuration_utils import ConfigMixin, register_to_config -from ..utils import _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS, BaseOutput, deprecate -from .scheduling_utils import SchedulerMixin - - -@dataclass -# Copied from diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput with DDPM->DDIM -class DDIMSchedulerOutput(BaseOutput): - """ - Output class for the scheduler's step function output. - - Args: - prev_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images): - Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the - denoising loop. - pred_original_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images): - The predicted denoised sample (x_{0}) based on the model output from the current timestep. - `pred_original_sample` can be used to preview progress or for guidance. - """ - - prev_sample: torch.FloatTensor - pred_original_sample: Optional[torch.FloatTensor] = None - - -def betas_for_alpha_bar(num_diffusion_timesteps, max_beta=0.999) -> torch.Tensor: - """ - Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of - (1-beta) over time from t = [0,1]. - - Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up - to that part of the diffusion process. - - - Args: - num_diffusion_timesteps (`int`): the number of betas to produce. - max_beta (`float`): the maximum beta to use; use values lower than 1 to - prevent singularities. - - Returns: - betas (`np.ndarray`): the betas used by the scheduler to step the model outputs - """ - - def alpha_bar(time_step): - return math.cos((time_step + 0.008) / 1.008 * math.pi / 2) ** 2 - - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta)) - return torch.tensor(betas) - - -class DDIMScheduler(SchedulerMixin, ConfigMixin): - """ - Denoising diffusion implicit models is a scheduler that extends the denoising procedure introduced in denoising - diffusion probabilistic models (DDPMs) with non-Markovian guidance. - - [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__` - function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`. - [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and - [`~SchedulerMixin.from_pretrained`] functions. - - For more details, see the original paper: https://arxiv.org/abs/2010.02502 - - Args: - num_train_timesteps (`int`): number of diffusion steps used to train the model. - beta_start (`float`): the starting `beta` value of inference. - beta_end (`float`): the final `beta` value. - beta_schedule (`str`): - the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from - `linear`, `scaled_linear`, or `squaredcos_cap_v2`. - trained_betas (`np.ndarray`, optional): - option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc. 
- clip_sample (`bool`, default `True`): - option to clip predicted sample between -1 and 1 for numerical stability. - set_alpha_to_one (`bool`, default `True`): - each diffusion step uses the value of alphas product at that step and at the previous one. For the final - step there is no previous alpha. When this option is `True` the previous alpha product is fixed to `1`, - otherwise it uses the value of alpha at step 0. - steps_offset (`int`, default `0`): - an offset added to the inference steps. You can use a combination of `offset=1` and - `set_alpha_to_one=False`, to make the last step use step 0 for the previous alpha product, as done in - stable diffusion. - prediction_type (`str`, default `epsilon`, optional): - prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion - process), `sample` (directly predicting the noisy sample`) or `v_prediction` (see section 2.4 - https://imagen.research.google/video/paper.pdf) - """ - - _compatibles = _COMPATIBLE_STABLE_DIFFUSION_SCHEDULERS.copy() - _deprecated_kwargs = ["predict_epsilon"] - order = 1 - - @register_to_config - def __init__( - self, - num_train_timesteps: int = 1000, - beta_start: float = 0.0001, - beta_end: float = 0.02, - beta_schedule: str = "linear", - trained_betas: Optional[Union[np.ndarray, List[float]]] = None, - clip_sample: bool = True, - set_alpha_to_one: bool = True, - steps_offset: int = 0, - prediction_type: str = "epsilon", - **kwargs, - ): - message = ( - "Please make sure to instantiate your scheduler with `prediction_type` instead. E.g. `scheduler =" - " DDIMScheduler.from_pretrained(, prediction_type='epsilon')`." - ) - predict_epsilon = deprecate("predict_epsilon", "0.11.0", message, take_from=kwargs) - if predict_epsilon is not None: - self.register_to_config(prediction_type="epsilon" if predict_epsilon else "sample") - - if trained_betas is not None: - self.betas = torch.tensor(trained_betas, dtype=torch.float32) - elif beta_schedule == "linear": - self.betas = torch.linspace(beta_start, beta_end, num_train_timesteps, dtype=torch.float32) - elif beta_schedule == "scaled_linear": - # this schedule is very specific to the latent diffusion model. - self.betas = ( - torch.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=torch.float32) ** 2 - ) - elif beta_schedule == "squaredcos_cap_v2": - # Glide cosine schedule - self.betas = betas_for_alpha_bar(num_train_timesteps) - else: - raise NotImplementedError(f"{beta_schedule} does is not implemented for {self.__class__}") - - self.alphas = 1.0 - self.betas - self.alphas_cumprod = torch.cumprod(self.alphas, dim=0) - - # At every step in ddim, we are looking into the previous alphas_cumprod - # For the final step, there is no previous alphas_cumprod because we are already at 0 - # `set_alpha_to_one` decides whether we set this parameter simply to one or - # whether we use the final alpha of the "non-previous" one. - self.final_alpha_cumprod = torch.tensor(1.0) if set_alpha_to_one else self.alphas_cumprod[0] - - # standard deviation of the initial noise distribution - self.init_noise_sigma = 1.0 - - # setable values - self.num_inference_steps = None - self.timesteps = torch.from_numpy(np.arange(0, num_train_timesteps)[::-1].copy().astype(np.int64)) - - def scale_model_input(self, sample: torch.FloatTensor, timestep: Optional[int] = None) -> torch.FloatTensor: - """ - Ensures interchangeability with schedulers that need to scale the denoising model input depending on the - current timestep. 
- - Args: - sample (`torch.FloatTensor`): input sample - timestep (`int`, optional): current timestep - - Returns: - `torch.FloatTensor`: scaled input sample - """ - return sample - - def _get_variance(self, timestep, prev_timestep): - alpha_prod_t = self.alphas_cumprod[timestep] - alpha_prod_t_prev = self.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else self.final_alpha_cumprod - beta_prod_t = 1 - alpha_prod_t - beta_prod_t_prev = 1 - alpha_prod_t_prev - - variance = (beta_prod_t_prev / beta_prod_t) * (1 - alpha_prod_t / alpha_prod_t_prev) - - return variance - - def set_timesteps(self, num_inference_steps: int, device: Union[str, torch.device] = None): - """ - Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference. - - Args: - num_inference_steps (`int`): - the number of diffusion steps used when generating samples with a pre-trained model. - """ - self.num_inference_steps = num_inference_steps - step_ratio = self.config.num_train_timesteps // self.num_inference_steps - # creates integer timesteps by multiplying by ratio - # casting to int to avoid issues when num_inference_step is power of 3 - timesteps = (np.arange(0, num_inference_steps) * step_ratio).round()[::-1].copy().astype(np.int64) - self.timesteps = torch.from_numpy(timesteps).to(device) - self.timesteps += self.config.steps_offset - - def step( - self, - model_output: torch.FloatTensor, - timestep: int, - sample: torch.FloatTensor, - eta: float = 0.0, - use_clipped_model_output: bool = False, - generator=None, - variance_noise: Optional[torch.FloatTensor] = None, - return_dict: bool = True, - ) -> Union[DDIMSchedulerOutput, Tuple]: - """ - Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion - process from the learned model outputs (most often the predicted noise). - - Args: - model_output (`torch.FloatTensor`): direct output from learned diffusion model. - timestep (`int`): current discrete timestep in the diffusion chain. - sample (`torch.FloatTensor`): - current instance of sample being created by diffusion process. - eta (`float`): weight of noise for added noise in diffusion step. - use_clipped_model_output (`bool`): if `True`, compute "corrected" `model_output` from the clipped - predicted original sample. Necessary because predicted original sample is clipped to [-1, 1] when - `self.config.clip_sample` is `True`. If no clipping has happened, "corrected" `model_output` would - coincide with the one provided as input and `use_clipped_model_output` will have not effect. - generator: random number generator. - variance_noise (`torch.FloatTensor`): instead of generating noise for the variance using `generator`, we - can directly provide the noise for the variance itself. This is useful for methods such as - CycleDiffusion. (https://arxiv.org/abs/2210.05559) - return_dict (`bool`): option for returning tuple rather than DDIMSchedulerOutput class - - Returns: - [`~schedulers.scheduling_utils.DDIMSchedulerOutput`] or `tuple`: - [`~schedulers.scheduling_utils.DDIMSchedulerOutput`] if `return_dict` is True, otherwise a `tuple`. When - returning a tuple, the first element is the sample tensor. 
- - """ - if self.num_inference_steps is None: - raise ValueError( - "Number of inference steps is 'None', you need to run 'set_timesteps' after creating the scheduler" - ) - - # See formulas (12) and (16) of DDIM paper https://arxiv.org/pdf/2010.02502.pdf - # Ideally, read DDIM paper in-detail understanding - - # Notation ( -> - # - pred_noise_t -> e_theta(x_t, t) - # - pred_original_sample -> f_theta(x_t, t) or x_0 - # - std_dev_t -> sigma_t - # - eta -> η - # - pred_sample_direction -> "direction pointing to x_t" - # - pred_prev_sample -> "x_t-1" - - # 1. get previous step value (=t-1) - prev_timestep = timestep - self.config.num_train_timesteps // self.num_inference_steps - - # 2. compute alphas, betas - alpha_prod_t = self.alphas_cumprod[timestep] - alpha_prod_t_prev = self.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else self.final_alpha_cumprod - - beta_prod_t = 1 - alpha_prod_t - - # 3. compute predicted original sample from predicted noise also called - # "predicted x_0" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf - if self.config.prediction_type == "epsilon": - pred_original_sample = (sample - beta_prod_t ** (0.5) * model_output) / alpha_prod_t ** (0.5) - elif self.config.prediction_type == "sample": - pred_original_sample = model_output - elif self.config.prediction_type == "v_prediction": - pred_original_sample = (alpha_prod_t**0.5) * sample - (beta_prod_t**0.5) * model_output - # predict V - model_output = (alpha_prod_t**0.5) * model_output + (beta_prod_t**0.5) * sample - else: - raise ValueError( - f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample`, or" - " `v_prediction`" - ) - - # 4. Clip "predicted x_0" - if self.config.clip_sample: - pred_original_sample = torch.clamp(pred_original_sample, -1, 1) - - # 5. compute variance: "sigma_t(η)" -> see formula (16) - # σ_t = sqrt((1 − α_t−1)/(1 − α_t)) * sqrt(1 − α_t/α_t−1) - variance = self._get_variance(timestep, prev_timestep) - std_dev_t = eta * variance ** (0.5) - - if use_clipped_model_output: - # the model_output is always re-derived from the clipped x_0 in Glide - model_output = (sample - alpha_prod_t ** (0.5) * pred_original_sample) / beta_prod_t ** (0.5) - - # 6. compute "direction pointing to x_t" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf - pred_sample_direction = (1 - alpha_prod_t_prev - std_dev_t**2) ** (0.5) * model_output - - # 7. compute x_t without "random noise" of formula (12) from https://arxiv.org/pdf/2010.02502.pdf - prev_sample = alpha_prod_t_prev ** (0.5) * pred_original_sample + pred_sample_direction - - if eta > 0: - # randn_like does not support generator https://github.com/pytorch/pytorch/issues/27072 - device = model_output.device - if variance_noise is not None and generator is not None: - raise ValueError( - "Cannot pass both generator and variance_noise. Please make sure that either `generator` or" - " `variance_noise` stays `None`." 
- ) - - if variance_noise is None: - if device.type == "mps": - # randn does not work reproducibly on mps - variance_noise = torch.randn(model_output.shape, dtype=model_output.dtype, generator=generator) - variance_noise = variance_noise.to(device) - else: - variance_noise = torch.randn( - model_output.shape, generator=generator, device=device, dtype=model_output.dtype - ) - variance = self._get_variance(timestep, prev_timestep) ** (0.5) * eta * variance_noise - - prev_sample = prev_sample + variance - - if not return_dict: - return (prev_sample,) - - return DDIMSchedulerOutput(prev_sample=prev_sample, pred_original_sample=pred_original_sample) - - def add_noise( - self, - original_samples: torch.FloatTensor, - noise: torch.FloatTensor, - timesteps: torch.IntTensor, - ) -> torch.FloatTensor: - # Make sure alphas_cumprod and timestep have same device and dtype as original_samples - self.alphas_cumprod = self.alphas_cumprod.to(device=original_samples.device, dtype=original_samples.dtype) - timesteps = timesteps.to(original_samples.device) - - sqrt_alpha_prod = self.alphas_cumprod[timesteps] ** 0.5 - sqrt_alpha_prod = sqrt_alpha_prod.flatten() - while len(sqrt_alpha_prod.shape) < len(original_samples.shape): - sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1) - - sqrt_one_minus_alpha_prod = (1 - self.alphas_cumprod[timesteps]) ** 0.5 - sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten() - while len(sqrt_one_minus_alpha_prod.shape) < len(original_samples.shape): - sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1) - - noisy_samples = sqrt_alpha_prod * original_samples + sqrt_one_minus_alpha_prod * noise - return noisy_samples - - def get_velocity( - self, sample: torch.FloatTensor, noise: torch.FloatTensor, timesteps: torch.IntTensor - ) -> torch.FloatTensor: - # Make sure alphas_cumprod and timestep have same device and dtype as sample - self.alphas_cumprod = self.alphas_cumprod.to(device=sample.device, dtype=sample.dtype) - timesteps = timesteps.to(sample.device) - - sqrt_alpha_prod = self.alphas_cumprod[timesteps] ** 0.5 - sqrt_alpha_prod = sqrt_alpha_prod.flatten() - while len(sqrt_alpha_prod.shape) < len(sample.shape): - sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1) - - sqrt_one_minus_alpha_prod = (1 - self.alphas_cumprod[timesteps]) ** 0.5 - sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten() - while len(sqrt_one_minus_alpha_prod.shape) < len(sample.shape): - sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1) - - velocity = sqrt_alpha_prod * noise - sqrt_one_minus_alpha_prod * sample - return velocity - - def __len__(self): - return self.config.num_train_timesteps diff --git a/spaces/Jackie2235/QueryExpansionForEtsy/README.md b/spaces/Jackie2235/QueryExpansionForEtsy/README.md deleted file mode 100644 index e79d19dbecc3d19c6031fee69f592039383beb9f..0000000000000000000000000000000000000000 --- a/spaces/Jackie2235/QueryExpansionForEtsy/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: QueryExpansion -emoji: 👁 -colorFrom: pink -colorTo: indigo -sdk: streamlit -sdk_version: 1.22.0 -app_file: app.py -pinned: false -duplicated_from: HarryLee/QueryExpansionForEtsy ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/ops/fused_act/fused_act.py b/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/ops/fused_act/fused_act.py deleted file mode 100644 index 
588f815e596ab0fc83ab0f9d21426c22ec5ed7c3..0000000000000000000000000000000000000000 --- a/spaces/Jasonyoyo/CodeFormer/CodeFormer/basicsr/ops/fused_act/fused_act.py +++ /dev/null @@ -1,89 +0,0 @@ -# modify from https://github.com/rosinality/stylegan2-pytorch/blob/master/op/fused_act.py # noqa:E501 - -import torch -from torch import nn -from torch.autograd import Function - -try: - from . import fused_act_ext -except ImportError: - import os - BASICSR_JIT = os.getenv('BASICSR_JIT') - if BASICSR_JIT == 'True': - from torch.utils.cpp_extension import load - module_path = os.path.dirname(__file__) - fused_act_ext = load( - 'fused', - sources=[ - os.path.join(module_path, 'src', 'fused_bias_act.cpp'), - os.path.join(module_path, 'src', 'fused_bias_act_kernel.cu'), - ], - ) - - -class FusedLeakyReLUFunctionBackward(Function): - - @staticmethod - def forward(ctx, grad_output, out, negative_slope, scale): - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - empty = grad_output.new_empty(0) - - grad_input = fused_act_ext.fused_bias_act(grad_output, empty, out, 3, 1, negative_slope, scale) - - dim = [0] - - if grad_input.ndim > 2: - dim += list(range(2, grad_input.ndim)) - - grad_bias = grad_input.sum(dim).detach() - - return grad_input, grad_bias - - @staticmethod - def backward(ctx, gradgrad_input, gradgrad_bias): - out, = ctx.saved_tensors - gradgrad_out = fused_act_ext.fused_bias_act(gradgrad_input, gradgrad_bias, out, 3, 1, ctx.negative_slope, - ctx.scale) - - return gradgrad_out, None, None, None - - -class FusedLeakyReLUFunction(Function): - - @staticmethod - def forward(ctx, input, bias, negative_slope, scale): - empty = input.new_empty(0) - out = fused_act_ext.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale) - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - return out - - @staticmethod - def backward(ctx, grad_output): - out, = ctx.saved_tensors - - grad_input, grad_bias = FusedLeakyReLUFunctionBackward.apply(grad_output, out, ctx.negative_slope, ctx.scale) - - return grad_input, grad_bias, None, None - - -class FusedLeakyReLU(nn.Module): - - def __init__(self, channel, negative_slope=0.2, scale=2**0.5): - super().__init__() - - self.bias = nn.Parameter(torch.zeros(channel)) - self.negative_slope = negative_slope - self.scale = scale - - def forward(self, input): - return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale) - - -def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2**0.5): - return FusedLeakyReLUFunction.apply(input, bias, negative_slope, scale) diff --git a/spaces/JohanDL/GPT4Readability/README.md b/spaces/JohanDL/GPT4Readability/README.md deleted file mode 100644 index dec9e4544533ed74dfedd5dbb89491f68b4d5078..0000000000000000000000000000000000000000 --- a/spaces/JohanDL/GPT4Readability/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: GPT4Readability -emoji: 📊 -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Kangarroar/ApplioRVC-Inference/lib/uvr5_pack/lib_v5/layers_123812KB .py b/spaces/Kangarroar/ApplioRVC-Inference/lib/uvr5_pack/lib_v5/layers_123812KB .py deleted file mode 100644 index b82f06bb4993cd63f076e68d7e24185269b1bc42..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/lib/uvr5_pack/lib_v5/layers_123812KB .py +++ /dev/null 
@@ -1,118 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from . import spec_utils - - -class Conv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(Conv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nout, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - bias=False, - ), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class SeperableConv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(SeperableConv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nin, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - groups=nin, - bias=False, - ), - nn.Conv2d(nin, nout, kernel_size=1, bias=False), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class Encoder(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU): - super(Encoder, self).__init__() - self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ) - - def __call__(self, x): - skip = self.conv1(x) - h = self.conv2(skip) - - return h, skip - - -class Decoder(nn.Module): - def __init__( - self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False - ): - super(Decoder, self).__init__() - self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.dropout = nn.Dropout2d(0.1) if dropout else None - - def __call__(self, x, skip=None): - x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True) - if skip is not None: - skip = spec_utils.crop_center(skip, x) - x = torch.cat([x, skip], dim=1) - h = self.conv(x) - - if self.dropout is not None: - h = self.dropout(h) - - return h - - -class ASPPModule(nn.Module): - def __init__(self, nin, nout, dilations=(4, 8, 16), activ=nn.ReLU): - super(ASPPModule, self).__init__() - self.conv1 = nn.Sequential( - nn.AdaptiveAvgPool2d((1, None)), - Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ), - ) - self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ) - self.conv3 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[0], dilations[0], activ=activ - ) - self.conv4 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[1], dilations[1], activ=activ - ) - self.conv5 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.bottleneck = nn.Sequential( - Conv2DBNActiv(nin * 5, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1) - ) - - def forward(self, x): - _, _, h, w = x.size() - feat1 = F.interpolate( - self.conv1(x), size=(h, w), mode="bilinear", align_corners=True - ) - feat2 = self.conv2(x) - feat3 = self.conv3(x) - feat4 = self.conv4(x) - feat5 = self.conv5(x) - out = torch.cat((feat1, feat2, feat3, feat4, feat5), dim=1) - bottle = self.bottleneck(out) - return bottle diff --git a/spaces/KevinQHLin/UniVTG/main/train_qfvs.py b/spaces/KevinQHLin/UniVTG/main/train_qfvs.py deleted file mode 100644 index 65a3b155ed6d432da7eb0072f87e1bae18d8a994..0000000000000000000000000000000000000000 --- a/spaces/KevinQHLin/UniVTG/main/train_qfvs.py +++ /dev/null @@ -1,325 +0,0 @@ -import os -import pdb -import time -import json -import pprint -import random -import importlib -import numpy as np -from tqdm import tqdm, trange -from collections import defaultdict - -import h5py -import 
torch -import torch.nn as nn -import torch.backends.cudnn as cudnn -from torch.utils.data import DataLoader -from torch.utils.tensorboard import SummaryWriter - -import sys -sys.path.append('/Users/kevin/univtg') -from main.config import BaseOptions, setup_model -from main.dataset_qfvs import DatasetQFVS, prepare_batch_inputs_qfvs, start_end_collate_qfvs -from utils.basic_utils import set_seed, AverageMeter, dict_to_markdown, save_json, save_jsonl, load_json, load_pickle, l2_normalize_np_array -from utils.model_utils import count_parameters -from eval.qfvs import calculate_semantic_matching, load_videos_tag - -import logging -logger = logging.getLogger(__name__) -logging.basicConfig(format="%(asctime)s.%(msecs)03d:%(levelname)s:%(name)s - %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=logging.INFO) - -def eval_epoch(model, config, opt): - model.eval() - f1_sum = 0; p_sum = 0; r_sum = 0 - - assert len(config['test_videos']) == 1 - video_id = config['test_videos'][0] - embedding = load_pickle(f"./data/qfvs/txt_clip/{config['txt_feature']}.pkl") - - feat_type = config['vid_feature'] - feat = h5py.File(f'./data/qfvs/processed/P0{video_id}_{feat_type}.h5', 'r') - features = torch.from_numpy(feat['features'][()]) - seg_len = torch.from_numpy(feat['seg_len'][()]) - # seg_len = torch.tensor(feat['seg_len'][()]).unsqueeze(0).cuda() - - # dim = features.shape[-1] - # ctx_l = seg_len.sum().cpu() - - # dim = features.shape[-1] - # ctx_l = features.shape[1] - # seg_len = torch.ones(ctx_l) - # features = features.reshape(-1, dim)[:ctx_l] - - # tef_st = torch.arange(0, ctx_l, 1.0) / ctx_l - # tef_ed = tef_st + 1.0 / ctx_l - # tef = torch.stack([tef_st, tef_ed], dim=1).cuda() # (Lv, 2) - # features = torch.cat([features, tef], dim=1) # (Lv, Dv+2) - - transfer = {"Cupglass": "Glass", - "Musicalinstrument": "Instrument", - "Petsanimal": "Animal"} - - for _,_,files in os.walk("./data/qfvs/metadata/origin_data/Query-Focused_Summaries/Oracle_Summaries/P0"+str(video_id)): - evaluation_num=len(files) - - mask_GT = torch.zeros(config["max_segment_num"], config["max_frame_num"], dtype=torch.bool).cuda() - for j in range(len(seg_len)): - for k in range(seg_len[j]): - mask_GT[j][k] = 1 - - for file in files: - summaries_GT=[] - with open("./data/qfvs/metadata/origin_data/Query-Focused_Summaries/Oracle_Summaries/P0"+str(video_id)+"/"+file,"r") as f: - for line in f.readlines(): - summaries_GT.append(int(line.strip())) - - concept1, concept2 = file.split('_')[0:2] - - ############## - if concept1 in transfer: - concept1 = transfer[concept1] - if concept2 in transfer: - concept2 = transfer[concept2] - concept1 = embedding[concept1] - concept2 = embedding[concept2] - - concept1 = l2_normalize_np_array(concept1) - concept2 = l2_normalize_np_array(concept2) - - data = { - 'features':features, - 'seg_len': seg_len, - 'tokens_pad1':torch.from_numpy(concept1), - 'tokens_pad2':torch.from_numpy(concept2), - 'mask_GT': mask_GT - } - - input1, input2, input_oracle, mask = prepare_batch_inputs_qfvs(start_end_collate_qfvs([data]), config, eval=True) - - summaries_GT = [x - 1 for x in summaries_GT] - video_shots_tag = load_videos_tag(mat_path="./eval/Tags.mat") - - if opt.f_loss_coef == 0: - output_type = 'saliency_scores' - elif opt.s_loss_intra_coef == 0: - output_type = 'pred_logits' - else: - if config['qfvs_score_ensemble'] > 0: - output_type = ['pred_logits', 'saliency_scores'] - else: - output_type = 'pred_logits' - - with torch.no_grad(): - if not isinstance(output_type, list): - score1 = 
model(**input1)[output_type].squeeze() - score1 = score1.masked_select(mask_GT) - - score2 = model(**input2)[output_type].squeeze() - score2 = score2.masked_select(mask_GT) - - score = model(**input_oracle)[output_type].squeeze() - score = score.masked_select(mask_GT) - else: - score1, score2, score = torch.zeros((int(mask.sum().item()))).cuda(), torch.zeros((int(mask.sum().item()))).cuda(), torch.zeros((int(mask.sum().item()))).cuda() - for output_t in output_type: - score1 += model(**input1)[output_t].squeeze().masked_select(mask_GT) - score2 += model(**input2)[output_t].squeeze().masked_select(mask_GT) - score += model(**input_oracle)[output_t].squeeze().masked_select(mask_GT) - - if config['qfvs_score_gather'] > 0: - score = score + score1 + score2 - else: - score = score - - # since video4 features dim is greater than video_shots_tag. - score = score[:min(score.shape[0], video_shots_tag[video_id-1].shape[0])] - _, top_index = score.topk(int(score.shape[0] * config["top_percent"])) - - p, r, f1 = calculate_semantic_matching(list(top_index.cpu().numpy()), summaries_GT, video_shots_tag, video_id=video_id-1) - f1_sum+=f1; r_sum+=r; p_sum+=p - - return {'F': round(100* f1_sum/evaluation_num,2) , - 'R': round(100* r_sum/evaluation_num,2) , - 'P': round(100* p_sum/evaluation_num,2) } - -def idx2time(idx): - sec1, sec2 = idx*5, (idx+1)*5 - - h1 = sec1 // 3600 - m1 = (sec1 - h1*3600) // 60 - s1 = sec1 % 60 - - h2 = sec2 // 3600 - m2 = (sec2 - h2*3600) // 60 - s2 = sec2 % 60 - print(h1,m1,s1,'\t', h2,m2,s2) - -def train_epoch(model, criterion, train_loader, optimizer, opt, config, epoch_i, tb_writer): - model.train() - criterion.train() - - # init meters - time_meters = defaultdict(AverageMeter) - loss_meters = defaultdict(AverageMeter) - - timer_dataloading = time.time() - loss_total = 0 - - for batch_idx, batch in enumerate(tqdm(train_loader)): - time_meters["dataloading_time"].update(time.time() - timer_dataloading) - timer_start = time.time() - model_input1, model_input2, model_input_oracle, \ - model_gt1, model_gt2, model_gt_oracle, \ - mask_GT = prepare_batch_inputs_qfvs(batch, config) - time_meters["prepare_inputs_time"].update(time.time() - timer_start) - - timer_start = time.time() - output1 = model(**model_input1) - output2 = model(**model_input2) - output_oracle = model(**model_input_oracle) - - loss_dict = {} - loss_dict1 = criterion(output1, model_gt1, mask_GT) - loss_dict2 = criterion(output2, model_gt2, mask_GT) - loss_dict3 = criterion(output_oracle, model_gt_oracle, mask_GT) - - weight_dict = criterion.weight_dict - if config['qfvs_loss_gather'] > 0: - for k in loss_dict1.keys(): - loss_dict[k] = loss_dict1[k] + loss_dict2[k] + loss_dict3[k] - else: - loss_dict = loss_dict3 - - losses = sum(loss_dict[k] * weight_dict[k] for k in loss_dict.keys() if k in weight_dict) - loss_total += losses.item() - - time_meters["model_forward_time"].update(time.time() - timer_start) - timer_start = time.time() - optimizer.zero_grad() - losses.backward() - if opt.grad_clip > 0: - nn.utils.clip_grad_norm_(model.parameters(), opt.grad_clip) - optimizer.step() - time_meters["model_backward_time"].update(time.time() - timer_start) - - timer_dataloading = time.time() - return round(loss_total / len(train_loader), 2) - -# train in single domain. 
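# Illustrative sketch (not from the deleted train_qfvs.py above): the top-percent
# shot selection that eval_epoch performs. Per-shot scores are ranked and the
# highest floor(N * top_percent) shots form the predicted summary;
# `select_top_shots` is a hypothetical helper name introduced only for this example.
import torch

def select_top_shots(scores: torch.Tensor, top_percent: float) -> list:
    # Keep the same count eval_epoch keeps: an integer fraction of the shot total.
    k = int(scores.shape[0] * top_percent)
    _, top_index = scores.topk(k)
    # Sorted shot indices, ready to be matched against the oracle summary.
    return sorted(top_index.tolist())

# e.g. select_top_shots(torch.rand(200), 0.02) keeps the 4 best-scoring shots.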
-def train(model, criterion, optimizer, lr_scheduler, train_loader, opt, config): - # if opt.device.type == "cuda": - # logger.info("CUDA enabled.") - # model.to(opt.device) - - tb_writer = SummaryWriter(opt.tensorboard_log_dir) - tb_writer.add_text("hyperparameters", dict_to_markdown(vars(opt), max_str_len=None)) - opt.train_log_txt_formatter = "{time_str} [Epoch] {epoch:03d} [Loss] {loss_str}\n" - opt.eval_log_txt_formatter = "{time_str} [Epoch] {epoch:03d} [Loss] {loss_str} [Metrics] {eval_metrics_str}\n" - - prev_best_score = {'Fscore':0, 'Precision':0, 'Recall':0} - if opt.start_epoch is None: - start_epoch = -1 if opt.eval_init else 0 - else: - start_epoch = opt.start_epoch - - val_score = eval_epoch(model, config, opt) - tb_writer.add_scalar(f"Eval/QFVS-V{config['test_videos'][0]}-fscore", float(val_score['F']), 0) - logger.info(f"[Epoch {0}] [Fscore: {val_score['F']} / {prev_best_score['Fscore']}]" - f" [Precision: {val_score['P']} / {prev_best_score['Precision']}]" - f" [Recall: {val_score['R']} / {prev_best_score['Recall']}]") - for epoch_i in trange(start_epoch, opt.n_epoch, desc="Epoch"): - if epoch_i > -1: - loss_epoch = train_epoch(model, criterion, train_loader, optimizer, opt, config, epoch_i, tb_writer) - lr_scheduler.step() - eval_epoch_interval = opt.eval_epoch - if opt.eval_path is not None and (epoch_i + 1) % eval_epoch_interval == 0: - with torch.no_grad(): - val_score = eval_epoch(model, config, opt) - tb_writer.add_scalar(f"Eval/QFVS-V{config['test_videos'][0]}-fscore", float(val_score['F']), epoch_i+1) - logger.info(f"[Epoch {epoch_i + 1}, Loss {loss_epoch}] [Fscore: {val_score['F']} / {prev_best_score['Fscore']}]" - f" [Precision: {val_score['P']} / {prev_best_score['Precision']}]" - f" [Recall: {val_score['R']} / {prev_best_score['Recall']}]") - - if prev_best_score['Fscore'] < val_score['F']: - prev_best_score['Fscore'] = val_score['F'] - prev_best_score['Precision'] = val_score['P'] - prev_best_score['Recall'] = val_score['R'] - - checkpoint = { - "model": model.state_dict(), - "optimizer": optimizer.state_dict(), - "epoch": epoch_i, - "opt": opt - } - torch.save(checkpoint, opt.ckpt_filepath.replace(".ckpt", f"_V{config['test_videos'][0]}_best.ckpt")) - tb_writer.close() - return prev_best_score - -def update_config(opt, config): - # for key in ["max_segment_num", "max_frame_num", "top_percent", - # "qfvs_vid_feature", "qfvs_txt_feature", "qfvs_dense_shot", - # "qfvs_score_ensemble", "qfvs_score_gather", "qfvs_loss_gather"]: - config["max_segment_num"] = opt.max_segment_num - config["max_frame_num"] = opt.max_frame_num - config["top_percent"] = opt.top_percent - config["vid_feature"] = opt.qfvs_vid_feature - config["txt_feature"] = opt.qfvs_txt_feature - config["qfvs_dense_shot"] = opt.qfvs_dense_shot - config["qfvs_score_ensemble"] = opt.qfvs_score_ensemble - config["qfvs_score_gather"] = opt.qfvs_score_gather - config["qfvs_loss_gather"] = opt.qfvs_loss_gather - return config - -def start_training(): - logger.info("Setup config, data and model...") - opt = BaseOptions().parse() - set_seed(opt.seed) - - # config = load_json("./main/config_qfvs.json") - config = {} - config = update_config(opt, config) - - tb_writer = SummaryWriter(opt.tensorboard_log_dir) - - # key -> test video; value -> training videos. 
- qfvs_split = { - 1: [2, 3, 4], - 2: [1, 3, 4], - 3: [1, 2, 4], - 4: [1, 2, 3] - } - - scores_videos = {} - for test_id, splits in qfvs_split.items(): - logger.info(f"Start Training {opt.dset_name}: {test_id}") - config['train_videos'] = qfvs_split[test_id] - config['test_videos'] = [test_id] - train_dataset = DatasetQFVS(config) - train_loader = DataLoader(train_dataset, batch_size=opt.bsz, collate_fn=start_end_collate_qfvs, shuffle=True, num_workers=opt.num_workers) - - model, criterion, optimizer, lr_scheduler = setup_model(opt) - count_parameters(model) - best_score = train(model, criterion, optimizer, lr_scheduler, train_loader, opt, config) - scores_videos['V'+str(test_id)] = best_score - - # save the final results. - avg_fscore = sum([v['Fscore'] for k, v in scores_videos.items()]) / len(scores_videos) - avg_precision = sum([v['Precision'] for k, v in scores_videos.items()]) / len(scores_videos) - avg_recall = sum([v['Recall'] for k, v in scores_videos.items()]) / len(scores_videos) - scores_videos['avg'] = {'Fscore':avg_fscore, 'Precision':avg_precision, 'Recall':avg_recall} - - save_metrics_path = os.path.join(opt.results_dir, f"best_{opt.dset_name}_{opt.eval_split_name}_preds_metrics.json") - save_json( scores_videos, save_metrics_path, save_pretty=True, sort_keys=False) - - tb_writer.add_scalar(f"Eval/QFVS-avg-fscore", round(avg_fscore, 2), 1) - tb_writer.add_text(f"Eval/QFVS-{opt.dset_name}", dict_to_markdown(scores_videos, max_str_len=None)) - tb_writer.close() - - print(scores_videos) - return - -if __name__ == '__main__': - start_training() - results = logger.info("\n\n\nFINISHED TRAINING!!!") diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/coders/__init__.py b/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/coders/__init__.py deleted file mode 100644 index e12fd64e12b5e76a014da9bd724f1b6f50b488c4..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/coders/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
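# Hedged usage sketch for the coders exported below (assumes mmdet 3.x is installed;
# the boxes are made-up example tensors, not taken from this repository). It shows
# the usual encode/decode round-trip of DeltaXYWHBBoxCoder.
import torch
from mmdet.models.task_modules.coders import DeltaXYWHBBoxCoder

coder = DeltaXYWHBBoxCoder()
priors = torch.tensor([[0., 0., 10., 10.]])  # anchor/proposal boxes in (x1, y1, x2, y2)
gts = torch.tensor([[1., 1., 11., 11.]])     # matched ground-truth boxes
deltas = coder.encode(priors, gts)           # regression targets (dx, dy, dw, dh)
decoded = coder.decode(priors, deltas)       # recovers `gts` up to numerical precision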
-from .base_bbox_coder import BaseBBoxCoder -from .bucketing_bbox_coder import BucketingBBoxCoder -from .delta_xywh_bbox_coder import DeltaXYWHBBoxCoder -from .distance_point_bbox_coder import DistancePointBBoxCoder -from .legacy_delta_xywh_bbox_coder import LegacyDeltaXYWHBBoxCoder -from .pseudo_bbox_coder import PseudoBBoxCoder -from .tblr_bbox_coder import TBLRBBoxCoder -from .yolo_bbox_coder import YOLOBBoxCoder - -__all__ = [ - 'BaseBBoxCoder', 'PseudoBBoxCoder', 'DeltaXYWHBBoxCoder', - 'LegacyDeltaXYWHBBoxCoder', 'TBLRBBoxCoder', 'YOLOBBoxCoder', - 'BucketingBBoxCoder', 'DistancePointBBoxCoder' -] diff --git a/spaces/LanguageBind/LanguageBind/scripts/depth_language/train.sh b/spaces/LanguageBind/LanguageBind/scripts/depth_language/train.sh deleted file mode 100644 index 152b34b0f3de12c7191c394618625eae97be7948..0000000000000000000000000000000000000000 --- a/spaces/LanguageBind/LanguageBind/scripts/depth_language/train.sh +++ /dev/null @@ -1,25 +0,0 @@ - -CACHE_DIR="path/to/pretrained/weight" -TRAIN_DATA="path/to/data" -# this script is for 1024 total batch_size (n(8) GPUs * batch_size(128) * accum_freq(1)) -cd /path/to/LanguageBind -TORCH_DISTRIBUTED_DEBUG=DETAIL HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 torchrun --nnodes=$HOST_NUM --node_rank=$INDEX --nproc_per_node $HOST_GPU_NUM --master_addr $CHIEF_IP \ - -m main \ - --train-data ${TRAIN_DATA} \ - --train-num-samples 3020000 \ - --clip-type "dl" --max-depth 10 \ - --do_train \ - --lock-text --lock-image --text-type "polish_mplug" \ - --init-temp 0.07 --learn-temp \ - --model "ViT-L-14" --cache-dir ${CACHE_DIR} \ - --convert_to_lora --lora_r 2 \ - --lr 5e-4 --coef-lr 1e-3 \ - --beta1 0.9 --beta2 0.98 --wd 0.2 --eps 1e-6 \ - --num-frames 1 --force-patch-dropout 0.5 \ - --epochs 1 --batch-size 128 --accum-freq 1 --warmup 200 \ - --precision "amp" --workers 10 --video-decode-backend "imgs" \ - --save-frequency 1 --log-every-n-steps 20 --report-to "tensorboard" --resume "latest" \ - --do_eval \ - --val_d_cls_data "NYUV2" - - diff --git a/spaces/Lightxr/sd-diffusers-webui/Dockerfile b/spaces/Lightxr/sd-diffusers-webui/Dockerfile deleted file mode 100644 index 1c566db9b07c5fae065dad8970c497be9bc1b407..0000000000000000000000000000000000000000 --- a/spaces/Lightxr/sd-diffusers-webui/Dockerfile +++ /dev/null @@ -1,22 +0,0 @@ -# Dockerfile Public T4 - -FROM nvidia/cuda:11.7.1-cudnn8-devel-ubuntu22.04 -ENV DEBIAN_FRONTEND noninteractive - -WORKDIR /content - -RUN apt-get update -y && apt-get upgrade -y && apt-get install -y libgl1 libglib2.0-0 wget git git-lfs python3-pip python-is-python3 && pip3 install --upgrade pip - -RUN pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchsde --extra-index-url https://download.pytorch.org/whl/cu113 -RUN pip install https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.16/xformers-0.0.16+814314d.d20230118-cp310-cp310-linux_x86_64.whl -RUN pip install --pre triton -RUN pip install numexpr einops transformers k_diffusion safetensors gradio diffusers==0.12.1 - -ADD . . 
-RUN adduser --disabled-password --gecos '' user -RUN chown -R user:user /content -RUN chmod -R 777 /content -USER user - -EXPOSE 7860 -CMD python /content/app.py diff --git a/spaces/Liviox24/LoanEligibilityPrediction/README.md b/spaces/Liviox24/LoanEligibilityPrediction/README.md deleted file mode 100644 index 234b8065bba17206b179c8a65b1ce2ff2b3d6323..0000000000000000000000000000000000000000 --- a/spaces/Liviox24/LoanEligibilityPrediction/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: LoanEligibilityPrediction -emoji: 📚 -colorFrom: red -colorTo: yellow -sdk: gradio -sdk_version: 3.0.22 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Mahiruoshi/lovelive-ShojoKageki-vits/utils.py b/spaces/Mahiruoshi/lovelive-ShojoKageki-vits/utils.py deleted file mode 100644 index a870556e805c8cba7dc0540c686710a6a62819c4..0000000000000000000000000000000000000000 --- a/spaces/Mahiruoshi/lovelive-ShojoKageki-vits/utils.py +++ /dev/null @@ -1,307 +0,0 @@ -import argparse -import glob -import json -import logging -import os -import subprocess -import sys - -import numpy as np -from scipy.io.wavfile import read -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.INFO) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except Exception as e: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})".format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, - checkpoint_path): - logger.info( - "Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path)) - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save( - { - 'model': state_dict, - 'iteration': iteration, - 'optimizer': optimizer.state_dict(), - 'learning_rate': learning_rate - }, checkpoint_path) - - -def summarize( - writer, - global_step, - scalars={}, # noqa - histograms={}, # noqa - images={}, # noqa - audios={}, # noqa - audio_sampling_rate=22050): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats='HWC') - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def 
plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, - aspect="auto", - origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3, )) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), - aspect='auto', - origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3, )) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', - '--config', - type=str, - default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', - '--model', - type=str, - required=True, - help='Model name') - parser.add_argument('--train_data', - type=str, - required=True, - help='train data') - parser.add_argument('--val_data', type=str, required=True, help='val data') - parser.add_argument('--phone_table', - type=str, - required=True, - help='phone table') - parser.add_argument('--speaker_table', - type=str, - default=None, - help='speaker table, required for multiple speakers') - - args = parser.parse_args() - model_dir = args.model - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r", encoding='utf8') as f: - data = f.read() - with open(config_save_path, "w", encoding='utf8') as f: - f.write(data) - else: - with open(config_save_path, "r", encoding='utf8') as f: - data = f.read() - config = json.loads(data) - config['data']['training_files'] = args.train_data - config['data']['validation_files'] = args.val_data - config['data']['phone_table'] = args.phone_table - # 0 is kept for blank - config['data']['num_phones'] = len(open(args.phone_table).readlines()) + 1 - if args.speaker_table is not None: - config['data']['speaker_table'] = args.speaker_table - # 0 is kept for unknown speaker - config['data']['n_speakers'] = len( - open(args.speaker_table).readlines()) + 1 - else: - config['data']['n_speakers'] = 0 - - hparams = 
HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn('''{} is not a git repository, therefore hash value - comparison will be ignored.'''.format(source_dir)) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn( - "git hash values are different. {}(saved) != {}(current)". - format(saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.INFO) - - formatter = logging.Formatter( - "%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.INFO) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/Masutxrxd/Masutxrxd/Dockerfile b/spaces/Masutxrxd/Masutxrxd/Dockerfile deleted file mode 100644 index eef259fa372a804549fb0af0913718a13344da34..0000000000000000000000000000000000000000 --- a/spaces/Masutxrxd/Masutxrxd/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ - apt-get install -y git -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] diff --git a/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/apps/render_data.py b/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/apps/render_data.py deleted file mode 100644 index 563c03fba6e304eced73ca283152a968a65c3b8e..0000000000000000000000000000000000000000 --- a/spaces/Mileena/PIFu-Clothed-Human-Digitization/PIFu/apps/render_data.py +++ /dev/null @@ -1,290 +0,0 @@ -#from data.config import raw_dataset, render_dataset, archive_dataset, model_list, zip_path - -from lib.renderer.camera import Camera -import numpy as np -from lib.renderer.mesh import load_obj_mesh, compute_tangent, compute_normal, load_obj_mesh_mtl -from lib.renderer.camera import Camera -import os -import cv2 -import time -import math -import random -import pyexr -import 
argparse -from tqdm import tqdm - - -def make_rotate(rx, ry, rz): - sinX = np.sin(rx) - sinY = np.sin(ry) - sinZ = np.sin(rz) - - cosX = np.cos(rx) - cosY = np.cos(ry) - cosZ = np.cos(rz) - - Rx = np.zeros((3,3)) - Rx[0, 0] = 1.0 - Rx[1, 1] = cosX - Rx[1, 2] = -sinX - Rx[2, 1] = sinX - Rx[2, 2] = cosX - - Ry = np.zeros((3,3)) - Ry[0, 0] = cosY - Ry[0, 2] = sinY - Ry[1, 1] = 1.0 - Ry[2, 0] = -sinY - Ry[2, 2] = cosY - - Rz = np.zeros((3,3)) - Rz[0, 0] = cosZ - Rz[0, 1] = -sinZ - Rz[1, 0] = sinZ - Rz[1, 1] = cosZ - Rz[2, 2] = 1.0 - - R = np.matmul(np.matmul(Rz,Ry),Rx) - return R - -def rotateSH(SH, R): - SHn = SH - - # 1st order - SHn[1] = R[1,1]*SH[1] - R[1,2]*SH[2] + R[1,0]*SH[3] - SHn[2] = -R[2,1]*SH[1] + R[2,2]*SH[2] - R[2,0]*SH[3] - SHn[3] = R[0,1]*SH[1] - R[0,2]*SH[2] + R[0,0]*SH[3] - - # 2nd order - SHn[4:,0] = rotateBand2(SH[4:,0],R) - SHn[4:,1] = rotateBand2(SH[4:,1],R) - SHn[4:,2] = rotateBand2(SH[4:,2],R) - - return SHn - -def rotateBand2(x, R): - s_c3 = 0.94617469575 - s_c4 = -0.31539156525 - s_c5 = 0.54627421529 - - s_c_scale = 1.0/0.91529123286551084 - s_c_scale_inv = 0.91529123286551084 - - s_rc2 = 1.5853309190550713*s_c_scale - s_c4_div_c3 = s_c4/s_c3 - s_c4_div_c3_x2 = (s_c4/s_c3)*2.0 - - s_scale_dst2 = s_c3 * s_c_scale_inv - s_scale_dst4 = s_c5 * s_c_scale_inv - - sh0 = x[3] + x[4] + x[4] - x[1] - sh1 = x[0] + s_rc2*x[2] + x[3] + x[4] - sh2 = x[0] - sh3 = -x[3] - sh4 = -x[1] - - r2x = R[0][0] + R[0][1] - r2y = R[1][0] + R[1][1] - r2z = R[2][0] + R[2][1] - - r3x = R[0][0] + R[0][2] - r3y = R[1][0] + R[1][2] - r3z = R[2][0] + R[2][2] - - r4x = R[0][1] + R[0][2] - r4y = R[1][1] + R[1][2] - r4z = R[2][1] + R[2][2] - - sh0_x = sh0 * R[0][0] - sh0_y = sh0 * R[1][0] - d0 = sh0_x * R[1][0] - d1 = sh0_y * R[2][0] - d2 = sh0 * (R[2][0] * R[2][0] + s_c4_div_c3) - d3 = sh0_x * R[2][0] - d4 = sh0_x * R[0][0] - sh0_y * R[1][0] - - sh1_x = sh1 * R[0][2] - sh1_y = sh1 * R[1][2] - d0 += sh1_x * R[1][2] - d1 += sh1_y * R[2][2] - d2 += sh1 * (R[2][2] * R[2][2] + s_c4_div_c3) - d3 += sh1_x * R[2][2] - d4 += sh1_x * R[0][2] - sh1_y * R[1][2] - - sh2_x = sh2 * r2x - sh2_y = sh2 * r2y - d0 += sh2_x * r2y - d1 += sh2_y * r2z - d2 += sh2 * (r2z * r2z + s_c4_div_c3_x2) - d3 += sh2_x * r2z - d4 += sh2_x * r2x - sh2_y * r2y - - sh3_x = sh3 * r3x - sh3_y = sh3 * r3y - d0 += sh3_x * r3y - d1 += sh3_y * r3z - d2 += sh3 * (r3z * r3z + s_c4_div_c3_x2) - d3 += sh3_x * r3z - d4 += sh3_x * r3x - sh3_y * r3y - - sh4_x = sh4 * r4x - sh4_y = sh4 * r4y - d0 += sh4_x * r4y - d1 += sh4_y * r4z - d2 += sh4 * (r4z * r4z + s_c4_div_c3_x2) - d3 += sh4_x * r4z - d4 += sh4_x * r4x - sh4_y * r4y - - dst = x - dst[0] = d0 - dst[1] = -d1 - dst[2] = d2 * s_scale_dst2 - dst[3] = -d3 - dst[4] = d4 * s_scale_dst4 - - return dst - -def render_prt_ortho(out_path, folder_name, subject_name, shs, rndr, rndr_uv, im_size, angl_step=4, n_light=1, pitch=[0]): - cam = Camera(width=im_size, height=im_size) - cam.ortho_ratio = 0.4 * (512 / im_size) - cam.near = -100 - cam.far = 100 - cam.sanity_check() - - # set path for obj, prt - mesh_file = os.path.join(folder_name, subject_name + '_100k.obj') - if not os.path.exists(mesh_file): - print('ERROR: obj file does not exist!!', mesh_file) - return - prt_file = os.path.join(folder_name, 'bounce', 'bounce0.txt') - if not os.path.exists(prt_file): - print('ERROR: prt file does not exist!!!', prt_file) - return - face_prt_file = os.path.join(folder_name, 'bounce', 'face.npy') - if not os.path.exists(face_prt_file): - print('ERROR: face prt file does not exist!!!', prt_file) - return - text_file = 
os.path.join(folder_name, 'tex', subject_name + '_dif_2k.jpg') - if not os.path.exists(text_file): - print('ERROR: dif file does not exist!!', text_file) - return - - texture_image = cv2.imread(text_file) - texture_image = cv2.cvtColor(texture_image, cv2.COLOR_BGR2RGB) - - vertices, faces, normals, faces_normals, textures, face_textures = load_obj_mesh(mesh_file, with_normal=True, with_texture=True) - vmin = vertices.min(0) - vmax = vertices.max(0) - up_axis = 1 if (vmax-vmin).argmax() == 1 else 2 - - vmed = np.median(vertices, 0) - vmed[up_axis] = 0.5*(vmax[up_axis]+vmin[up_axis]) - y_scale = 180/(vmax[up_axis] - vmin[up_axis]) - - rndr.set_norm_mat(y_scale, vmed) - rndr_uv.set_norm_mat(y_scale, vmed) - - tan, bitan = compute_tangent(vertices, faces, normals, textures, face_textures) - prt = np.loadtxt(prt_file) - face_prt = np.load(face_prt_file) - rndr.set_mesh(vertices, faces, normals, faces_normals, textures, face_textures, prt, face_prt, tan, bitan) - rndr.set_albedo(texture_image) - - rndr_uv.set_mesh(vertices, faces, normals, faces_normals, textures, face_textures, prt, face_prt, tan, bitan) - rndr_uv.set_albedo(texture_image) - - os.makedirs(os.path.join(out_path, 'GEO', 'OBJ', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'PARAM', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'RENDER', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'MASK', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'UV_RENDER', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'UV_MASK', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'UV_POS', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'UV_NORMAL', subject_name),exist_ok=True) - - if not os.path.exists(os.path.join(out_path, 'val.txt')): - f = open(os.path.join(out_path, 'val.txt'), 'w') - f.close() - - # copy obj file - cmd = 'cp %s %s' % (mesh_file, os.path.join(out_path, 'GEO', 'OBJ', subject_name)) - print(cmd) - os.system(cmd) - - for p in pitch: - for y in tqdm(range(0, 360, angl_step)): - R = np.matmul(make_rotate(math.radians(p), 0, 0), make_rotate(0, math.radians(y), 0)) - if up_axis == 2: - R = np.matmul(R, make_rotate(math.radians(90),0,0)) - - rndr.rot_matrix = R - rndr_uv.rot_matrix = R - rndr.set_camera(cam) - rndr_uv.set_camera(cam) - - for j in range(n_light): - sh_id = random.randint(0,shs.shape[0]-1) - sh = shs[sh_id] - sh_angle = 0.2*np.pi*(random.random()-0.5) - sh = rotateSH(sh, make_rotate(0, sh_angle, 0).T) - - dic = {'sh': sh, 'ortho_ratio': cam.ortho_ratio, 'scale': y_scale, 'center': vmed, 'R': R} - - rndr.set_sh(sh) - rndr.analytic = False - rndr.use_inverse_depth = False - rndr.display() - - out_all_f = rndr.get_color(0) - out_mask = out_all_f[:,:,3] - out_all_f = cv2.cvtColor(out_all_f, cv2.COLOR_RGBA2BGR) - - np.save(os.path.join(out_path, 'PARAM', subject_name, '%d_%d_%02d.npy'%(y,p,j)),dic) - cv2.imwrite(os.path.join(out_path, 'RENDER', subject_name, '%d_%d_%02d.jpg'%(y,p,j)),255.0*out_all_f) - cv2.imwrite(os.path.join(out_path, 'MASK', subject_name, '%d_%d_%02d.png'%(y,p,j)),255.0*out_mask) - - rndr_uv.set_sh(sh) - rndr_uv.analytic = False - rndr_uv.use_inverse_depth = False - rndr_uv.display() - - uv_color = rndr_uv.get_color(0) - uv_color = cv2.cvtColor(uv_color, cv2.COLOR_RGBA2BGR) - cv2.imwrite(os.path.join(out_path, 'UV_RENDER', subject_name, '%d_%d_%02d.jpg'%(y,p,j)),255.0*uv_color) - - if y == 0 and j == 0 and p == pitch[0]: - uv_pos = rndr_uv.get_color(1) - uv_mask = uv_pos[:,:,3] - 
cv2.imwrite(os.path.join(out_path, 'UV_MASK', subject_name, '00.png'),255.0*uv_mask) - - data = {'default': uv_pos[:,:,:3]} # default is a reserved name - pyexr.write(os.path.join(out_path, 'UV_POS', subject_name, '00.exr'), data) - - uv_nml = rndr_uv.get_color(2) - uv_nml = cv2.cvtColor(uv_nml, cv2.COLOR_RGBA2BGR) - cv2.imwrite(os.path.join(out_path, 'UV_NORMAL', subject_name, '00.png'),255.0*uv_nml) - - -if __name__ == '__main__': - shs = np.load('./env_sh.npy') - - parser = argparse.ArgumentParser() - parser.add_argument('-i', '--input', type=str, default='/home/shunsuke/Downloads/rp_dennis_posed_004_OBJ') - parser.add_argument('-o', '--out_dir', type=str, default='/home/shunsuke/Documents/hf_human') - parser.add_argument('-m', '--ms_rate', type=int, default=1, help='higher ms rate results in less aliased output. MESA renderer only supports ms_rate=1.') - parser.add_argument('-e', '--egl', action='store_true', help='egl rendering option. use this when rendering with headless server with NVIDIA GPU') - parser.add_argument('-s', '--size', type=int, default=512, help='rendering image size') - args = parser.parse_args() - - # NOTE: GL context has to be created before any other OpenGL function loads. - from lib.renderer.gl.init_gl import initialize_GL_context - initialize_GL_context(width=args.size, height=args.size, egl=args.egl) - - from lib.renderer.gl.prt_render import PRTRender - rndr = PRTRender(width=args.size, height=args.size, ms_rate=args.ms_rate, egl=args.egl) - rndr_uv = PRTRender(width=args.size, height=args.size, uv_mode=True, egl=args.egl) - - if args.input[-1] == '/': - args.input = args.input[:-1] - subject_name = args.input.split('/')[-1][:-4] - render_prt_ortho(args.out_dir, args.input, subject_name, shs, rndr, rndr_uv, args.size, 1, 1, pitch=[0]) \ No newline at end of file diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/datasets/preparers/parsers/__init__.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/datasets/preparers/parsers/__init__.py deleted file mode 100644 index fd37947107eba2d2cd54630f5d44360a046d7d32..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/datasets/preparers/parsers/__init__.py +++ /dev/null @@ -1,22 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .base import BaseParser -from .coco_parser import COCOTextDetAnnParser -from .ctw1500_parser import CTW1500AnnParser -from .funsd_parser import FUNSDTextDetAnnParser -from .icdar_txt_parser import (ICDARTxtTextDetAnnParser, - ICDARTxtTextRecogAnnParser) -from .mjsynth_parser import MJSynthAnnParser -from .naf_parser import NAFAnnParser -from .sroie_parser import SROIETextDetAnnParser -from .svt_parser import SVTTextDetAnnParser -from .synthtext_parser import SynthTextAnnParser -from .totaltext_parser import TotaltextTextDetAnnParser -from .wildreceipt_parser import WildreceiptKIEAnnParser - -__all__ = [ - 'BaseParser', 'ICDARTxtTextDetAnnParser', 'ICDARTxtTextRecogAnnParser', - 'TotaltextTextDetAnnParser', 'WildreceiptKIEAnnParser', - 'COCOTextDetAnnParser', 'SVTTextDetAnnParser', 'FUNSDTextDetAnnParser', - 'SROIETextDetAnnParser', 'NAFAnnParser', 'CTW1500AnnParser', - 'SynthTextAnnParser', 'MJSynthAnnParser' -] diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/visualization/textrecog_visualizer.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/visualization/textrecog_visualizer.py deleted file mode 100644 index d2f529b47f40b97d46ffdd73ee467da46e2e92c4..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/visualization/textrecog_visualizer.py +++ /dev/null @@ -1,144 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import Dict, Optional, Tuple, Union - -import cv2 -import mmcv -import numpy as np - -from mmocr.registry import VISUALIZERS -from mmocr.structures import TextRecogDataSample -from .base_visualizer import BaseLocalVisualizer - - -@VISUALIZERS.register_module() -class TextRecogLocalVisualizer(BaseLocalVisualizer): - """MMOCR Text Detection Local Visualizer. - - Args: - name (str): Name of the instance. Defaults to 'visualizer'. - image (np.ndarray, optional): The origin image to draw. The format - should be RGB. Defaults to None. - vis_backends (list, optional): Visual backend config list. - Defaults to None. - save_dir (str, optional): Save file dir for all storage backends. - If it is None, the backend storage will not save any data. - gt_color (str or tuple[int, int, int]): Colors of GT text. The tuple of - color should be in RGB order. Or using an abbreviation of color, - such as `'g'` for `'green'`. Defaults to 'g'. - pred_color (str or tuple[int, int, int]): Colors of Predicted text. - The tuple of color should be in RGB order. Or using an abbreviation - of color, such as `'r'` for `'red'`. Defaults to 'r'. - """ - - def __init__(self, - name: str = 'visualizer', - image: Optional[np.ndarray] = None, - vis_backends: Optional[Dict] = None, - save_dir: Optional[str] = None, - gt_color: Optional[Union[str, Tuple[int, int, int]]] = 'g', - pred_color: Optional[Union[str, Tuple[int, int, int]]] = 'r', - **kwargs) -> None: - super().__init__( - name=name, - image=image, - vis_backends=vis_backends, - save_dir=save_dir, - **kwargs) - self.gt_color = gt_color - self.pred_color = pred_color - - def _draw_instances(self, image: np.ndarray, text: str) -> np.ndarray: - """Draw text on image. - - Args: - image (np.ndarray): The image to draw. - text (str): The text to draw. - - Returns: - np.ndarray: The image with text drawn. 
- """ - height, width = image.shape[:2] - empty_img = np.full_like(image, 255) - self.set_image(empty_img) - font_size = min(0.5 * width / (len(text) + 1), 0.5 * height) - self.draw_texts( - text, - np.array([width / 2, height / 2]), - colors=self.gt_color, - font_sizes=font_size, - vertical_alignments='center', - horizontal_alignments='center', - font_families=self.font_families, - font_properties=self.font_properties) - text_image = self.get_image() - return text_image - - def add_datasample(self, - name: str, - image: np.ndarray, - data_sample: Optional['TextRecogDataSample'] = None, - draw_gt: bool = True, - draw_pred: bool = True, - show: bool = False, - wait_time: int = 0, - pred_score_thr: float = None, - out_file: Optional[str] = None, - step=0) -> None: - """Visualize datasample and save to all backends. - - - If GT and prediction are plotted at the same time, they are - displayed in a stitched image where the left image is the - ground truth and the right image is the prediction. - - If ``show`` is True, all storage backends are ignored, and - the images will be displayed in a local window. - - If ``out_file`` is specified, the drawn image will be - saved to ``out_file``. This is usually used when the display - is not available. - - Args: - name (str): The image title. Defaults to 'image'. - image (np.ndarray): The image to draw. - data_sample (:obj:`TextRecogDataSample`, optional): - TextRecogDataSample which contains gt and prediction. - Defaults to None. - draw_gt (bool): Whether to draw GT TextRecogDataSample. - Defaults to True. - draw_pred (bool): Whether to draw Predicted TextRecogDataSample. - Defaults to True. - show (bool): Whether to display the drawn image. Defaults to False. - wait_time (float): The interval of show (s). Defaults to 0. - out_file (str): Path to output file. Defaults to None. - step (int): Global step value to record. Defaults to 0. - pred_score_thr (float): Threshold of prediction score. It's not - used in this function. Defaults to None. 
- """ - height, width = image.shape[:2] - resize_height = 64 - resize_width = int(1.0 * width / height * resize_height) - image = cv2.resize(image, (resize_width, resize_height)) - - if image.ndim == 2: - image = cv2.cvtColor(image, cv2.COLOR_GRAY2RGB) - - cat_images = [image] - if (draw_gt and data_sample is not None and 'gt_text' in data_sample - and 'item' in data_sample.gt_text): - gt_text = data_sample.gt_text.item - cat_images.append(self._draw_instances(image, gt_text)) - if (draw_pred and data_sample is not None - and 'pred_text' in data_sample - and 'item' in data_sample.pred_text): - pred_text = data_sample.pred_text.item - cat_images.append(self._draw_instances(image, pred_text)) - cat_images = self._cat_image(cat_images, axis=0) - - if show: - self.show(cat_images, win_name=name, wait_time=wait_time) - else: - self.add_image(name, cat_images, step) - - if out_file is not None: - mmcv.imwrite(cat_images[..., ::-1], out_file) - - self.set_image(cat_images) - return self.get_image() diff --git a/spaces/NATSpeech/PortaSpeech/modules/commons/normalizing_flow/utils.py b/spaces/NATSpeech/PortaSpeech/modules/commons/normalizing_flow/utils.py deleted file mode 100644 index 7eb56ec514bff822ba1a19a6474207ed82492410..0000000000000000000000000000000000000000 --- a/spaces/NATSpeech/PortaSpeech/modules/commons/normalizing_flow/utils.py +++ /dev/null @@ -1,29 +0,0 @@ -import torch - - -def squeeze(x, x_mask=None, n_sqz=2): - b, c, t = x.size() - - t = (t // n_sqz) * n_sqz - x = x[:, :, :t] - x_sqz = x.view(b, c, t // n_sqz, n_sqz) - x_sqz = x_sqz.permute(0, 3, 1, 2).contiguous().view(b, c * n_sqz, t // n_sqz) - - if x_mask is not None: - x_mask = x_mask[:, :, n_sqz - 1::n_sqz] - else: - x_mask = torch.ones(b, 1, t // n_sqz).to(device=x.device, dtype=x.dtype) - return x_sqz * x_mask, x_mask - - -def unsqueeze(x, x_mask=None, n_sqz=2): - b, c, t = x.size() - - x_unsqz = x.view(b, n_sqz, c // n_sqz, t) - x_unsqz = x_unsqz.permute(0, 2, 3, 1).contiguous().view(b, c // n_sqz, t * n_sqz) - - if x_mask is not None: - x_mask = x_mask.unsqueeze(-1).repeat(1, 1, 1, n_sqz).view(b, 1, t * n_sqz) - else: - x_mask = torch.ones(b, 1, t * n_sqz).to(device=x.device, dtype=x.dtype) - return x_unsqz * x_mask, x_mask diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/xlnet/xlnet_config.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/xlnet/xlnet_config.py deleted file mode 100644 index 7852eadf469476b4772533dce563366cd3478317..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/nlp/xlnet/xlnet_config.py +++ /dev/null @@ -1,181 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ============================================================================== -"""Utility functions used in XLNet model.""" - -from __future__ import absolute_import -from __future__ import division -# from __future__ import google_type_annotations -from __future__ import print_function - -import json -import os - -import tensorflow as tf - - -def create_run_config(is_training, is_finetune, flags): - """Helper function for creating RunConfig.""" - kwargs = dict( - is_training=is_training, - use_tpu=flags.use_tpu, - dropout=flags.dropout, - dropout_att=flags.dropout_att, - init_method=flags.init_method, - init_range=flags.init_range, - init_std=flags.init_std, - clamp_len=flags.clamp_len) - - if not is_finetune: - kwargs.update(dict( - mem_len=flags.mem_len, - reuse_len=flags.reuse_len, - bi_data=flags.bi_data, - clamp_len=flags.clamp_len, - same_length=flags.same_length)) - - return RunConfig(**kwargs) - - -# TODO(hongkuny): refactor XLNetConfig and RunConfig. -class XLNetConfig(object): - """Configs for XLNet model. - - XLNetConfig contains hyperparameters that are specific to a model checkpoint; - i.e., these hyperparameters should be the same between - pretraining and finetuning. - - The following hyperparameters are defined: - n_layer: int, the number of layers. - d_model: int, the hidden size. - n_head: int, the number of attention heads. - d_head: int, the dimension size of each attention head. - d_inner: int, the hidden size in feed-forward layers. - ff_activation: str, "relu" or "gelu". - untie_r: bool, whether to untie the biases in attention. - n_token: int, the vocab size. - """ - - def __init__(self, FLAGS=None, json_path=None, args_dict=None): - """Constructing an XLNetConfig. - - One of FLAGS or json_path should be provided. - - Args: - FLAGS: An FLAGS instance. - json_path: A path to a json config file. - args_dict: A dict for args. - """ - - assert FLAGS is not None or json_path is not None or args_dict is not None - - self.keys = ['n_layer', 'd_model', 'n_head', 'd_head', 'd_inner', - 'ff_activation', 'untie_r', 'n_token'] - - if FLAGS is not None: - self.init_from_flags(FLAGS) - - if json_path is not None: - self.init_from_json(json_path) - - if args_dict is not None: - self.init_from_dict(args_dict) - - def init_from_dict(self, args_dict): - """Constructs a `BertConfig` from a Python dictionary of parameters.""" - for key in self.keys: - setattr(self, key, args_dict[key]) - - def init_from_flags(self, flags): - for key in self.keys: - setattr(self, key, getattr(flags, key)) - - def init_from_json(self, json_path): - with tf.io.gfile.GFile(json_path) as f: - json_data = json.load(f) - self.init_from_dict(json_data) - - def to_json(self, json_path): - """Save XLNetConfig to a json file.""" - json_data = {} - for key in self.keys: - json_data[key] = getattr(self, key) - - json_dir = os.path.dirname(json_path) - if not tf.io.gfile.exists(json_dir): - tf.io.gfile.makedirs(json_dir) - with tf.io.gfile.GFile(json_path, 'w') as f: - json.dump(json_data, f, indent=4, sort_keys=True) - - -class RunConfig(object): - """Class of RunConfig. - - RunConfig contains hyperparameters that could be different - between pretraining and finetuning. - These hyperparameters can also be changed from run to run. - We store them separately from XLNetConfig for flexibility. 
- """ - - def __init__(self, - is_training, - use_tpu, - dropout, - dropout_att, - init_method='normal', - init_range=0.1, - init_std=0.02, - mem_len=None, - reuse_len=None, - bi_data=False, - clamp_len=-1, - same_length=False, - use_cls_mask=True): - """Initializes RunConfig. - - Args: - is_training: bool, whether in training mode. - use_tpu: bool, whether TPUs are used. - dropout: float, dropout rate. - dropout_att: float, dropout rate on attention probabilities. - init_method: str, the initialization scheme, either "normal" or "uniform". - init_range: float, initialize the parameters with a uniform distribution - in [-init_range, init_range]. Only effective when init="uniform". - init_std: float, initialize the parameters with a normal distribution - with mean 0 and stddev init_std. Only effective when init="normal". - mem_len: int, the number of tokens to cache. - reuse_len: int, the number of tokens in the currect batch to be cached - and reused in the future. - bi_data: bool, whether to use bidirectional input pipeline. - Usually set to True during pretraining and False during finetuning. - clamp_len: int, clamp all relative distances larger than clamp_len. - -1 means no clamping. - same_length: bool, whether to use the same attention length - for each token. - use_cls_mask: bool, whether to introduce cls mask. - """ - - self.init_method = init_method - self.init_range = init_range - self.init_std = init_std - self.is_training = is_training - self.dropout = dropout - self.dropout_att = dropout_att - self.use_tpu = use_tpu - self.mem_len = mem_len - self.reuse_len = reuse_len - self.bi_data = bi_data - self.clamp_len = clamp_len - self.same_length = same_length - self.use_cls_mask = use_cls_mask diff --git a/spaces/NCTCMumbai/NCTC/models/official/vision/image_classification/optimizer_factory_test.py b/spaces/NCTCMumbai/NCTC/models/official/vision/image_classification/optimizer_factory_test.py deleted file mode 100644 index c952618c126b4ee18b4a7f0ee87a91cff873a109..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/vision/image_classification/optimizer_factory_test.py +++ /dev/null @@ -1,117 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ============================================================================== -"""Tests for optimizer_factory.""" - -from __future__ import absolute_import -from __future__ import division -# from __future__ import google_type_annotations -from __future__ import print_function - -import tensorflow as tf - -from absl.testing import parameterized -from official.vision.image_classification import optimizer_factory -from official.vision.image_classification.configs import base_configs - - -class OptimizerFactoryTest(tf.test.TestCase, parameterized.TestCase): - - @parameterized.named_parameters( - ('sgd', 'sgd', 0., False), - ('momentum', 'momentum', 0., False), - ('rmsprop', 'rmsprop', 0., False), - ('adam', 'adam', 0., False), - ('adamw', 'adamw', 0., False), - ('momentum_lookahead', 'momentum', 0., True), - ('sgd_ema', 'sgd', 0.999, False), - ('momentum_ema', 'momentum', 0.999, False), - ('rmsprop_ema', 'rmsprop', 0.999, False)) - def test_optimizer(self, optimizer_name, moving_average_decay, lookahead): - """Smoke test to be sure no syntax errors.""" - params = { - 'learning_rate': 0.001, - 'rho': 0.09, - 'momentum': 0., - 'epsilon': 1e-07, - 'moving_average_decay': moving_average_decay, - 'lookahead': lookahead, - } - optimizer = optimizer_factory.build_optimizer( - optimizer_name=optimizer_name, - base_learning_rate=params['learning_rate'], - params=params) - self.assertTrue(issubclass(type(optimizer), tf.keras.optimizers.Optimizer)) - - def test_unknown_optimizer(self): - with self.assertRaises(ValueError): - optimizer_factory.build_optimizer( - optimizer_name='this_optimizer_does_not_exist', - base_learning_rate=None, - params=None) - - def test_learning_rate_without_decay_or_warmups(self): - params = base_configs.LearningRateConfig( - name='exponential', - initial_lr=0.01, - decay_rate=0.01, - decay_epochs=None, - warmup_epochs=None, - scale_by_batch_size=0.01, - examples_per_epoch=1, - boundaries=[0], - multipliers=[0, 1]) - batch_size = 1 - train_steps = 1 - - lr = optimizer_factory.build_learning_rate( - params=params, - batch_size=batch_size, - train_steps=train_steps) - self.assertTrue( - issubclass( - type(lr), tf.keras.optimizers.schedules.LearningRateSchedule)) - - @parameterized.named_parameters( - ('exponential', 'exponential'), - ('piecewise_constant_with_warmup', 'piecewise_constant_with_warmup'), - ('cosine_with_warmup', 'cosine_with_warmup')) - def test_learning_rate_with_decay_and_warmup(self, lr_decay_type): - """Basic smoke test for syntax.""" - params = base_configs.LearningRateConfig( - name=lr_decay_type, - initial_lr=0.01, - decay_rate=0.01, - decay_epochs=1, - warmup_epochs=1, - scale_by_batch_size=0.01, - examples_per_epoch=1, - boundaries=[0], - multipliers=[0, 1]) - batch_size = 1 - train_epochs = 1 - train_steps = 1 - - lr = optimizer_factory.build_learning_rate( - params=params, - batch_size=batch_size, - train_epochs=train_epochs, - train_steps=train_steps) - self.assertTrue( - issubclass( - type(lr), tf.keras.optimizers.schedules.LearningRateSchedule)) - - -if __name__ == '__main__': - tf.test.main() diff --git a/spaces/Nephele/bert-vits2-multi-voice/text/english_bert_mock.py b/spaces/Nephele/bert-vits2-multi-voice/text/english_bert_mock.py deleted file mode 100644 index 3b894ced5b6d619a18d6bdd7d7606ba9e6532050..0000000000000000000000000000000000000000 --- a/spaces/Nephele/bert-vits2-multi-voice/text/english_bert_mock.py +++ /dev/null @@ -1,5 +0,0 @@ -import torch - - -def get_bert_feature(norm_text, word2ph): - return torch.zeros(1024, sum(word2ph)) 
diff --git a/spaces/NickNYU/NickFriendsHouse/app.py b/spaces/NickNYU/NickFriendsHouse/app.py deleted file mode 100644 index 4884fa89b788e3f312045f695bb5d035c04523f3..0000000000000000000000000000000000000000 --- a/spaces/NickNYU/NickFriendsHouse/app.py +++ /dev/null @@ -1,92 +0,0 @@ -import streamlit as st -import os -import pickle -from PyPDF2 import PdfReader -from langchain import FAISS -from langchain.text_splitter import RecursiveCharacterTextSplitter -from langchain.embeddings.openai import OpenAIEmbeddings - -from langchain.vectorstores import Pinecone -import pinecone -from langchain.llms import OpenAI -from langchain.chains.question_answering import load_qa_chain -from langchain.callbacks import get_openai_callback - -# Sidebar contents -with st.sidebar: - st.title('🤗💬 LLM Chat App') - st.markdown(''' - ## About - This app is an LLM-powered chatbot built using: - - [Streamlit](https://streamlit.io/) - - [LangChain](https://python.langchain.com/) - - [OpenAI](https://platform.openai.com/docs/models) LLM model - - ''') - # add_vertical_space(5) - st.write('Made by Nick') - - -def main(): - st.header("Chat with PDF 💬") - - # upload a PDF file - pdf = st.file_uploader("Upload your PDF", type='pdf') - - if pdf is not None: - pdf_reader = PdfReader(pdf) - - text = "" - for page in pdf_reader.pages: - text += page.extract_text() - - text_splitter = RecursiveCharacterTextSplitter( - chunk_size=512, - chunk_overlap=128, - length_function=len - ) - chunks = text_splitter.split_text(text=text) - - # # embeddings - store_name = pdf.name[:-4] - st.write(f'{store_name}') - - if os.path.exists(f"{store_name}.pkl"): - with open(f"{store_name}.pkl", "rb") as f: - VectorStore = pickle.load(f) - st.write('Embeddings Loaded from the Disk') - else: - st.write('Embeddings calculate to the Pinecone') - embeddings = OpenAIEmbeddings() - VectorStore = FAISS.from_texts(chunks, embedding=embeddings) - with open(f"{store_name}.pkl", "wb") as f: - pickle.dump(VectorStore, f) - - # PINECONE_API_KEY = os.environ.get('PINECONE_API_KEY', '894d5f1f-df46-4b01-8407-d9977eaee2eb') - # PINECONE_API_ENV = os.environ.get('PINECONE_API_ENV', - # 'asia-southeast1-gcp-free') # You may need to switch with your env - # # initialize pinecone - # pinecone.init( - # api_key=PINECONE_API_KEY, # find at app.pinecone.io - # environment=PINECONE_API_ENV # next to api key in console - # ) - # index_name = "indexer" # put in the name of your pinecone index here - # VectorStore = Pinecone.from_texts([t.page_content for t in chunks], embeddings, index_name=index_name) - - # Accept user questions/query - query = st.text_input("Ask questions about your PDF file:") - # st.write(query) - - if query: - docs = VectorStore.similarity_search(query=query, k=3) - - llm = OpenAI() - chain = load_qa_chain(llm=llm, chain_type="stuff") - with get_openai_callback() as cb: - response = chain.run(input_documents=docs, question=query) - print(cb) - st.write(response) - - -if __name__ == '__main__': - main() \ No newline at end of file diff --git a/spaces/Ntabukiraniro/Recipe/args.py b/spaces/Ntabukiraniro/Recipe/args.py deleted file mode 100644 index d9aea99672a0afa5bd1a29e6ac38ba32f9d67ea9..0000000000000000000000000000000000000000 --- a/spaces/Ntabukiraniro/Recipe/args.py +++ /dev/null @@ -1,166 +0,0 @@ -import argparse -import os - - -def get_parser(): - - parser = argparse.ArgumentParser() - - parser.add_argument('--save_dir', type=str, default='path/to/save/models', - help='path where checkpoints will be saved') - - 
parser.add_argument('--project_name', type=str, default='inversecooking', - help='name of the directory where models will be saved within save_dir') - - parser.add_argument('--model_name', type=str, default='model', - help='save_dir/project_name/model_name will be the path where logs and checkpoints are stored') - - parser.add_argument('--transfer_from', type=str, default='', - help='specify model name to transfer from') - - parser.add_argument('--suff', type=str, default='', - help='the id of the dictionary to load for training') - - parser.add_argument('--image_model', type=str, default='resnet50', choices=['resnet18', 'resnet50', 'resnet101', - 'resnet152', 'inception_v3']) - - parser.add_argument('--recipe1m_dir', type=str, default='path/to/recipe1m', - help='directory where recipe1m dataset is extracted') - - parser.add_argument('--aux_data_dir', type=str, default='../data', - help='path to other necessary data files (eg. vocabularies)') - - parser.add_argument('--crop_size', type=int, default=224, help='size for randomly or center cropping images') - - parser.add_argument('--image_size', type=int, default=256, help='size to rescale images') - - parser.add_argument('--log_step', type=int , default=10, help='step size for printing log info') - - parser.add_argument('--learning_rate', type=float, default=0.001, - help='base learning rate') - - parser.add_argument('--scale_learning_rate_cnn', type=float, default=0.01, - help='lr multiplier for cnn weights') - - parser.add_argument('--lr_decay_rate', type=float, default=0.99, - help='learning rate decay factor') - - parser.add_argument('--lr_decay_every', type=int, default=1, - help='frequency of learning rate decay (default is every epoch)') - - parser.add_argument('--weight_decay', type=float, default=0.) - - parser.add_argument('--embed_size', type=int, default=512, - help='hidden size for all projections') - - parser.add_argument('--n_att', type=int, default=8, - help='number of attention heads in the instruction decoder') - - parser.add_argument('--n_att_ingrs', type=int, default=4, - help='number of attention heads in the ingredient decoder') - - parser.add_argument('--transf_layers', type=int, default=16, - help='number of transformer layers in the instruction decoder') - - parser.add_argument('--transf_layers_ingrs', type=int, default=4, - help='number of transformer layers in the ingredient decoder') - - parser.add_argument('--num_epochs', type=int, default=400, - help='maximum number of epochs') - - parser.add_argument('--batch_size', type=int, default=128) - - parser.add_argument('--num_workers', type=int, default=8) - - parser.add_argument('--dropout_encoder', type=float, default=0.3, - help='dropout ratio for the image and ingredient encoders') - - parser.add_argument('--dropout_decoder_r', type=float, default=0.3, - help='dropout ratio in the instruction decoder') - - parser.add_argument('--dropout_decoder_i', type=float, default=0.3, - help='dropout ratio in the ingredient decoder') - - parser.add_argument('--finetune_after', type=int, default=-1, - help='epoch to start training cnn. -1 is never, 0 is from the beginning') - - parser.add_argument('--loss_weight', nargs='+', type=float, default=[1.0, 0.0, 0.0, 0.0], - help='training loss weights. 
1) instruction, 2) ingredient, 3) eos 4) cardinality') - - parser.add_argument('--max_eval', type=int, default=4096, - help='number of validation samples to evaluate during training') - - parser.add_argument('--label_smoothing_ingr', type=float, default=0.1, - help='label smoothing for bce loss for ingredients') - - parser.add_argument('--patience', type=int, default=50, - help='maximum number of epochs to allow before early stopping') - - parser.add_argument('--maxseqlen', type=int, default=15, - help='maximum length of each instruction') - - parser.add_argument('--maxnuminstrs', type=int, default=10, - help='maximum number of instructions') - - parser.add_argument('--maxnumims', type=int, default=5, - help='maximum number of images per sample') - - parser.add_argument('--maxnumlabels', type=int, default=20, - help='maximum number of ingredients per sample') - - parser.add_argument('--es_metric', type=str, default='loss', choices=['loss', 'iou_sample'], - help='early stopping metric to track') - - parser.add_argument('--eval_split', type=str, default='val') - - parser.add_argument('--numgens', type=int, default=3) - - parser.add_argument('--greedy', dest='greedy', action='store_true', - help='enables greedy sampling (inference only)') - parser.set_defaults(greedy=False) - - parser.add_argument('--temperature', type=float, default=1.0, - help='sampling temperature (when greedy is False)') - - parser.add_argument('--beam', type=int, default=-1, - help='beam size. -1 means no beam search (either greedy or sampling)') - - parser.add_argument('--ingrs_only', dest='ingrs_only', action='store_true', - help='train or evaluate the model only for ingredient prediction') - parser.set_defaults(ingrs_only=False) - - parser.add_argument('--recipe_only', dest='recipe_only', action='store_true', - help='train or evaluate the model only for instruction generation') - parser.set_defaults(recipe_only=False) - - parser.add_argument('--log_term', dest='log_term', action='store_true', - help='if used, shows training log in stdout instead of saving it to a file.') - parser.set_defaults(log_term=False) - - parser.add_argument('--notensorboard', dest='tensorboard', action='store_false', - help='if used, tensorboard logs will not be saved') - parser.set_defaults(tensorboard=True) - - parser.add_argument('--resume', dest='resume', action='store_true', - help='resume training from the checkpoint in model_name') - parser.set_defaults(resume=False) - - parser.add_argument('--nodecay_lr', dest='decay_lr', action='store_false', - help='disables learning rate decay') - parser.set_defaults(decay_lr=True) - - parser.add_argument('--load_jpeg', dest='use_lmdb', action='store_false', - help='if used, images are loaded from jpg files instead of lmdb') - parser.set_defaults(use_lmdb=True) - - parser.add_argument('--get_perplexity', dest='get_perplexity', action='store_true', - help='used to get perplexity in evaluation') - parser.set_defaults(get_perplexity=False) - - parser.add_argument('--use_true_ingrs', dest='use_true_ingrs', action='store_true', - help='if used, true ingredients will be used as input to obtain the recipe in evaluation') - parser.set_defaults(use_true_ingrs=False) - - args = parser.parse_args() - - return args diff --git a/spaces/Nyari/Super-Resolution-Anime-Diffusion/RealESRGANv030/realesrgan/utils.py b/spaces/Nyari/Super-Resolution-Anime-Diffusion/RealESRGANv030/realesrgan/utils.py deleted file mode 100644 index 97258f1706cc76773011e24a11bf417ea76ae112..0000000000000000000000000000000000000000 --- 
a/spaces/Nyari/Super-Resolution-Anime-Diffusion/RealESRGANv030/realesrgan/utils.py +++ /dev/null @@ -1,357 +0,0 @@ -import cv2 -import math -import numpy as np -import os -import queue -import threading -import torch -from basicsr.utils.download_util import load_file_from_url -from torch.nn import functional as F - -ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) - - -class RealESRGANer: - """A helper class for upsampling images with RealESRGAN. - - Args: - scale (int): Upsampling scale factor used in the networks. It is usually 2 or 4. - model_path (str): The path to the pretrained model. It can be urls (will first download it automatically). - model (nn.Module): The defined network. Default: None. - tile (int): As too large images result in the out of GPU memory issue, so this tile option will first crop - input images into tiles, and then process each of them. Finally, they will be merged into one image. - 0 denotes for do not use tile. Default: 0. - tile_pad (int): The pad size for each tile, to remove border artifacts. Default: 10. - pre_pad (int): Pad the input images to avoid border artifacts. Default: 10. - half (float): Whether to use half precision during inference. Default: False. - """ - - def __init__( - self, - scale, - model_path, - dni_weight=None, - model=None, - tile=0, - tile_pad=10, - pre_pad=10, - half=False, - device=None, - gpu_id=None, - ): - self.scale = scale - self.tile_size = tile - self.tile_pad = tile_pad - self.pre_pad = pre_pad - self.mod_scale = None - self.half = half - - # initialize model - if gpu_id: - self.device = ( - torch.device(f"cuda:{gpu_id}" if torch.cuda.is_available() else "cpu") - if device is None - else device - ) - else: - self.device = ( - torch.device("cuda" if torch.cuda.is_available() else "cpu") - if device is None - else device - ) - - if isinstance(model_path, list): - # dni - assert len(model_path) == len( - dni_weight - ), "model_path and dni_weight should have the save length." - loadnet = self.dni(model_path[0], model_path[1], dni_weight) - else: - # if the model_path starts with https, it will first download models to the folder: weights - if model_path.startswith("https://"): - model_path = load_file_from_url( - url=model_path, - model_dir=os.path.join(ROOT_DIR, "weights"), - progress=True, - file_name=None, - ) - loadnet = torch.load(model_path, map_location=torch.device("cpu")) - - # prefer to use params_ema - if "params_ema" in loadnet: - keyname = "params_ema" - else: - keyname = "params" - model.load_state_dict(loadnet[keyname], strict=True) - - model.eval() - self.model = model.to(self.device) - if self.half: - self.model = self.model.half() - - def dni(self, net_a, net_b, dni_weight, key="params", loc="cpu"): - """Deep network interpolation. 
- - ``Paper: Deep Network Interpolation for Continuous Imagery Effect Transition`` - """ - net_a = torch.load(net_a, map_location=torch.device(loc)) - net_b = torch.load(net_b, map_location=torch.device(loc)) - for k, v_a in net_a[key].items(): - net_a[key][k] = dni_weight[0] * v_a + dni_weight[1] * net_b[key][k] - return net_a - - def pre_process(self, img): - """Pre-process, such as pre-pad and mod pad, so that the images can be divisible""" - img = torch.from_numpy(np.transpose(img, (2, 0, 1))).float() - self.img = img.unsqueeze(0).to(self.device) - if self.half: - self.img = self.img.half() - - # pre_pad - if self.pre_pad != 0: - self.img = F.pad(self.img, (0, self.pre_pad, 0, self.pre_pad), "reflect") - # mod pad for divisible borders - if self.scale == 2: - self.mod_scale = 2 - elif self.scale == 1: - self.mod_scale = 4 - if self.mod_scale is not None: - self.mod_pad_h, self.mod_pad_w = 0, 0 - _, _, h, w = self.img.size() - if h % self.mod_scale != 0: - self.mod_pad_h = self.mod_scale - h % self.mod_scale - if w % self.mod_scale != 0: - self.mod_pad_w = self.mod_scale - w % self.mod_scale - self.img = F.pad( - self.img, (0, self.mod_pad_w, 0, self.mod_pad_h), "reflect" - ) - - def process(self): - # model inference - self.output = self.model(self.img) - - def tile_process(self): - """It will first crop input images to tiles, and then process each tile. - Finally, all the processed tiles are merged into one images. - - Modified from: https://github.com/ata4/esrgan-launcher - """ - batch, channel, height, width = self.img.shape - output_height = height * self.scale - output_width = width * self.scale - output_shape = (batch, channel, output_height, output_width) - - # start with black image - self.output = self.img.new_zeros(output_shape) - tiles_x = math.ceil(width / self.tile_size) - tiles_y = math.ceil(height / self.tile_size) - - # loop over all tiles - for y in range(tiles_y): - for x in range(tiles_x): - # extract tile from input image - ofs_x = x * self.tile_size - ofs_y = y * self.tile_size - # input tile area on total image - input_start_x = ofs_x - input_end_x = min(ofs_x + self.tile_size, width) - input_start_y = ofs_y - input_end_y = min(ofs_y + self.tile_size, height) - - # input tile area on total image with padding - input_start_x_pad = max(input_start_x - self.tile_pad, 0) - input_end_x_pad = min(input_end_x + self.tile_pad, width) - input_start_y_pad = max(input_start_y - self.tile_pad, 0) - input_end_y_pad = min(input_end_y + self.tile_pad, height) - - # input tile dimensions - input_tile_width = input_end_x - input_start_x - input_tile_height = input_end_y - input_start_y - tile_idx = y * tiles_x + x + 1 - input_tile = self.img[ - :, - :, - input_start_y_pad:input_end_y_pad, - input_start_x_pad:input_end_x_pad, - ] - - # upscale tile - try: - with torch.no_grad(): - output_tile = self.model(input_tile) - except RuntimeError as error: - print("Error", error) - print(f"\tTile {tile_idx}/{tiles_x * tiles_y}") - - # output tile area on total image - output_start_x = input_start_x * self.scale - output_end_x = input_end_x * self.scale - output_start_y = input_start_y * self.scale - output_end_y = input_end_y * self.scale - - # output tile area without padding - output_start_x_tile = (input_start_x - input_start_x_pad) * self.scale - output_end_x_tile = output_start_x_tile + input_tile_width * self.scale - output_start_y_tile = (input_start_y - input_start_y_pad) * self.scale - output_end_y_tile = output_start_y_tile + input_tile_height * self.scale - - # put tile into 
output image - self.output[ - :, :, output_start_y:output_end_y, output_start_x:output_end_x - ] = output_tile[ - :, - :, - output_start_y_tile:output_end_y_tile, - output_start_x_tile:output_end_x_tile, - ] - - def post_process(self): - # remove extra pad - if self.mod_scale is not None: - _, _, h, w = self.output.size() - self.output = self.output[ - :, - :, - 0 : h - self.mod_pad_h * self.scale, - 0 : w - self.mod_pad_w * self.scale, - ] - # remove prepad - if self.pre_pad != 0: - _, _, h, w = self.output.size() - self.output = self.output[ - :, - :, - 0 : h - self.pre_pad * self.scale, - 0 : w - self.pre_pad * self.scale, - ] - return self.output - - @torch.no_grad() - def enhance(self, img, outscale=None, alpha_upsampler="realesrgan"): - h_input, w_input = img.shape[0:2] - # img: numpy - img = img.astype(np.float32) - if np.max(img) > 256: # 16-bit image - max_range = 65535 - print("\tInput is a 16-bit image") - else: - max_range = 255 - img = img / max_range - if len(img.shape) == 2: # gray image - img_mode = "L" - img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB) - elif img.shape[2] == 4: # RGBA image with alpha channel - img_mode = "RGBA" - alpha = img[:, :, 3] - img = img[:, :, 0:3] - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - if alpha_upsampler == "realesrgan": - alpha = cv2.cvtColor(alpha, cv2.COLOR_GRAY2RGB) - else: - img_mode = "RGB" - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - - # ------------------- process image (without the alpha channel) ------------------- # - self.pre_process(img) - if self.tile_size > 0: - self.tile_process() - else: - self.process() - output_img = self.post_process() - output_img = output_img.data.squeeze().float().cpu().clamp_(0, 1).numpy() - output_img = np.transpose(output_img[[2, 1, 0], :, :], (1, 2, 0)) - if img_mode == "L": - output_img = cv2.cvtColor(output_img, cv2.COLOR_BGR2GRAY) - - # ------------------- process the alpha channel if necessary ------------------- # - if img_mode == "RGBA": - if alpha_upsampler == "realesrgan": - self.pre_process(alpha) - if self.tile_size > 0: - self.tile_process() - else: - self.process() - output_alpha = self.post_process() - output_alpha = ( - output_alpha.data.squeeze().float().cpu().clamp_(0, 1).numpy() - ) - output_alpha = np.transpose(output_alpha[[2, 1, 0], :, :], (1, 2, 0)) - output_alpha = cv2.cvtColor(output_alpha, cv2.COLOR_BGR2GRAY) - else: # use the cv2 resize for alpha channel - h, w = alpha.shape[0:2] - output_alpha = cv2.resize( - alpha, - (w * self.scale, h * self.scale), - interpolation=cv2.INTER_LINEAR, - ) - - # merge the alpha channel - output_img = cv2.cvtColor(output_img, cv2.COLOR_BGR2BGRA) - output_img[:, :, 3] = output_alpha - - # ------------------------------ return ------------------------------ # - if max_range == 65535: # 16-bit image - output = (output_img * 65535.0).round().astype(np.uint16) - else: - output = (output_img * 255.0).round().astype(np.uint8) - - if outscale is not None and outscale != float(self.scale): - output = cv2.resize( - output, - ( - int(w_input * outscale), - int(h_input * outscale), - ), - interpolation=cv2.INTER_LANCZOS4, - ) - - return output, img_mode - - -class PrefetchReader(threading.Thread): - """Prefetch images. - - Args: - img_list (list[str]): A image list of image paths to be read. - num_prefetch_queue (int): Number of prefetch queue. 
- """ - - def __init__(self, img_list, num_prefetch_queue): - super().__init__() - self.que = queue.Queue(num_prefetch_queue) - self.img_list = img_list - - def run(self): - for img_path in self.img_list: - img = cv2.imread(img_path, cv2.IMREAD_UNCHANGED) - self.que.put(img) - - self.que.put(None) - - def __next__(self): - next_item = self.que.get() - if next_item is None: - raise StopIteration - return next_item - - def __iter__(self): - return self - - -class IOConsumer(threading.Thread): - def __init__(self, opt, que, qid): - super().__init__() - self._queue = que - self.qid = qid - self.opt = opt - - def run(self): - while True: - msg = self._queue.get() - if isinstance(msg, str) and msg == "quit": - break - - output = msg["output"] - save_path = msg["save_path"] - cv2.imwrite(save_path, output) - print(f"IO worker {self.qid} is done.") diff --git a/spaces/OAOA/DifFace/basicsr/ops/dcn/src/deform_conv_cuda.cpp b/spaces/OAOA/DifFace/basicsr/ops/dcn/src/deform_conv_cuda.cpp deleted file mode 100644 index b465c493a3dd67d320b7a8997fbd501d2f89c807..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/basicsr/ops/dcn/src/deform_conv_cuda.cpp +++ /dev/null @@ -1,685 +0,0 @@ -// modify from -// https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/blob/mmdetection/mmdet/ops/dcn/src/deform_conv_cuda.c - -#include -#include - -#include -#include - -void deformable_im2col(const at::Tensor data_im, const at::Tensor data_offset, - const int channels, const int height, const int width, - const int ksize_h, const int ksize_w, const int pad_h, - const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int parallel_imgs, const int deformable_group, - at::Tensor data_col); - -void deformable_col2im(const at::Tensor data_col, const at::Tensor data_offset, - const int channels, const int height, const int width, - const int ksize_h, const int ksize_w, const int pad_h, - const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int parallel_imgs, const int deformable_group, - at::Tensor grad_im); - -void deformable_col2im_coord( - const at::Tensor data_col, const at::Tensor data_im, - const at::Tensor data_offset, const int channels, const int height, - const int width, const int ksize_h, const int ksize_w, const int pad_h, - const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, const int parallel_imgs, - const int deformable_group, at::Tensor grad_offset); - -void modulated_deformable_im2col_cuda( - const at::Tensor data_im, const at::Tensor data_offset, - const at::Tensor data_mask, const int batch_size, const int channels, - const int height_im, const int width_im, const int height_col, - const int width_col, const int kernel_h, const int kenerl_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, const int deformable_group, - at::Tensor data_col); - -void modulated_deformable_col2im_cuda( - const at::Tensor data_col, const at::Tensor data_offset, - const at::Tensor data_mask, const int batch_size, const int channels, - const int height_im, const int width_im, const int height_col, - const int width_col, const int kernel_h, const int kenerl_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, const int deformable_group, - at::Tensor grad_im); - -void 
modulated_deformable_col2im_coord_cuda( - const at::Tensor data_col, const at::Tensor data_im, - const at::Tensor data_offset, const at::Tensor data_mask, - const int batch_size, const int channels, const int height_im, - const int width_im, const int height_col, const int width_col, - const int kernel_h, const int kenerl_w, const int pad_h, const int pad_w, - const int stride_h, const int stride_w, const int dilation_h, - const int dilation_w, const int deformable_group, at::Tensor grad_offset, - at::Tensor grad_mask); - -void shape_check(at::Tensor input, at::Tensor offset, at::Tensor *gradOutput, - at::Tensor weight, int kH, int kW, int dH, int dW, int padH, - int padW, int dilationH, int dilationW, int group, - int deformable_group) { - TORCH_CHECK(weight.ndimension() == 4, - "4D weight tensor (nOutputPlane,nInputPlane,kH,kW) expected, " - "but got: %s", - weight.ndimension()); - - TORCH_CHECK(weight.is_contiguous(), "weight tensor has to be contiguous"); - - TORCH_CHECK(kW > 0 && kH > 0, - "kernel size should be greater than zero, but got kH: %d kW: %d", kH, - kW); - - TORCH_CHECK((weight.size(2) == kH && weight.size(3) == kW), - "kernel size should be consistent with weight, ", - "but got kH: %d kW: %d weight.size(2): %d, weight.size(3): %d", kH, - kW, weight.size(2), weight.size(3)); - - TORCH_CHECK(dW > 0 && dH > 0, - "stride should be greater than zero, but got dH: %d dW: %d", dH, dW); - - TORCH_CHECK( - dilationW > 0 && dilationH > 0, - "dilation should be greater than 0, but got dilationH: %d dilationW: %d", - dilationH, dilationW); - - int ndim = input.ndimension(); - int dimf = 0; - int dimh = 1; - int dimw = 2; - - if (ndim == 4) { - dimf++; - dimh++; - dimw++; - } - - TORCH_CHECK(ndim == 3 || ndim == 4, "3D or 4D input tensor expected but got: %s", - ndim); - - long nInputPlane = weight.size(1) * group; - long inputHeight = input.size(dimh); - long inputWidth = input.size(dimw); - long nOutputPlane = weight.size(0); - long outputHeight = - (inputHeight + 2 * padH - (dilationH * (kH - 1) + 1)) / dH + 1; - long outputWidth = - (inputWidth + 2 * padW - (dilationW * (kW - 1) + 1)) / dW + 1; - - TORCH_CHECK(nInputPlane % deformable_group == 0, - "input channels must divide deformable group size"); - - if (outputWidth < 1 || outputHeight < 1) - AT_ERROR( - "Given input size: (%ld x %ld x %ld). " - "Calculated output size: (%ld x %ld x %ld). 
Output size is too small", - nInputPlane, inputHeight, inputWidth, nOutputPlane, outputHeight, - outputWidth); - - TORCH_CHECK(input.size(1) == nInputPlane, - "invalid number of input planes, expected: %d, but got: %d", - nInputPlane, input.size(1)); - - TORCH_CHECK((inputHeight >= kH && inputWidth >= kW), - "input image is smaller than kernel"); - - TORCH_CHECK((offset.size(2) == outputHeight && offset.size(3) == outputWidth), - "invalid spatial size of offset, expected height: %d width: %d, but " - "got height: %d width: %d", - outputHeight, outputWidth, offset.size(2), offset.size(3)); - - TORCH_CHECK((offset.size(1) == deformable_group * 2 * kH * kW), - "invalid number of channels of offset"); - - if (gradOutput != NULL) { - TORCH_CHECK(gradOutput->size(dimf) == nOutputPlane, - "invalid number of gradOutput planes, expected: %d, but got: %d", - nOutputPlane, gradOutput->size(dimf)); - - TORCH_CHECK((gradOutput->size(dimh) == outputHeight && - gradOutput->size(dimw) == outputWidth), - "invalid size of gradOutput, expected height: %d width: %d , but " - "got height: %d width: %d", - outputHeight, outputWidth, gradOutput->size(dimh), - gradOutput->size(dimw)); - } -} - -int deform_conv_forward_cuda(at::Tensor input, at::Tensor weight, - at::Tensor offset, at::Tensor output, - at::Tensor columns, at::Tensor ones, int kW, - int kH, int dW, int dH, int padW, int padH, - int dilationW, int dilationH, int group, - int deformable_group, int im2col_step) { - // todo: resize columns to include im2col: done - // todo: add im2col_step as input - // todo: add new output buffer and transpose it to output (or directly - // transpose output) todo: possibly change data indexing because of - // parallel_imgs - - shape_check(input, offset, NULL, weight, kH, kW, dH, dW, padH, padW, - dilationH, dilationW, group, deformable_group); - at::DeviceGuard guard(input.device()); - - input = input.contiguous(); - offset = offset.contiguous(); - weight = weight.contiguous(); - - int batch = 1; - if (input.ndimension() == 3) { - // Force batch - batch = 0; - input.unsqueeze_(0); - offset.unsqueeze_(0); - } - - // todo: assert batchsize dividable by im2col_step - - long batchSize = input.size(0); - long nInputPlane = input.size(1); - long inputHeight = input.size(2); - long inputWidth = input.size(3); - - long nOutputPlane = weight.size(0); - - long outputWidth = - (inputWidth + 2 * padW - (dilationW * (kW - 1) + 1)) / dW + 1; - long outputHeight = - (inputHeight + 2 * padH - (dilationH * (kH - 1) + 1)) / dH + 1; - - TORCH_CHECK((offset.size(0) == batchSize), "invalid batch size of offset"); - - output = output.view({batchSize / im2col_step, im2col_step, nOutputPlane, - outputHeight, outputWidth}); - columns = at::zeros( - {nInputPlane * kW * kH, im2col_step * outputHeight * outputWidth}, - input.options()); - - if (ones.ndimension() != 2 || - ones.size(0) * ones.size(1) < outputHeight * outputWidth) { - ones = at::ones({outputHeight, outputWidth}, input.options()); - } - - input = input.view({batchSize / im2col_step, im2col_step, nInputPlane, - inputHeight, inputWidth}); - offset = - offset.view({batchSize / im2col_step, im2col_step, - deformable_group * 2 * kH * kW, outputHeight, outputWidth}); - - at::Tensor output_buffer = - at::zeros({batchSize / im2col_step, nOutputPlane, - im2col_step * outputHeight, outputWidth}, - output.options()); - - output_buffer = output_buffer.view( - {output_buffer.size(0), group, output_buffer.size(1) / group, - output_buffer.size(2), output_buffer.size(3)}); - - for (int elt = 0; elt 
< batchSize / im2col_step; elt++) { - deformable_im2col(input[elt], offset[elt], nInputPlane, inputHeight, - inputWidth, kH, kW, padH, padW, dH, dW, dilationH, - dilationW, im2col_step, deformable_group, columns); - - columns = columns.view({group, columns.size(0) / group, columns.size(1)}); - weight = weight.view({group, weight.size(0) / group, weight.size(1), - weight.size(2), weight.size(3)}); - - for (int g = 0; g < group; g++) { - output_buffer[elt][g] = output_buffer[elt][g] - .flatten(1) - .addmm_(weight[g].flatten(1), columns[g]) - .view_as(output_buffer[elt][g]); - } - } - - output_buffer = output_buffer.view( - {output_buffer.size(0), output_buffer.size(1) * output_buffer.size(2), - output_buffer.size(3), output_buffer.size(4)}); - - output_buffer = output_buffer.view({batchSize / im2col_step, nOutputPlane, - im2col_step, outputHeight, outputWidth}); - output_buffer.transpose_(1, 2); - output.copy_(output_buffer); - output = output.view({batchSize, nOutputPlane, outputHeight, outputWidth}); - - input = input.view({batchSize, nInputPlane, inputHeight, inputWidth}); - offset = offset.view( - {batchSize, deformable_group * 2 * kH * kW, outputHeight, outputWidth}); - - if (batch == 0) { - output = output.view({nOutputPlane, outputHeight, outputWidth}); - input = input.view({nInputPlane, inputHeight, inputWidth}); - offset = offset.view({offset.size(1), offset.size(2), offset.size(3)}); - } - - return 1; -} - -int deform_conv_backward_input_cuda(at::Tensor input, at::Tensor offset, - at::Tensor gradOutput, at::Tensor gradInput, - at::Tensor gradOffset, at::Tensor weight, - at::Tensor columns, int kW, int kH, int dW, - int dH, int padW, int padH, int dilationW, - int dilationH, int group, - int deformable_group, int im2col_step) { - shape_check(input, offset, &gradOutput, weight, kH, kW, dH, dW, padH, padW, - dilationH, dilationW, group, deformable_group); - at::DeviceGuard guard(input.device()); - - input = input.contiguous(); - offset = offset.contiguous(); - gradOutput = gradOutput.contiguous(); - weight = weight.contiguous(); - - int batch = 1; - - if (input.ndimension() == 3) { - // Force batch - batch = 0; - input = input.view({1, input.size(0), input.size(1), input.size(2)}); - offset = offset.view({1, offset.size(0), offset.size(1), offset.size(2)}); - gradOutput = gradOutput.view( - {1, gradOutput.size(0), gradOutput.size(1), gradOutput.size(2)}); - } - - long batchSize = input.size(0); - long nInputPlane = input.size(1); - long inputHeight = input.size(2); - long inputWidth = input.size(3); - - long nOutputPlane = weight.size(0); - - long outputWidth = - (inputWidth + 2 * padW - (dilationW * (kW - 1) + 1)) / dW + 1; - long outputHeight = - (inputHeight + 2 * padH - (dilationH * (kH - 1) + 1)) / dH + 1; - - TORCH_CHECK((offset.size(0) == batchSize), 3, "invalid batch size of offset"); - gradInput = gradInput.view({batchSize, nInputPlane, inputHeight, inputWidth}); - columns = at::zeros( - {nInputPlane * kW * kH, im2col_step * outputHeight * outputWidth}, - input.options()); - - // change order of grad output - gradOutput = gradOutput.view({batchSize / im2col_step, im2col_step, - nOutputPlane, outputHeight, outputWidth}); - gradOutput.transpose_(1, 2); - - gradInput = gradInput.view({batchSize / im2col_step, im2col_step, nInputPlane, - inputHeight, inputWidth}); - input = input.view({batchSize / im2col_step, im2col_step, nInputPlane, - inputHeight, inputWidth}); - gradOffset = gradOffset.view({batchSize / im2col_step, im2col_step, - deformable_group * 2 * kH * kW, outputHeight, 
- outputWidth}); - offset = - offset.view({batchSize / im2col_step, im2col_step, - deformable_group * 2 * kH * kW, outputHeight, outputWidth}); - - for (int elt = 0; elt < batchSize / im2col_step; elt++) { - // divide into groups - columns = columns.view({group, columns.size(0) / group, columns.size(1)}); - weight = weight.view({group, weight.size(0) / group, weight.size(1), - weight.size(2), weight.size(3)}); - gradOutput = gradOutput.view( - {gradOutput.size(0), group, gradOutput.size(1) / group, - gradOutput.size(2), gradOutput.size(3), gradOutput.size(4)}); - - for (int g = 0; g < group; g++) { - columns[g] = columns[g].addmm_(weight[g].flatten(1).transpose(0, 1), - gradOutput[elt][g].flatten(1), 0.0f, 1.0f); - } - - columns = - columns.view({columns.size(0) * columns.size(1), columns.size(2)}); - gradOutput = gradOutput.view( - {gradOutput.size(0), gradOutput.size(1) * gradOutput.size(2), - gradOutput.size(3), gradOutput.size(4), gradOutput.size(5)}); - - deformable_col2im_coord(columns, input[elt], offset[elt], nInputPlane, - inputHeight, inputWidth, kH, kW, padH, padW, dH, dW, - dilationH, dilationW, im2col_step, deformable_group, - gradOffset[elt]); - - deformable_col2im(columns, offset[elt], nInputPlane, inputHeight, - inputWidth, kH, kW, padH, padW, dH, dW, dilationH, - dilationW, im2col_step, deformable_group, gradInput[elt]); - } - - gradOutput.transpose_(1, 2); - gradOutput = - gradOutput.view({batchSize, nOutputPlane, outputHeight, outputWidth}); - - gradInput = gradInput.view({batchSize, nInputPlane, inputHeight, inputWidth}); - input = input.view({batchSize, nInputPlane, inputHeight, inputWidth}); - gradOffset = gradOffset.view( - {batchSize, deformable_group * 2 * kH * kW, outputHeight, outputWidth}); - offset = offset.view( - {batchSize, deformable_group * 2 * kH * kW, outputHeight, outputWidth}); - - if (batch == 0) { - gradOutput = gradOutput.view({nOutputPlane, outputHeight, outputWidth}); - input = input.view({nInputPlane, inputHeight, inputWidth}); - gradInput = gradInput.view({nInputPlane, inputHeight, inputWidth}); - offset = offset.view({offset.size(1), offset.size(2), offset.size(3)}); - gradOffset = - gradOffset.view({offset.size(1), offset.size(2), offset.size(3)}); - } - - return 1; -} - -int deform_conv_backward_parameters_cuda( - at::Tensor input, at::Tensor offset, at::Tensor gradOutput, - at::Tensor gradWeight, // at::Tensor gradBias, - at::Tensor columns, at::Tensor ones, int kW, int kH, int dW, int dH, - int padW, int padH, int dilationW, int dilationH, int group, - int deformable_group, float scale, int im2col_step) { - // todo: transpose and reshape outGrad - // todo: reshape columns - // todo: add im2col_step as input - - shape_check(input, offset, &gradOutput, gradWeight, kH, kW, dH, dW, padH, - padW, dilationH, dilationW, group, deformable_group); - at::DeviceGuard guard(input.device()); - - input = input.contiguous(); - offset = offset.contiguous(); - gradOutput = gradOutput.contiguous(); - - int batch = 1; - - if (input.ndimension() == 3) { - // Force batch - batch = 0; - input = input.view( - at::IntList({1, input.size(0), input.size(1), input.size(2)})); - gradOutput = gradOutput.view( - {1, gradOutput.size(0), gradOutput.size(1), gradOutput.size(2)}); - } - - long batchSize = input.size(0); - long nInputPlane = input.size(1); - long inputHeight = input.size(2); - long inputWidth = input.size(3); - - long nOutputPlane = gradWeight.size(0); - - long outputWidth = - (inputWidth + 2 * padW - (dilationW * (kW - 1) + 1)) / dW + 1; - long outputHeight 
= - (inputHeight + 2 * padH - (dilationH * (kH - 1) + 1)) / dH + 1; - - TORCH_CHECK((offset.size(0) == batchSize), "invalid batch size of offset"); - - columns = at::zeros( - {nInputPlane * kW * kH, im2col_step * outputHeight * outputWidth}, - input.options()); - - gradOutput = gradOutput.view({batchSize / im2col_step, im2col_step, - nOutputPlane, outputHeight, outputWidth}); - gradOutput.transpose_(1, 2); - - at::Tensor gradOutputBuffer = at::zeros_like(gradOutput); - gradOutputBuffer = - gradOutputBuffer.view({batchSize / im2col_step, nOutputPlane, im2col_step, - outputHeight, outputWidth}); - gradOutputBuffer.copy_(gradOutput); - gradOutputBuffer = - gradOutputBuffer.view({batchSize / im2col_step, nOutputPlane, - im2col_step * outputHeight, outputWidth}); - - gradOutput.transpose_(1, 2); - gradOutput = - gradOutput.view({batchSize, nOutputPlane, outputHeight, outputWidth}); - - input = input.view({batchSize / im2col_step, im2col_step, nInputPlane, - inputHeight, inputWidth}); - offset = - offset.view({batchSize / im2col_step, im2col_step, - deformable_group * 2 * kH * kW, outputHeight, outputWidth}); - - for (int elt = 0; elt < batchSize / im2col_step; elt++) { - deformable_im2col(input[elt], offset[elt], nInputPlane, inputHeight, - inputWidth, kH, kW, padH, padW, dH, dW, dilationH, - dilationW, im2col_step, deformable_group, columns); - - // divide into group - gradOutputBuffer = gradOutputBuffer.view( - {gradOutputBuffer.size(0), group, gradOutputBuffer.size(1) / group, - gradOutputBuffer.size(2), gradOutputBuffer.size(3)}); - columns = columns.view({group, columns.size(0) / group, columns.size(1)}); - gradWeight = - gradWeight.view({group, gradWeight.size(0) / group, gradWeight.size(1), - gradWeight.size(2), gradWeight.size(3)}); - - for (int g = 0; g < group; g++) { - gradWeight[g] = gradWeight[g] - .flatten(1) - .addmm_(gradOutputBuffer[elt][g].flatten(1), - columns[g].transpose(1, 0), 1.0, scale) - .view_as(gradWeight[g]); - } - gradOutputBuffer = gradOutputBuffer.view( - {gradOutputBuffer.size(0), - gradOutputBuffer.size(1) * gradOutputBuffer.size(2), - gradOutputBuffer.size(3), gradOutputBuffer.size(4)}); - columns = - columns.view({columns.size(0) * columns.size(1), columns.size(2)}); - gradWeight = gradWeight.view({gradWeight.size(0) * gradWeight.size(1), - gradWeight.size(2), gradWeight.size(3), - gradWeight.size(4)}); - } - - input = input.view({batchSize, nInputPlane, inputHeight, inputWidth}); - offset = offset.view( - {batchSize, deformable_group * 2 * kH * kW, outputHeight, outputWidth}); - - if (batch == 0) { - gradOutput = gradOutput.view({nOutputPlane, outputHeight, outputWidth}); - input = input.view({nInputPlane, inputHeight, inputWidth}); - } - - return 1; -} - -void modulated_deform_conv_cuda_forward( - at::Tensor input, at::Tensor weight, at::Tensor bias, at::Tensor ones, - at::Tensor offset, at::Tensor mask, at::Tensor output, at::Tensor columns, - int kernel_h, int kernel_w, const int stride_h, const int stride_w, - const int pad_h, const int pad_w, const int dilation_h, - const int dilation_w, const int group, const int deformable_group, - const bool with_bias) { - TORCH_CHECK(input.is_contiguous(), "input tensor has to be contiguous"); - TORCH_CHECK(weight.is_contiguous(), "weight tensor has to be contiguous"); - at::DeviceGuard guard(input.device()); - - const int batch = input.size(0); - const int channels = input.size(1); - const int height = input.size(2); - const int width = input.size(3); - - const int channels_out = weight.size(0); - const int 
channels_kernel = weight.size(1); - const int kernel_h_ = weight.size(2); - const int kernel_w_ = weight.size(3); - - if (kernel_h_ != kernel_h || kernel_w_ != kernel_w) - AT_ERROR("Input shape and kernel shape won't match: (%d x %d vs %d x %d).", - kernel_h_, kernel_w, kernel_h_, kernel_w_); - if (channels != channels_kernel * group) - AT_ERROR("Input shape and kernel channels won't match: (%d vs %d).", - channels, channels_kernel * group); - - const int height_out = - (height + 2 * pad_h - (dilation_h * (kernel_h - 1) + 1)) / stride_h + 1; - const int width_out = - (width + 2 * pad_w - (dilation_w * (kernel_w - 1) + 1)) / stride_w + 1; - - if (ones.ndimension() != 2 || - ones.size(0) * ones.size(1) < height_out * width_out) { - // Resize plane and fill with ones... - ones = at::ones({height_out, width_out}, input.options()); - } - - // resize output - output = output.view({batch, channels_out, height_out, width_out}).zero_(); - // resize temporary columns - columns = - at::zeros({channels * kernel_h * kernel_w, 1 * height_out * width_out}, - input.options()); - - output = output.view({output.size(0), group, output.size(1) / group, - output.size(2), output.size(3)}); - - for (int b = 0; b < batch; b++) { - modulated_deformable_im2col_cuda( - input[b], offset[b], mask[b], 1, channels, height, width, height_out, - width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w, - dilation_h, dilation_w, deformable_group, columns); - - // divide into group - weight = weight.view({group, weight.size(0) / group, weight.size(1), - weight.size(2), weight.size(3)}); - columns = columns.view({group, columns.size(0) / group, columns.size(1)}); - - for (int g = 0; g < group; g++) { - output[b][g] = output[b][g] - .flatten(1) - .addmm_(weight[g].flatten(1), columns[g]) - .view_as(output[b][g]); - } - - weight = weight.view({weight.size(0) * weight.size(1), weight.size(2), - weight.size(3), weight.size(4)}); - columns = - columns.view({columns.size(0) * columns.size(1), columns.size(2)}); - } - - output = output.view({output.size(0), output.size(1) * output.size(2), - output.size(3), output.size(4)}); - - if (with_bias) { - output += bias.view({1, bias.size(0), 1, 1}); - } -} - -void modulated_deform_conv_cuda_backward( - at::Tensor input, at::Tensor weight, at::Tensor bias, at::Tensor ones, - at::Tensor offset, at::Tensor mask, at::Tensor columns, - at::Tensor grad_input, at::Tensor grad_weight, at::Tensor grad_bias, - at::Tensor grad_offset, at::Tensor grad_mask, at::Tensor grad_output, - int kernel_h, int kernel_w, int stride_h, int stride_w, int pad_h, - int pad_w, int dilation_h, int dilation_w, int group, int deformable_group, - const bool with_bias) { - TORCH_CHECK(input.is_contiguous(), "input tensor has to be contiguous"); - TORCH_CHECK(weight.is_contiguous(), "weight tensor has to be contiguous"); - at::DeviceGuard guard(input.device()); - - const int batch = input.size(0); - const int channels = input.size(1); - const int height = input.size(2); - const int width = input.size(3); - - const int channels_kernel = weight.size(1); - const int kernel_h_ = weight.size(2); - const int kernel_w_ = weight.size(3); - if (kernel_h_ != kernel_h || kernel_w_ != kernel_w) - AT_ERROR("Input shape and kernel shape won't match: (%d x %d vs %d x %d).", - kernel_h_, kernel_w, kernel_h_, kernel_w_); - if (channels != channels_kernel * group) - AT_ERROR("Input shape and kernel channels won't match: (%d vs %d).", - channels, channels_kernel * group); - - const int height_out = - (height + 2 * pad_h - (dilation_h 
* (kernel_h - 1) + 1)) / stride_h + 1; - const int width_out = - (width + 2 * pad_w - (dilation_w * (kernel_w - 1) + 1)) / stride_w + 1; - - if (ones.ndimension() != 2 || - ones.size(0) * ones.size(1) < height_out * width_out) { - // Resize plane and fill with ones... - ones = at::ones({height_out, width_out}, input.options()); - } - - grad_input = grad_input.view({batch, channels, height, width}); - columns = at::zeros({channels * kernel_h * kernel_w, height_out * width_out}, - input.options()); - - grad_output = - grad_output.view({grad_output.size(0), group, grad_output.size(1) / group, - grad_output.size(2), grad_output.size(3)}); - - for (int b = 0; b < batch; b++) { - // divide int group - columns = columns.view({group, columns.size(0) / group, columns.size(1)}); - weight = weight.view({group, weight.size(0) / group, weight.size(1), - weight.size(2), weight.size(3)}); - - for (int g = 0; g < group; g++) { - columns[g].addmm_(weight[g].flatten(1).transpose(0, 1), - grad_output[b][g].flatten(1), 0.0f, 1.0f); - } - - columns = - columns.view({columns.size(0) * columns.size(1), columns.size(2)}); - weight = weight.view({weight.size(0) * weight.size(1), weight.size(2), - weight.size(3), weight.size(4)}); - - // gradient w.r.t. input coordinate data - modulated_deformable_col2im_coord_cuda( - columns, input[b], offset[b], mask[b], 1, channels, height, width, - height_out, width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, - stride_w, dilation_h, dilation_w, deformable_group, grad_offset[b], - grad_mask[b]); - // gradient w.r.t. input data - modulated_deformable_col2im_cuda( - columns, offset[b], mask[b], 1, channels, height, width, height_out, - width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w, - dilation_h, dilation_w, deformable_group, grad_input[b]); - - // gradient w.r.t. 
weight, dWeight should accumulate across the batch and - // group - modulated_deformable_im2col_cuda( - input[b], offset[b], mask[b], 1, channels, height, width, height_out, - width_out, kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w, - dilation_h, dilation_w, deformable_group, columns); - - columns = columns.view({group, columns.size(0) / group, columns.size(1)}); - grad_weight = grad_weight.view({group, grad_weight.size(0) / group, - grad_weight.size(1), grad_weight.size(2), - grad_weight.size(3)}); - if (with_bias) - grad_bias = grad_bias.view({group, grad_bias.size(0) / group}); - - for (int g = 0; g < group; g++) { - grad_weight[g] = - grad_weight[g] - .flatten(1) - .addmm_(grad_output[b][g].flatten(1), columns[g].transpose(0, 1)) - .view_as(grad_weight[g]); - if (with_bias) { - grad_bias[g] = - grad_bias[g] - .view({-1, 1}) - .addmm_(grad_output[b][g].flatten(1), ones.view({-1, 1})) - .view(-1); - } - } - - columns = - columns.view({columns.size(0) * columns.size(1), columns.size(2)}); - grad_weight = grad_weight.view({grad_weight.size(0) * grad_weight.size(1), - grad_weight.size(2), grad_weight.size(3), - grad_weight.size(4)}); - if (with_bias) - grad_bias = grad_bias.view({grad_bias.size(0) * grad_bias.size(1)}); - } - grad_output = grad_output.view({grad_output.size(0) * grad_output.size(1), - grad_output.size(2), grad_output.size(3), - grad_output.size(4)}); -} diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_recognition/new/decoders/decoder_config.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_recognition/new/decoders/decoder_config.py deleted file mode 100644 index 659eb94a9b8187a7c126d7b439ac2742f9d72022..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_recognition/new/decoders/decoder_config.py +++ /dev/null @@ -1,70 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import math -from dataclasses import dataclass, field -from typing import Optional - -from fairseq.dataclass.configs import FairseqDataclass -from fairseq.dataclass.constants import ChoiceEnum -from omegaconf import MISSING - - -DECODER_CHOICES = ChoiceEnum(["viterbi", "kenlm", "fairseqlm"]) - - -@dataclass -class DecoderConfig(FairseqDataclass): - type: DECODER_CHOICES = field( - default="viterbi", - metadata={"help": "The type of decoder to use"}, - ) - - -@dataclass -class FlashlightDecoderConfig(FairseqDataclass): - nbest: int = field( - default=1, - metadata={"help": "Number of decodings to return"}, - ) - unitlm: bool = field( - default=False, - metadata={"help": "If set, use unit language model"}, - ) - lmpath: str = field( - default=MISSING, - metadata={"help": "Language model for KenLM decoder"}, - ) - lexicon: Optional[str] = field( - default=None, - metadata={"help": "Lexicon for Flashlight decoder"}, - ) - beam: int = field( - default=50, - metadata={"help": "Number of beams to use for decoding"}, - ) - beamthreshold: float = field( - default=50.0, - metadata={"help": "Threshold for beam search decoding"}, - ) - beamsizetoken: Optional[int] = field( - default=None, metadata={"help": "Beam size to use"} - ) - wordscore: float = field( - default=-1, - metadata={"help": "Word score for KenLM decoder"}, - ) - unkweight: float = field( - default=-math.inf, - metadata={"help": "Unknown weight for KenLM decoder"}, - ) - silweight: float = field( - default=0, - metadata={"help": "Silence weight for KenLM decoder"}, - ) - lmweight: float = field( - default=2, - metadata={"help": "Weight for LM while interpolating score"}, - ) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/decode.sh b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/decode.sh deleted file mode 100644 index 811cb63c88bb7cdd03b0a250ef2db32b5eaa50df..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/decode.sh +++ /dev/null @@ -1,38 +0,0 @@ -#!/bin/bash - -set -u - -val_sets="dev_other" -graph_name=graph -decode_suffix="" -decode_script="steps/decode_fmllr.sh" -decode_args="" -nj=60 - -. ./cmd.sh -. ./path.sh -. parse_options.sh - -set -x -exp_dir=$1 -data_root=$2 -lang_test=$3 - -graph=$exp_dir/$graph_name - -if [ ! -d $graph ]; then - utils/mkgraph.sh $lang_test $exp_dir $graph -fi - -for part in $val_sets; do - dec_dir=$exp_dir/decode${decode_suffix}_${part} - if [ ! -d $dec_dir ]; then - echo "decoding $part for $exp_dir" - $decode_script --nj $nj --cmd "$decode_cmd" $decode_args \ - $graph $data_root/$part $dec_dir & - else - echo "$dec_dir exists. skip" - fi -done - -wait diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/clib/cuda/ngram_repeat_block_cuda.cpp b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/clib/cuda/ngram_repeat_block_cuda.cpp deleted file mode 100644 index 707219105a17a691e43de1296a72bbaffa0c7fe9..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/clib/cuda/ngram_repeat_block_cuda.cpp +++ /dev/null @@ -1,55 +0,0 @@ -/* -Copyright (c) Microsoft Corporation. -Licensed under the MIT License. 
-*/ - -#include -#include - -/* -CPP Binding for CUDA OP -*/ - -// CUDA forward declarations -torch::Tensor ngram_repeat_block_cuda_forward( - torch::Tensor tokens, - torch::Tensor lprobs, - int bsz, - int step, - int beam_size, - int no_repeat_ngram_size); - -#define CHECK_CUDA(x) \ - TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) \ - TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) \ - CHECK_CUDA(x); \ - CHECK_CONTIGUOUS(x) - -// Input check and call to CUDA OP -// Backward method not required -torch::Tensor ngram_repeat_block_forward( - torch::Tensor tokens, - torch::Tensor lprobs, - int bsz, - int step, - int beam_size, - int no_repeat_ngram_size) { - CHECK_INPUT(tokens); - CHECK_INPUT(lprobs); - assert(bsz > 0); - assert(step >= 0); - assert(beam_size > 0); - assert(no_repeat_ngram_size > 0); - - return ngram_repeat_block_cuda_forward( - tokens, lprobs, bsz, step, beam_size, no_repeat_ngram_size); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def( - "forward", - &ngram_repeat_block_forward, - "No Repeat Ngram Block forward (CUDA)"); -} diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/multilingual/data_scripts/download_iwslt_and_extract.sh b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/multilingual/data_scripts/download_iwslt_and_extract.sh deleted file mode 100644 index ca3591b3db1715f136773d62e4b9b9ede97d436c..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/multilingual/data_scripts/download_iwslt_and_extract.sh +++ /dev/null @@ -1,225 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -#echo 'Cloning Moses github repository (for tokenization scripts)...' -#git clone https://github.com/moses-smt/mosesdecoder.git - -if [ -z $WORKDIR_ROOT ] ; -then - echo "please specify your working directory root in environment variable WORKDIR_ROOT. Exitting..." - exit -fi - - - -data_root=${WORKDIR_ROOT}/iwsltv2 -DESTDIR=${WORKDIR_ROOT}/ML50/raw - - -langs="ar_AR it_IT nl_XX ko_KR vi_VN" -echo "data_root: $data_root" - -download_path=${data_root}/downloads -raw=${DESTDIR} -tmp=${data_root}/tmp -orig=${data_root}/orig - -mkdir -p $download_path $orig $raw $tmp -####################### -download_iwslt(){ - iwslt_key=$1 - src=$2 - tgt=$3 - save_prefix=$4 - pushd ${download_path} - if [[ ! -f ${save_prefix}$src-$tgt.tgz ]]; then - wget https://wit3.fbk.eu/archive/${iwslt_key}/texts/$src/$tgt/$src-$tgt.tgz -O ${save_prefix}$src-$tgt.tgz - [ $? -eq 0 ] && return 0 - fi - popd -} - -extract_iwslt(){ - src=$1 - tgt=$2 - prefix=$3 - pushd $orig - tar zxvf ${download_path}/${prefix}$src-${tgt}.tgz - popd -} - -generate_train(){ - lsrc=$1 - ltgt=$2 - src=${lsrc:0:2} - tgt=${ltgt:0:2} - for ll in $lsrc $ltgt; do - l=${ll:0:2} - f="$orig/*/train.tags.$src-$tgt.$l" - f_raw=$raw/train.$lsrc-$ltgt.$ll - cat $f \ - | grep -v '' \ - | grep -v '' \ - | grep -v '' \ - | grep -v '' \ - | grep -v '' \ - | sed -e 's///g' \ - | sed -e 's/<\/title>//g' \ - | sed -e 's/<description>//g' \ - | sed -e 's/<\/description>//g' \ - | sed 's/^\s*//g' \ - | sed 's/\s*$//g' \ - > $f_raw - [ $? 
-eq 0 ] && echo "extracted $f to $f_raw" - done - return 0 -} - -convert_valid_test(){ - src=$1 - tgt=$2 - for l in $src $tgt; do - echo "lang: ${l}" - for o in `ls $orig/*/IWSLT*.TED*.$src-$tgt.$l.xml`; do - fname=${o##*/} - f=$tmp/${fname%.*} - echo "$o => $f" - grep '<seg id' $o \ - | sed -e 's/<seg id="[0-9]*">\s*//g' \ - | sed -e 's/\s*<\/seg>\s*//g' \ - | sed -e "s/\’/\'/g" \ - > $f - echo "" - done - done -} - -generate_subset(){ - lsrc=$1 - ltgt=$2 - src=${lsrc:0:2} - tgt=${ltgt:0:2} - subset=$3 - prefix=$4 - for ll in $lsrc $ltgt; do - l=${ll:0:2} - f=$tmp/$prefix.${src}-${tgt}.$l - if [[ -f $f ]]; then - cp $f $raw/$subset.${lsrc}-$ltgt.${ll} - fi - done -} -################# - -echo "downloading iwslt training and dev data" -# using multilingual for it, nl -download_iwslt "2017-01-trnmted" DeEnItNlRo DeEnItNlRo -download_iwslt "2017-01-trnted" ar en -download_iwslt "2017-01-trnted" en ar -download_iwslt "2017-01-trnted" ko en -download_iwslt "2017-01-trnted" en ko -download_iwslt "2015-01" vi en -download_iwslt "2015-01" en vi - -echo "donwloading iwslt test data" -download_iwslt "2017-01-mted-test" it en "test." -download_iwslt "2017-01-mted-test" en it "test." -download_iwslt "2017-01-mted-test" nl en "test." -download_iwslt "2017-01-mted-test" en nl "test." - -download_iwslt "2017-01-ted-test" ar en "test." -download_iwslt "2017-01-ted-test" en ar "test." -download_iwslt "2017-01-ted-test" ko en "test." -download_iwslt "2017-01-ted-test" en ko "test." -download_iwslt "2015-01-test" vi en "test." -download_iwslt "2015-01-test" en vi "test." - -echo "extract training data tar balls" -extract_iwslt DeEnItNlRo DeEnItNlRo -extract_iwslt ar en -extract_iwslt en ar -extract_iwslt ko en -extract_iwslt en ko -extract_iwslt vi en -extract_iwslt en vi - - -echo "extracting iwslt test data" -for lang in $langs; do - l=${lang:0:2} - extract_iwslt $l en "test." - extract_iwslt en $l "test." 
-done - -echo "convert dev and test data" -for lang in $langs; do - s_lang=${lang:0:2} - convert_valid_test $s_lang en - convert_valid_test en $s_lang -done - - - -echo "creating training data into $raw" -for lang in $langs; do - generate_train $lang en_XX - generate_train en_XX $lang -done - -echo "creating iwslt dev data into raw" -generate_subset en_XX vi_VN valid "IWSLT15.TED.tst2013" -generate_subset vi_VN en_XX valid "IWSLT15.TED.tst2013" - -generate_subset en_XX ar_AR valid "IWSLT17.TED.tst2016" -generate_subset ar_AR en_XX valid "IWSLT17.TED.tst2016" -generate_subset en_XX ko_KR valid "IWSLT17.TED.tst2016" -generate_subset ko_KR en_XX valid "IWSLT17.TED.tst2016" - - -generate_subset en_XX it_IT valid "IWSLT17.TED.tst2010" -generate_subset it_IT en_XX valid "IWSLT17.TED.tst2010" -generate_subset en_XX nl_XX valid "IWSLT17.TED.tst2010" -generate_subset nl_XX en_XX valid "IWSLT17.TED.tst2010" - -echo "creating iswslt test data into raw" -generate_subset en_XX vi_VN test "IWSLT15.TED.tst2015" -generate_subset vi_VN en_XX test "IWSLT15.TED.tst2015" - -generate_subset en_XX ar_AR test "IWSLT17.TED.tst2017" -generate_subset ar_AR en_XX test "IWSLT17.TED.tst2017" -generate_subset en_XX ko_KR test "IWSLT17.TED.tst2017" -generate_subset ko_KR en_XX test "IWSLT17.TED.tst2017" - -generate_subset en_XX it_IT test "IWSLT17.TED.tst2017.mltlng" -generate_subset it_IT en_XX test "IWSLT17.TED.tst2017.mltlng" -generate_subset en_XX nl_XX test "IWSLT17.TED.tst2017.mltlng" -generate_subset nl_XX en_XX test "IWSLT17.TED.tst2017.mltlng" - -# normalze iwslt directions into x-en -pushd $raw -for lang in $langs; do - for split in test valid; do - x_en_f1=$split.$lang-en_XX.en_XX - x_en_f2=$split.$lang-en_XX.${lang} - - en_x_f1=$split.en_XX-$lang.en_XX - en_x_f2=$split.en_XX-$lang.${lang} - - if [ -f $en_x_f1 ] && [ ! -f $x_en_f1 ]; then - echo "cp $en_x_f1 $x_en_f1" - cp $en_x_f1 $x_en_f1 - fi - if [ -f $x_en_f2 ] && [ ! -f $x_en_f2 ]; then - echo "cp $en_x_f2 $x_en_f2" - cp $en_x_f2 $x_en_f2 - fi - done -done -popd \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/hubert/simple_kmeans/feature_utils.py b/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/hubert/simple_kmeans/feature_utils.py deleted file mode 100644 index f80bc4569768fac181133cdc8f76d1230e03bff6..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Visual_Grounding/fairseq/examples/hubert/simple_kmeans/feature_utils.py +++ /dev/null @@ -1,66 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import logging -import os -import sys - -import tqdm -from npy_append_array import NpyAppendArray - - -logging.basicConfig( - format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=os.environ.get("LOGLEVEL", "INFO").upper(), - stream=sys.stdout, -) -logger = logging.getLogger("feature_utils") - - -def get_shard_range(tot, nshard, rank): - assert rank < nshard and rank >= 0, f"invaid rank/nshard {rank}/{nshard}" - start = round(tot / nshard * rank) - end = round(tot / nshard * (rank + 1)) - assert start < end, f"start={start}, end={end}" - logger.info( - f"rank {rank} of {nshard}, process {end-start} " - f"({start}-{end}) out of {tot}" - ) - return start, end - - -def get_path_iterator(tsv, nshard, rank): - with open(tsv, "r") as f: - root = f.readline().rstrip() - lines = [line.rstrip() for line in f] - start, end = get_shard_range(len(lines), nshard, rank) - lines = lines[start:end] - def iterate(): - for line in lines: - subpath, nsample = line.split("\t") - yield f"{root}/{subpath}", int(nsample) - return iterate, len(lines) - - -def dump_feature(reader, generator, num, split, nshard, rank, feat_dir): - iterator = generator() - - feat_path = f"{feat_dir}/{split}_{rank}_{nshard}.npy" - leng_path = f"{feat_dir}/{split}_{rank}_{nshard}.len" - - os.makedirs(feat_dir, exist_ok=True) - if os.path.exists(feat_path): - os.remove(feat_path) - - feat_f = NpyAppendArray(feat_path) - with open(leng_path, "w") as leng_f: - for path, nsample in tqdm.tqdm(iterator, total=num): - feat = reader.get_feats(path, nsample) - feat_f.append(feat.cpu().numpy()) - leng_f.write(f"{len(feat)}\n") - logger.info("finished successfully") - - diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/bart/summarize.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/bart/summarize.py deleted file mode 100644 index 04435f80e39c2d9d894696dae7cba5b381e13da9..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/bart/summarize.py +++ /dev/null @@ -1,100 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import torch -from fairseq.models.bart import BARTModel -import argparse - -XSUM_KWARGS = dict(beam=6, lenpen=1.0, max_len_b=60, min_len=10, no_repeat_ngram_size=3) -CNN_KWARGS = dict(beam=4, lenpen=2.0, max_len_b=140, min_len=55, no_repeat_ngram_size=3) - - -@torch.no_grad() -def generate(bart, infile, outfile="bart_hypo.txt", bsz=32, n_obs=None, **eval_kwargs): - count = 1 - - # if n_obs is not None: bsz = min(bsz, n_obs) - - with open(infile) as source, open(outfile, "w") as fout: - sline = source.readline().strip() - slines = [sline] - for sline in source: - if n_obs is not None and count > n_obs: - break - if count % bsz == 0: - hypotheses_batch = bart.sample(slines, **eval_kwargs) - for hypothesis in hypotheses_batch: - fout.write(hypothesis + "\n") - fout.flush() - slines = [] - - slines.append(sline.strip()) - count += 1 - - if slines != []: - hypotheses_batch = bart.sample(slines, **eval_kwargs) - for hypothesis in hypotheses_batch: - fout.write(hypothesis + "\n") - fout.flush() - - -def main(): - """ - Usage:: - - python examples/bart/summarize.py \ - --model-dir $HOME/bart.large.cnn \ - --model-file model.pt \ - --src $HOME/data-bin/cnn_dm/test.source - """ - parser = argparse.ArgumentParser() - parser.add_argument( - "--model-dir", - required=True, - type=str, - default="bart.large.cnn/", - help="path containing model file and src_dict.txt", - ) - parser.add_argument( - "--model-file", - default="checkpoint_best.pt", - help="where in model_dir are weights saved", - ) - parser.add_argument( - "--src", default="test.source", help="text to summarize", type=str - ) - parser.add_argument( - "--out", default="test.hypo", help="where to save summaries", type=str - ) - parser.add_argument("--bsz", default=32, help="where to save summaries", type=int) - parser.add_argument( - "--n", default=None, help="how many examples to summarize", type=int - ) - parser.add_argument( - "--xsum-kwargs", - action="store_true", - default=False, - help="if true use XSUM_KWARGS else CNN_KWARGS", - ) - args = parser.parse_args() - eval_kwargs = XSUM_KWARGS if args.xsum_kwargs else CNN_KWARGS - if args.model_dir == "pytorch/fairseq": - bart = torch.hub.load("pytorch/fairseq", args.model_file) - else: - bart = BARTModel.from_pretrained( - args.model_dir, - checkpoint_file=args.model_file, - data_name_or_path=args.model_dir, - ) - bart = bart.eval() - if torch.cuda.is_available(): - bart = bart.cuda().half() - generate( - bart, args.src, bsz=args.bsz, n_obs=args.n, outfile=args.out, **eval_kwargs - ) - - -if __name__ == "__main__": - main() diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/translation/README.md b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/translation/README.md deleted file mode 100644 index 2941f5eb8482dab61dca5eca27a71abd7ee5bf5c..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/translation/README.md +++ /dev/null @@ -1,301 +0,0 @@ -# Neural Machine Translation - -This README contains instructions for [using pretrained translation models](#example-usage-torchhub) -as well as [training new models](#training-a-new-model). 
- -## Pre-trained models - -Model | Description | Dataset | Download ----|---|---|--- -`conv.wmt14.en-fr` | Convolutional <br> ([Gehring et al., 2017](https://arxiv.org/abs/1705.03122)) | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | model: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt14.v2.en-fr.fconv-py.tar.bz2) <br> newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.v2.en-fr.newstest2014.tar.bz2) <br> newstest2012/2013: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.v2.en-fr.ntst1213.tar.bz2) -`conv.wmt14.en-de` | Convolutional <br> ([Gehring et al., 2017](https://arxiv.org/abs/1705.03122)) | [WMT14 English-German](http://statmt.org/wmt14/translation-task.html#Download) | model: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt14.en-de.fconv-py.tar.bz2) <br> newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-de.newstest2014.tar.bz2) -`conv.wmt17.en-de` | Convolutional <br> ([Gehring et al., 2017](https://arxiv.org/abs/1705.03122)) | [WMT17 English-German](http://statmt.org/wmt17/translation-task.html#Download) | model: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt17.v2.en-de.fconv-py.tar.bz2) <br> newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt17.v2.en-de.newstest2014.tar.bz2) -`transformer.wmt14.en-fr` | Transformer <br> ([Ott et al., 2018](https://arxiv.org/abs/1806.00187)) | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | model: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt14.en-fr.joined-dict.transformer.tar.bz2) <br> newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-fr.joined-dict.newstest2014.tar.bz2) -`transformer.wmt16.en-de` | Transformer <br> ([Ott et al., 2018](https://arxiv.org/abs/1806.00187)) | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt16.en-de.joined-dict.transformer.tar.bz2) <br> newstest2014: <br> [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2) -`transformer.wmt18.en-de` | Transformer <br> ([Edunov et al., 2018](https://arxiv.org/abs/1808.09381)) <br> WMT'18 winner | [WMT'18 English-German](http://www.statmt.org/wmt18/translation-task.html) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt18.en-de.ensemble.tar.gz) <br> See NOTE in the archive -`transformer.wmt19.en-de` | Transformer <br> ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) <br> WMT'19 winner | [WMT'19 English-German](http://www.statmt.org/wmt19/translation-task.html) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-de.joined-dict.ensemble.tar.gz) -`transformer.wmt19.de-en` | Transformer <br> ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) <br> WMT'19 winner | [WMT'19 German-English](http://www.statmt.org/wmt19/translation-task.html) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.de-en.joined-dict.ensemble.tar.gz) -`transformer.wmt19.en-ru` | Transformer <br> ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) <br> WMT'19 winner | [WMT'19 
English-Russian](http://www.statmt.org/wmt19/translation-task.html) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-ru.ensemble.tar.gz) -`transformer.wmt19.ru-en` | Transformer <br> ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) <br> WMT'19 winner | [WMT'19 Russian-English](http://www.statmt.org/wmt19/translation-task.html) | model: <br> [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.ru-en.ensemble.tar.gz) - -## Example usage (torch.hub) - -We require a few additional Python dependencies for preprocessing: -```bash -pip install fastBPE sacremoses subword_nmt -``` - -Interactive translation via PyTorch Hub: -```python -import torch - -# List available models -torch.hub.list('pytorch/fairseq') # [..., 'transformer.wmt16.en-de', ... ] - -# Load a transformer trained on WMT'16 En-De -# Note: WMT'19 models use fastBPE instead of subword_nmt, see instructions below -en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt16.en-de', - tokenizer='moses', bpe='subword_nmt') -en2de.eval() # disable dropout - -# The underlying model is available under the *models* attribute -assert isinstance(en2de.models[0], fairseq.models.transformer.TransformerModel) - -# Move model to GPU for faster translation -en2de.cuda() - -# Translate a sentence -en2de.translate('Hello world!') -# 'Hallo Welt!' - -# Batched translation -en2de.translate(['Hello world!', 'The cat sat on the mat.']) -# ['Hallo Welt!', 'Die Katze saß auf der Matte.'] -``` - -Loading custom models: -```python -from fairseq.models.transformer import TransformerModel -zh2en = TransformerModel.from_pretrained( - '/path/to/checkpoints', - checkpoint_file='checkpoint_best.pt', - data_name_or_path='data-bin/wmt17_zh_en_full', - bpe='subword_nmt', - bpe_codes='data-bin/wmt17_zh_en_full/zh.code' -) -zh2en.translate('你好 世界') -# 'Hello World' -``` - -If you are using a `transformer.wmt19` models, you will need to set the `bpe` -argument to `'fastbpe'` and (optionally) load the 4-model ensemble: -```python -en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de', - checkpoint_file='model1.pt:model2.pt:model3.pt:model4.pt', - tokenizer='moses', bpe='fastbpe') -en2de.eval() # disable dropout -``` - -## Example usage (CLI tools) - -Generation with the binarized test sets can be run in batch mode as follows, e.g. for WMT 2014 English-French on a GTX-1080ti: -```bash -mkdir -p data-bin -curl https://dl.fbaipublicfiles.com/fairseq/models/wmt14.v2.en-fr.fconv-py.tar.bz2 | tar xvjf - -C data-bin -curl https://dl.fbaipublicfiles.com/fairseq/data/wmt14.v2.en-fr.newstest2014.tar.bz2 | tar xvjf - -C data-bin -fairseq-generate data-bin/wmt14.en-fr.newstest2014 \ - --path data-bin/wmt14.en-fr.fconv-py/model.pt \ - --beam 5 --batch-size 128 --remove-bpe | tee /tmp/gen.out -# ... 
-# | Translated 3003 sentences (96311 tokens) in 166.0s (580.04 tokens/s) -# | Generate test with beam=5: BLEU4 = 40.83, 67.5/46.9/34.4/25.5 (BP=1.000, ratio=1.006, syslen=83262, reflen=82787) - -# Compute BLEU score -grep ^H /tmp/gen.out | cut -f3- > /tmp/gen.out.sys -grep ^T /tmp/gen.out | cut -f2- > /tmp/gen.out.ref -fairseq-score --sys /tmp/gen.out.sys --ref /tmp/gen.out.ref -# BLEU4 = 40.83, 67.5/46.9/34.4/25.5 (BP=1.000, ratio=1.006, syslen=83262, reflen=82787) -``` - -## Training a new model - -### IWSLT'14 German to English (Transformer) - -The following instructions can be used to train a Transformer model on the [IWSLT'14 German to English dataset](http://workshop2014.iwslt.org/downloads/proceeding.pdf). - -First download and preprocess the data: -```bash -# Download and prepare the data -cd examples/translation/ -bash prepare-iwslt14.sh -cd ../.. - -# Preprocess/binarize the data -TEXT=examples/translation/iwslt14.tokenized.de-en -fairseq-preprocess --source-lang de --target-lang en \ - --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \ - --destdir data-bin/iwslt14.tokenized.de-en \ - --workers 20 -``` - -Next we'll train a Transformer translation model over this data: -```bash -CUDA_VISIBLE_DEVICES=0 fairseq-train \ - data-bin/iwslt14.tokenized.de-en \ - --arch transformer_iwslt_de_en --share-decoder-input-output-embed \ - --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \ - --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 4000 \ - --dropout 0.3 --weight-decay 0.0001 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --max-tokens 4096 \ - --eval-bleu \ - --eval-bleu-args '{"beam": 5, "max_len_a": 1.2, "max_len_b": 10}' \ - --eval-bleu-detok moses \ - --eval-bleu-remove-bpe \ - --eval-bleu-print-samples \ - --best-checkpoint-metric bleu --maximize-best-checkpoint-metric -``` - -Finally we can evaluate our trained model: -```bash -fairseq-generate data-bin/iwslt14.tokenized.de-en \ - --path checkpoints/checkpoint_best.pt \ - --batch-size 128 --beam 5 --remove-bpe -``` - -### WMT'14 English to German (Convolutional) - -The following instructions can be used to train a Convolutional translation model on the WMT English to German dataset. -See the [Scaling NMT README](../scaling_nmt/README.md) for instructions to train a Transformer translation model on this data. - -The WMT English to German dataset can be preprocessed using the `prepare-wmt14en2de.sh` script. -By default it will produce a dataset that was modeled after [Attention Is All You Need (Vaswani et al., 2017)](https://arxiv.org/abs/1706.03762), but with additional news-commentary-v12 data from WMT'17. - -To use only data available in WMT'14 or to replicate results obtained in the original [Convolutional Sequence to Sequence Learning (Gehring et al., 2017)](https://arxiv.org/abs/1705.03122) paper, please use the `--icml17` option. - -```bash -# Download and prepare the data -cd examples/translation/ -# WMT'17 data: -bash prepare-wmt14en2de.sh -# or to use WMT'14 data: -# bash prepare-wmt14en2de.sh --icml17 -cd ../.. 
- -# Binarize the dataset -TEXT=examples/translation/wmt17_en_de -fairseq-preprocess \ - --source-lang en --target-lang de \ - --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \ - --destdir data-bin/wmt17_en_de --thresholdtgt 0 --thresholdsrc 0 \ - --workers 20 - -# Train the model -mkdir -p checkpoints/fconv_wmt_en_de -fairseq-train \ - data-bin/wmt17_en_de \ - --arch fconv_wmt_en_de \ - --dropout 0.2 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --optimizer nag --clip-norm 0.1 \ - --lr 0.5 --lr-scheduler fixed --force-anneal 50 \ - --max-tokens 4000 \ - --save-dir checkpoints/fconv_wmt_en_de - -# Evaluate -fairseq-generate data-bin/wmt17_en_de \ - --path checkpoints/fconv_wmt_en_de/checkpoint_best.pt \ - --beam 5 --remove-bpe -``` - -### WMT'14 English to French -```bash -# Download and prepare the data -cd examples/translation/ -bash prepare-wmt14en2fr.sh -cd ../.. - -# Binarize the dataset -TEXT=examples/translation/wmt14_en_fr -fairseq-preprocess \ - --source-lang en --target-lang fr \ - --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \ - --destdir data-bin/wmt14_en_fr --thresholdtgt 0 --thresholdsrc 0 \ - --workers 60 - -# Train the model -mkdir -p checkpoints/fconv_wmt_en_fr -fairseq-train \ - data-bin/wmt14_en_fr \ - --arch fconv_wmt_en_fr \ - --dropout 0.1 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --optimizer nag --clip-norm 0.1 \ - --lr 0.5 --lr-scheduler fixed --force-anneal 50 \ - --max-tokens 3000 \ - --save-dir checkpoints/fconv_wmt_en_fr - -# Evaluate -fairseq-generate \ - data-bin/fconv_wmt_en_fr \ - --path checkpoints/fconv_wmt_en_fr/checkpoint_best.pt \ - --beam 5 --remove-bpe -``` - -## Multilingual Translation - -We also support training multilingual translation models. In this example we'll -train a multilingual `{de,fr}-en` translation model using the IWSLT'17 datasets. - -Note that we use slightly different preprocessing here than for the IWSLT'14 -En-De data above. In particular we learn a joint BPE code for all three -languages and use fairseq-interactive and sacrebleu for scoring the test set. - -```bash -# First install sacrebleu and sentencepiece -pip install sacrebleu sentencepiece - -# Then download and preprocess the data -cd examples/translation/ -bash prepare-iwslt17-multilingual.sh -cd ../.. 
- -# Binarize the de-en dataset -TEXT=examples/translation/iwslt17.de_fr.en.bpe16k -fairseq-preprocess --source-lang de --target-lang en \ - --trainpref $TEXT/train.bpe.de-en \ - --validpref $TEXT/valid0.bpe.de-en,$TEXT/valid1.bpe.de-en,$TEXT/valid2.bpe.de-en,$TEXT/valid3.bpe.de-en,$TEXT/valid4.bpe.de-en,$TEXT/valid5.bpe.de-en \ - --destdir data-bin/iwslt17.de_fr.en.bpe16k \ - --workers 10 - -# Binarize the fr-en dataset -# NOTE: it's important to reuse the en dictionary from the previous step -fairseq-preprocess --source-lang fr --target-lang en \ - --trainpref $TEXT/train.bpe.fr-en \ - --validpref $TEXT/valid0.bpe.fr-en,$TEXT/valid1.bpe.fr-en,$TEXT/valid2.bpe.fr-en,$TEXT/valid3.bpe.fr-en,$TEXT/valid4.bpe.fr-en,$TEXT/valid5.bpe.fr-en \ - --tgtdict data-bin/iwslt17.de_fr.en.bpe16k/dict.en.txt \ - --destdir data-bin/iwslt17.de_fr.en.bpe16k \ - --workers 10 - -# Train a multilingual transformer model -# NOTE: the command below assumes 1 GPU, but accumulates gradients from -# 8 fwd/bwd passes to simulate training on 8 GPUs -mkdir -p checkpoints/multilingual_transformer -CUDA_VISIBLE_DEVICES=0 fairseq-train data-bin/iwslt17.de_fr.en.bpe16k/ \ - --max-epoch 50 \ - --ddp-backend=legacy_ddp \ - --task multilingual_translation --lang-pairs de-en,fr-en \ - --arch multilingual_transformer_iwslt_de_en \ - --share-decoders --share-decoder-input-output-embed \ - --optimizer adam --adam-betas '(0.9, 0.98)' \ - --lr 0.0005 --lr-scheduler inverse_sqrt \ - --warmup-updates 4000 --warmup-init-lr '1e-07' \ - --label-smoothing 0.1 --criterion label_smoothed_cross_entropy \ - --dropout 0.3 --weight-decay 0.0001 \ - --save-dir checkpoints/multilingual_transformer \ - --max-tokens 4000 \ - --update-freq 8 - -# Generate and score the test set with sacrebleu -SRC=de -sacrebleu --test-set iwslt17 --language-pair ${SRC}-en --echo src \ - | python scripts/spm_encode.py --model examples/translation/iwslt17.de_fr.en.bpe16k/sentencepiece.bpe.model \ - > iwslt17.test.${SRC}-en.${SRC}.bpe -cat iwslt17.test.${SRC}-en.${SRC}.bpe \ - | fairseq-interactive data-bin/iwslt17.de_fr.en.bpe16k/ \ - --task multilingual_translation --lang-pairs de-en,fr-en \ - --source-lang ${SRC} --target-lang en \ - --path checkpoints/multilingual_transformer/checkpoint_best.pt \ - --buffer-size 2000 --batch-size 128 \ - --beam 5 --remove-bpe=sentencepiece \ - > iwslt17.test.${SRC}-en.en.sys -grep ^H iwslt17.test.${SRC}-en.en.sys | cut -f3 \ - | sacrebleu --test-set iwslt17 --language-pair ${SRC}-en -``` - -##### Argument format during inference - -During inference it is required to specify a single `--source-lang` and -`--target-lang`, which indicates the inference langauge direction. -`--lang-pairs`, `--encoder-langtok`, `--decoder-langtok` have to be set to -the same value as training. diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/decode.sh b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/decode.sh deleted file mode 100644 index 811cb63c88bb7cdd03b0a250ef2db32b5eaa50df..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/local/decode.sh +++ /dev/null @@ -1,38 +0,0 @@ -#!/bin/bash - -set -u - -val_sets="dev_other" -graph_name=graph -decode_suffix="" -decode_script="steps/decode_fmllr.sh" -decode_args="" -nj=60 - -. ./cmd.sh -. ./path.sh -. parse_options.sh - -set -x -exp_dir=$1 -data_root=$2 -lang_test=$3 - -graph=$exp_dir/$graph_name - -if [ ! 
-d $graph ]; then - utils/mkgraph.sh $lang_test $exp_dir $graph -fi - -for part in $val_sets; do - dec_dir=$exp_dir/decode${decode_suffix}_${part} - if [ ! -d $dec_dir ]; then - echo "decoding $part for $exp_dir" - $decode_script --nj $nj --cmd "$decode_cmd" $decode_args \ - $graph $data_root/$part $dec_dir & - else - echo "$dec_dir exists. skip" - fi -done - -wait diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_multi_corpus_sampled_dataset.py b/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_multi_corpus_sampled_dataset.py deleted file mode 100644 index 05b20328c5605178767d138cc75e070824679842..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/test_multi_corpus_sampled_dataset.py +++ /dev/null @@ -1,95 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import unittest -from collections import OrderedDict - -import numpy as np -import torch -from fairseq.data import LanguagePairDataset, TokenBlockDataset -from fairseq.data.multi_corpus_sampled_dataset import MultiCorpusSampledDataset -from tests.test_train import mock_dict - - -class TestMultiCorpusSampledDataset(unittest.TestCase): - def setUp(self): - d = mock_dict() - tokens_1 = torch.LongTensor([1]).view(1, -1) - tokens_ds1 = TokenBlockDataset( - tokens_1, - sizes=[tokens_1.size(-1)], - block_size=1, - pad=0, - eos=1, - include_targets=False, - ) - self.dataset_1 = LanguagePairDataset( - tokens_ds1, tokens_ds1.sizes, d, shuffle=False - ) - tokens_2 = torch.LongTensor([2]).view(1, -1) - tokens_ds2 = TokenBlockDataset( - tokens_2, - sizes=[tokens_2.size(-1)], - block_size=1, - pad=0, - eos=1, - include_targets=False, - ) - self.dataset_2 = LanguagePairDataset( - tokens_ds2, tokens_ds2.sizes, d, shuffle=False - ) - - def _test_sample_helper( - self, - expected_sample_from_first_ds_percentage, - num_samples=1000, - sampling_func=None, - ): - # To make sure test is not flaky - np.random.seed(0) - if sampling_func is None: - m = MultiCorpusSampledDataset( - OrderedDict({0: self.dataset_1, 1: self.dataset_2}), - ) - else: - m = MultiCorpusSampledDataset( - OrderedDict({0: self.dataset_1, 1: self.dataset_2}), - sampling_func=sampling_func, - ) - m.ordered_indices() - count_sample_from_first_dataset = 0 - for _ in range(num_samples): - if m.collater([m[0], m[1]])["net_input"]["src_tokens"][0] == 1: - count_sample_from_first_dataset += 1 - sample_from_first_ds_percentage = ( - 1.0 * count_sample_from_first_dataset / num_samples - ) - self.assertLess( - abs( - sample_from_first_ds_percentage - - expected_sample_from_first_ds_percentage - ), - 0.01, - ) - - def test_multi_corpus_sampled_dataset_uniform_sample(self): - self._test_sample_helper(expected_sample_from_first_ds_percentage=0.5) - - def test_multi_corpus_sampled_dataset_weighted_sample(self): - def naive_weighted_sample(weights): - def f(l): - v = np.random.random() - agg = 0 - for i, weight in enumerate(weights): - agg += weight - if agg > v: - return i - - return f - - self._test_sample_helper( - expected_sample_from_first_ds_percentage=0.9, - sampling_func=naive_weighted_sample(weights=[0.9, 0.1]), - ) diff --git a/spaces/OIUGLK/bingo/src/lib/bots/bing/index.ts b/spaces/OIUGLK/bingo/src/lib/bots/bing/index.ts deleted file mode 100644 index 2c4afae01a345b8415935228566cb30d695e768d..0000000000000000000000000000000000000000 --- a/spaces/OIUGLK/bingo/src/lib/bots/bing/index.ts +++ /dev/null @@ -1,421 +0,0 @@ -import 
{ fetch, WebSocket, debug } from '@/lib/isomorphic' -import WebSocketAsPromised from 'websocket-as-promised' -import { - SendMessageParams, - BingConversationStyle, - ConversationResponse, - ChatResponseMessage, - ConversationInfo, - InvocationEventType, - ChatError, - ErrorCode, - ChatUpdateCompleteResponse, - ImageInfo, - KBlobResponse -} from './types' - -import { convertMessageToMarkdown, websocketUtils, streamAsyncIterable } from './utils' -import { WatchDog, createChunkDecoder } from '@/lib/utils' - -type Params = SendMessageParams<{ bingConversationStyle: BingConversationStyle }> - -const OPTIONS_SETS = [ - 'nlu_direct_response_filter', - 'deepleo', - 'disable_emoji_spoken_text', - 'responsible_ai_policy_235', - 'enablemm', - 'iycapbing', - 'iyxapbing', - 'objopinion', - 'rweasgv2', - 'dagslnv1', - 'dv3sugg', - 'autosave', - 'iyoloxap', - 'iyoloneutral', - 'clgalileo', - 'gencontentv3', -] - -export class BingWebBot { - protected conversationContext?: ConversationInfo - protected cookie: string - protected ua: string - protected endpoint = '' - private lastText = '' - private asyncTasks: Array<Promise<any>> = [] - - constructor(opts: { - cookie: string - ua: string - bingConversationStyle?: BingConversationStyle - conversationContext?: ConversationInfo - }) { - const { cookie, ua, conversationContext } = opts - this.cookie = cookie?.includes(';') ? cookie : `_EDGE_V=1; _U=${cookie}` - this.ua = ua - this.conversationContext = conversationContext - } - - static buildChatRequest(conversation: ConversationInfo) { - const optionsSets = OPTIONS_SETS - if (conversation.conversationStyle === BingConversationStyle.Precise) { - optionsSets.push('h3precise') - } else if (conversation.conversationStyle === BingConversationStyle.Creative) { - optionsSets.push('h3imaginative') - } - return { - arguments: [ - { - source: 'cib', - optionsSets, - allowedMessageTypes: [ - 'Chat', - 'InternalSearchQuery', - 'Disengaged', - 'InternalLoaderMessage', - 'SemanticSerp', - 'GenerateContentQuery', - 'SearchQuery', - ], - sliceIds: [ - 'winmuid1tf', - 'anssupfor_c', - 'imgchatgptv2', - 'tts2cf', - 'contansperf', - 'mlchatpc8500w', - 'mlchatpc2', - 'ctrlworkpay', - 'winshortmsgtf', - 'cibctrl', - 'sydtransctrl', - 'sydconfigoptc', - '0705trt4', - '517opinion', - '628ajcopus0', - '330uaugs0', - '529rwea', - '0626snptrcs0', - '424dagslnv1', - ], - isStartOfSession: conversation.invocationId === 0, - message: { - author: 'user', - inputMethod: 'Keyboard', - text: conversation.prompt, - imageUrl: conversation.imageUrl, - messageType: 'Chat', - }, - conversationId: conversation.conversationId, - conversationSignature: conversation.conversationSignature, - participant: { id: conversation.clientId }, - }, - ], - invocationId: conversation.invocationId.toString(), - target: 'chat', - type: InvocationEventType.StreamInvocation, - } - } - - async createConversation(): Promise<ConversationResponse> { - const headers = { - 'Accept-Encoding': 'gzip, deflate, br, zsdch', - 'User-Agent': this.ua, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - cookie: this.cookie, - } - - let resp: ConversationResponse | undefined - try { - const response = await fetch(this.endpoint + '/api/create', { method: 'POST', headers, redirect: 'error', mode: 'cors', credentials: 'include' }) - if (response.status === 404) { - throw new ChatError('Not Found', ErrorCode.NOTFOUND_ERROR) - } - resp = await response.json() as ConversationResponse - } catch (err) { - console.error('create conversation 
error', err) - } - - if (!resp?.result) { - throw new ChatError('Invalid response', ErrorCode.UNKOWN_ERROR) - } - - const { value, message } = resp.result || {} - if (value !== 'Success') { - const errorMsg = `${value}: ${message}` - if (value === 'UnauthorizedRequest') { - throw new ChatError(errorMsg, ErrorCode.BING_UNAUTHORIZED) - } - if (value === 'Forbidden') { - throw new ChatError(errorMsg, ErrorCode.BING_FORBIDDEN) - } - throw new ChatError(errorMsg, ErrorCode.UNKOWN_ERROR) - } - return resp - } - - private async createContext(conversationStyle: BingConversationStyle) { - if (!this.conversationContext) { - const conversation = await this.createConversation() - this.conversationContext = { - conversationId: conversation.conversationId, - conversationSignature: conversation.conversationSignature, - clientId: conversation.clientId, - invocationId: 0, - conversationStyle, - prompt: '', - } - } - return this.conversationContext - } - - async sendMessage(params: Params) { - try { - await this.createContext(params.options.bingConversationStyle) - Object.assign(this.conversationContext!, { prompt: params.prompt, imageUrl: params.imageUrl }) - return this.sydneyProxy(params) - } catch (error) { - params.onEvent({ - type: 'ERROR', - error: error instanceof ChatError ? error : new ChatError('Catch Error', ErrorCode.UNKOWN_ERROR), - }) - } - } - - private async sydneyProxy(params: Params) { - const abortController = new AbortController() - const response = await fetch(this.endpoint + '/api/sydney', { - method: 'POST', - headers: { - 'Content-Type': 'application/json', - }, - signal: abortController.signal, - body: JSON.stringify(this.conversationContext!) - }) - if (response.status !== 200) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - 'Unknown error', - ErrorCode.UNKOWN_ERROR, - ), - }) - } - params.signal?.addEventListener('abort', () => { - abortController.abort() - }) - - const textDecoder = createChunkDecoder() - for await (const chunk of streamAsyncIterable(response.body!)) { - this.parseEvents(params, websocketUtils.unpackMessage(textDecoder(chunk))) - } - } - - async sendWs() { - const wsConfig: ConstructorParameters<typeof WebSocketAsPromised>[1] = { - packMessage: websocketUtils.packMessage, - unpackMessage: websocketUtils.unpackMessage, - createWebSocket: (url) => new WebSocket(url, { - headers: { - 'accept-language': 'zh-CN,zh;q=0.9', - 'cache-control': 'no-cache', - 'User-Agent': this.ua, - pragma: 'no-cache', - cookie: this.cookie, - } - }) - } - const wsp = new WebSocketAsPromised('wss://sydney.bing.com/sydney/ChatHub', wsConfig) - - wsp.open().then(() => { - wsp.sendPacked({ protocol: 'json', version: 1 }) - wsp.sendPacked({ type: 6 }) - wsp.sendPacked(BingWebBot.buildChatRequest(this.conversationContext!)) - }) - - return wsp - } - - private async useWs(params: Params) { - const wsp = await this.sendWs() - const watchDog = new WatchDog() - wsp.onUnpackedMessage.addListener((events) => { - watchDog.watch(() => { - wsp.sendPacked({ type: 6 }) - }) - this.parseEvents(params, events) - }) - - wsp.onClose.addListener(() => { - watchDog.reset() - params.onEvent({ type: 'DONE' }) - wsp.removeAllListeners() - }) - - params.signal?.addEventListener('abort', () => { - wsp.removeAllListeners() - wsp.close() - }) - } - - private async createImage(prompt: string, id: string) { - try { - const headers = { - 'Accept-Encoding': 'gzip, deflate, br, zsdch', - 'User-Agent': this.ua, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 
OS/Win32', - cookie: this.cookie, - } - const query = new URLSearchParams({ - prompt, - id - }) - const response = await fetch(this.endpoint + '/api/image?' + query.toString(), - { - method: 'POST', - headers, - mode: 'cors', - credentials: 'include' - }) - .then(res => res.text()) - if (response) { - this.lastText += '\n' + response - } - } catch (err) { - console.error('Create Image Error', err) - } - } - - private buildKnowledgeApiPayload(imageUrl: string, conversationStyle: BingConversationStyle) { - const imageInfo: ImageInfo = {} - let imageBase64: string | undefined = undefined - const knowledgeRequest = { - imageInfo, - knowledgeRequest: { - invokedSkills: [ - 'ImageById' - ], - subscriptionId: 'Bing.Chat.Multimodal', - invokedSkillsRequestData: { - enableFaceBlur: true - }, - convoData: { - convoid: this.conversationContext?.conversationId, - convotone: conversationStyle, - } - }, - } - - if (imageUrl.startsWith('data:image/')) { - imageBase64 = imageUrl.replace('data:image/', ''); - const partIndex = imageBase64.indexOf(',') - if (partIndex) { - imageBase64 = imageBase64.substring(partIndex + 1) - } - } else { - imageInfo.url = imageUrl - } - return { knowledgeRequest, imageBase64 } - } - - async uploadImage(imageUrl: string, conversationStyle: BingConversationStyle = BingConversationStyle.Creative): Promise<KBlobResponse | undefined> { - if (!imageUrl) { - return - } - await this.createContext(conversationStyle) - const payload = this.buildKnowledgeApiPayload(imageUrl, conversationStyle) - - const response = await fetch(this.endpoint + '/api/kblob', - { - headers: { - 'Content-Type': 'application/json', - }, - method: 'POST', - mode: 'cors', - credentials: 'include', - body: JSON.stringify(payload), - }) - .then(res => res.json()) - .catch(e => { - console.log('Error', e) - }) - return response - } - - private async generateContent(message: ChatResponseMessage) { - if (message.contentType === 'IMAGE') { - this.asyncTasks.push(this.createImage(message.text, message.messageId)) - } - } - - private async parseEvents(params: Params, events: any) { - const conversation = this.conversationContext! - - events?.forEach(async (event: ChatUpdateCompleteResponse) => { - debug('bing event', event) - if (event.type === 3) { - await Promise.all(this.asyncTasks) - this.asyncTasks = [] - params.onEvent({ type: 'UPDATE_ANSWER', data: { text: this.lastText } }) - params.onEvent({ type: 'DONE' }) - conversation.invocationId = parseInt(event.invocationId, 10) + 1 - } else if (event.type === 1) { - const messages = event.arguments[0].messages - if (messages) { - const text = convertMessageToMarkdown(messages[0]) - this.lastText = text - params.onEvent({ type: 'UPDATE_ANSWER', data: { text, spokenText: messages[0].text, throttling: event.arguments[0].throttling } }) - } - } else if (event.type === 2) { - const messages = event.item.messages as ChatResponseMessage[] | undefined - if (!messages) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - event.item.result.error || 'Unknown error', - event.item.result.value === 'Throttled' ? ErrorCode.THROTTLE_LIMIT - : event.item.result.value === 'CaptchaChallenge' ? (this.conversationContext?.conversationId?.includes('BingProdUnAuthenticatedUsers') ? 
ErrorCode.BING_UNAUTHORIZED : ErrorCode.BING_CAPTCHA) - : ErrorCode.UNKOWN_ERROR - ), - }) - return - } - const limited = messages.some((message) => - message.contentOrigin === 'TurnLimiter' - || message.messageType === 'Disengaged' - ) - if (limited) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - 'Sorry, you have reached chat limit in this conversation.', - ErrorCode.CONVERSATION_LIMIT, - ), - }) - return - } - - const lastMessage = event.item.messages.at(-1) as ChatResponseMessage - const specialMessage = event.item.messages.find(message => message.author === 'bot' && message.contentType === 'IMAGE') - if (specialMessage) { - this.generateContent(specialMessage) - } - - if (lastMessage) { - const text = convertMessageToMarkdown(lastMessage) - this.lastText = text - params.onEvent({ - type: 'UPDATE_ANSWER', - data: { text, throttling: event.item.throttling, suggestedResponses: lastMessage.suggestedResponses, sourceAttributions: lastMessage.sourceAttributions }, - }) - } - } - }) - } - - resetConversation() { - this.conversationContext = undefined - } -} diff --git a/spaces/ORI-Muchim/MarinTTS/text/cleaners.py b/spaces/ORI-Muchim/MarinTTS/text/cleaners.py deleted file mode 100644 index 57d924f38f3c58bc53ac23aab3f5c58da2bf26f6..0000000000000000000000000000000000000000 --- a/spaces/ORI-Muchim/MarinTTS/text/cleaners.py +++ /dev/null @@ -1,17 +0,0 @@ -import re - -def japanese_cleaners(text): - from text.japanese import japanese_to_romaji_with_accent - text = japanese_to_romaji_with_accent(text) - if len(text) == 0 or re.match('[A-Za-z]', text[-1]): - text += '.' - return text - - -def japanese_cleaners2(text): - text = text.replace('・・・', '…').replace('・', ' ') - text = japanese_cleaners(text).replace('ts', 'ʦ').replace('...', '…') \ - .replace('(', '').replace(')', '') \ - .replace('[', '').replace(']', '') \ - .replace('*', ' ').replace('{', '').replace('}', '') - return text diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/training/losses/constants.py b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/training/losses/constants.py deleted file mode 100644 index ae3e5e151342232be8e2c2a77fe6fd5798dc2a8c..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/saicinpainting/training/losses/constants.py +++ /dev/null @@ -1,152 +0,0 @@ -weights = {"ade20k": - [6.34517766497462, - 9.328358208955224, - 11.389521640091116, - 16.10305958132045, - 20.833333333333332, - 22.22222222222222, - 25.125628140703515, - 43.29004329004329, - 50.5050505050505, - 54.6448087431694, - 55.24861878453038, - 60.24096385542168, - 62.5, - 66.2251655629139, - 84.74576271186442, - 90.90909090909092, - 91.74311926605505, - 96.15384615384616, - 96.15384615384616, - 97.08737864077669, - 102.04081632653062, - 135.13513513513513, - 149.2537313432836, - 153.84615384615384, - 163.93442622950818, - 166.66666666666666, - 188.67924528301887, - 192.30769230769232, - 217.3913043478261, - 227.27272727272725, - 227.27272727272725, - 227.27272727272725, - 303.03030303030306, - 322.5806451612903, - 333.3333333333333, - 370.3703703703703, - 384.61538461538464, - 416.6666666666667, - 416.6666666666667, - 434.7826086956522, - 434.7826086956522, - 454.5454545454545, - 454.5454545454545, - 500.0, - 526.3157894736842, - 526.3157894736842, - 555.5555555555555, - 555.5555555555555, - 555.5555555555555, - 555.5555555555555, - 555.5555555555555, - 555.5555555555555, - 555.5555555555555, - 588.2352941176471, - 588.2352941176471, - 588.2352941176471, - 
588.2352941176471, - 588.2352941176471, - 666.6666666666666, - 666.6666666666666, - 666.6666666666666, - 666.6666666666666, - 714.2857142857143, - 714.2857142857143, - 714.2857142857143, - 714.2857142857143, - 714.2857142857143, - 769.2307692307693, - 769.2307692307693, - 769.2307692307693, - 833.3333333333334, - 833.3333333333334, - 833.3333333333334, - 833.3333333333334, - 909.090909090909, - 1000.0, - 1111.111111111111, - 1111.111111111111, - 1111.111111111111, - 1111.111111111111, - 1111.111111111111, - 1250.0, - 1250.0, - 1250.0, - 1250.0, - 1250.0, - 1428.5714285714287, - 1428.5714285714287, - 1428.5714285714287, - 1428.5714285714287, - 1428.5714285714287, - 1428.5714285714287, - 1428.5714285714287, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 1666.6666666666667, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2000.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 2500.0, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 3333.3333333333335, - 5000.0, - 5000.0, - 5000.0] -} \ No newline at end of file diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/segm_lib/nn/modules/replicate.py b/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/segm_lib/nn/modules/replicate.py deleted file mode 100644 index b71c7b8ed51a1d6c55b1f753bdd8d90bad79bd06..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/models/ade20k/segm_lib/nn/modules/replicate.py +++ /dev/null @@ -1,94 +0,0 @@ -# -*- coding: utf-8 -*- -# File : replicate.py -# Author : Jiayuan Mao -# Email : maojiayuan@gmail.com -# Date : 27/01/2018 -# -# This file is part of Synchronized-BatchNorm-PyTorch. -# https://github.com/vacancy/Synchronized-BatchNorm-PyTorch -# Distributed under MIT License. - -import functools - -from torch.nn.parallel.data_parallel import DataParallel - -__all__ = [ - 'CallbackContext', - 'execute_replication_callbacks', - 'DataParallelWithCallback', - 'patch_replication_callback' -] - - -class CallbackContext(object): - pass - - -def execute_replication_callbacks(modules): - """ - Execute an replication callback `__data_parallel_replicate__` on each module created by original replication. - - The callback will be invoked with arguments `__data_parallel_replicate__(ctx, copy_id)` - - Note that, as all modules are isomorphism, we assign each sub-module with a context - (shared among multiple copies of this module on different devices). - Through this context, different copies can share some information. - - We guarantee that the callback on the master copy (the first copy) will be called ahead of calling the callback - of any slave copies. 
- """ - master_copy = modules[0] - nr_modules = len(list(master_copy.modules())) - ctxs = [CallbackContext() for _ in range(nr_modules)] - - for i, module in enumerate(modules): - for j, m in enumerate(module.modules()): - if hasattr(m, '__data_parallel_replicate__'): - m.__data_parallel_replicate__(ctxs[j], i) - - -class DataParallelWithCallback(DataParallel): - """ - Data Parallel with a replication callback. - - An replication callback `__data_parallel_replicate__` of each module will be invoked after being created by - original `replicate` function. - The callback will be invoked with arguments `__data_parallel_replicate__(ctx, copy_id)` - - Examples: - > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False) - > sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1]) - # sync_bn.__data_parallel_replicate__ will be invoked. - """ - - def replicate(self, module, device_ids): - modules = super(DataParallelWithCallback, self).replicate(module, device_ids) - execute_replication_callbacks(modules) - return modules - - -def patch_replication_callback(data_parallel): - """ - Monkey-patch an existing `DataParallel` object. Add the replication callback. - Useful when you have customized `DataParallel` implementation. - - Examples: - > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False) - > sync_bn = DataParallel(sync_bn, device_ids=[0, 1]) - > patch_replication_callback(sync_bn) - # this is equivalent to - > sync_bn = SynchronizedBatchNorm1d(10, eps=1e-5, affine=False) - > sync_bn = DataParallelWithCallback(sync_bn, device_ids=[0, 1]) - """ - - assert isinstance(data_parallel, DataParallel) - - old_replicate = data_parallel.replicate - - @functools.wraps(old_replicate) - def new_replicate(module, device_ids): - modules = old_replicate(module, device_ids) - execute_replication_callbacks(modules) - return modules - - data_parallel.replicate = new_replicate diff --git a/spaces/OptimalScale/Robin-33b/lmflow/models/regression_model.py b/spaces/OptimalScale/Robin-33b/lmflow/models/regression_model.py deleted file mode 100644 index 43d0dfc7b06ec134cd81d19a2cf4e78c3af77ce9..0000000000000000000000000000000000000000 --- a/spaces/OptimalScale/Robin-33b/lmflow/models/regression_model.py +++ /dev/null @@ -1,11 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -"""General regression model.""" - -from lmflow.models.base_model import BaseModel - - -class RegressionModel(BaseModel): - - def __init__(self, *args, **kwargs): - pass diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/epoch_based_runner.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/epoch_based_runner.py deleted file mode 100644 index 766a9ce6afdf09cd11b1b15005f5132583011348..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/runner/epoch_based_runner.py +++ /dev/null @@ -1,187 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -import platform -import shutil -import time -import warnings - -import torch - -import annotator.uniformer.mmcv as mmcv -from .base_runner import BaseRunner -from .builder import RUNNERS -from .checkpoint import save_checkpoint -from .utils import get_host_info - - -@RUNNERS.register_module() -class EpochBasedRunner(BaseRunner): - """Epoch-based Runner. - - This runner train models epoch by epoch. 
- """ - - def run_iter(self, data_batch, train_mode, **kwargs): - if self.batch_processor is not None: - outputs = self.batch_processor( - self.model, data_batch, train_mode=train_mode, **kwargs) - elif train_mode: - outputs = self.model.train_step(data_batch, self.optimizer, - **kwargs) - else: - outputs = self.model.val_step(data_batch, self.optimizer, **kwargs) - if not isinstance(outputs, dict): - raise TypeError('"batch_processor()" or "model.train_step()"' - 'and "model.val_step()" must return a dict') - if 'log_vars' in outputs: - self.log_buffer.update(outputs['log_vars'], outputs['num_samples']) - self.outputs = outputs - - def train(self, data_loader, **kwargs): - self.model.train() - self.mode = 'train' - self.data_loader = data_loader - self._max_iters = self._max_epochs * len(self.data_loader) - self.call_hook('before_train_epoch') - time.sleep(2) # Prevent possible deadlock during epoch transition - for i, data_batch in enumerate(self.data_loader): - self._inner_iter = i - self.call_hook('before_train_iter') - self.run_iter(data_batch, train_mode=True, **kwargs) - self.call_hook('after_train_iter') - self._iter += 1 - - self.call_hook('after_train_epoch') - self._epoch += 1 - - @torch.no_grad() - def val(self, data_loader, **kwargs): - self.model.eval() - self.mode = 'val' - self.data_loader = data_loader - self.call_hook('before_val_epoch') - time.sleep(2) # Prevent possible deadlock during epoch transition - for i, data_batch in enumerate(self.data_loader): - self._inner_iter = i - self.call_hook('before_val_iter') - self.run_iter(data_batch, train_mode=False) - self.call_hook('after_val_iter') - - self.call_hook('after_val_epoch') - - def run(self, data_loaders, workflow, max_epochs=None, **kwargs): - """Start running. - - Args: - data_loaders (list[:obj:`DataLoader`]): Dataloaders for training - and validation. - workflow (list[tuple]): A list of (phase, epochs) to specify the - running order and epochs. E.g, [('train', 2), ('val', 1)] means - running 2 epochs for training and 1 epoch for validation, - iteratively. 
- """ - assert isinstance(data_loaders, list) - assert mmcv.is_list_of(workflow, tuple) - assert len(data_loaders) == len(workflow) - if max_epochs is not None: - warnings.warn( - 'setting max_epochs in run is deprecated, ' - 'please set max_epochs in runner_config', DeprecationWarning) - self._max_epochs = max_epochs - - assert self._max_epochs is not None, ( - 'max_epochs must be specified during instantiation') - - for i, flow in enumerate(workflow): - mode, epochs = flow - if mode == 'train': - self._max_iters = self._max_epochs * len(data_loaders[i]) - break - - work_dir = self.work_dir if self.work_dir is not None else 'NONE' - self.logger.info('Start running, host: %s, work_dir: %s', - get_host_info(), work_dir) - self.logger.info('Hooks will be executed in the following order:\n%s', - self.get_hook_info()) - self.logger.info('workflow: %s, max: %d epochs', workflow, - self._max_epochs) - self.call_hook('before_run') - - while self.epoch < self._max_epochs: - for i, flow in enumerate(workflow): - mode, epochs = flow - if isinstance(mode, str): # self.train() - if not hasattr(self, mode): - raise ValueError( - f'runner has no method named "{mode}" to run an ' - 'epoch') - epoch_runner = getattr(self, mode) - else: - raise TypeError( - 'mode in workflow must be a str, but got {}'.format( - type(mode))) - - for _ in range(epochs): - if mode == 'train' and self.epoch >= self._max_epochs: - break - epoch_runner(data_loaders[i], **kwargs) - - time.sleep(1) # wait for some hooks like loggers to finish - self.call_hook('after_run') - - def save_checkpoint(self, - out_dir, - filename_tmpl='epoch_{}.pth', - save_optimizer=True, - meta=None, - create_symlink=True): - """Save the checkpoint. - - Args: - out_dir (str): The directory that checkpoints are saved. - filename_tmpl (str, optional): The checkpoint filename template, - which contains a placeholder for the epoch number. - Defaults to 'epoch_{}.pth'. - save_optimizer (bool, optional): Whether to save the optimizer to - the checkpoint. Defaults to True. - meta (dict, optional): The meta information to be saved in the - checkpoint. Defaults to None. - create_symlink (bool, optional): Whether to create a symlink - "latest.pth" to point to the latest checkpoint. - Defaults to True. - """ - if meta is None: - meta = {} - elif not isinstance(meta, dict): - raise TypeError( - f'meta should be a dict or None, but got {type(meta)}') - if self.meta is not None: - meta.update(self.meta) - # Note: meta.update(self.meta) should be done before - # meta.update(epoch=self.epoch + 1, iter=self.iter) otherwise - # there will be problems with resumed checkpoints. 
- # More details in https://github.com/open-mmlab/mmcv/pull/1108 - meta.update(epoch=self.epoch + 1, iter=self.iter) - - filename = filename_tmpl.format(self.epoch + 1) - filepath = osp.join(out_dir, filename) - optimizer = self.optimizer if save_optimizer else None - save_checkpoint(self.model, filepath, optimizer=optimizer, meta=meta) - # in some environments, `os.symlink` is not supported, you may need to - # set `create_symlink` to False - if create_symlink: - dst_file = osp.join(out_dir, 'latest.pth') - if platform.system() != 'Windows': - mmcv.symlink(filename, dst_file) - else: - shutil.copy(filepath, dst_file) - - -@RUNNERS.register_module() -class Runner(EpochBasedRunner): - """Deprecated name of EpochBasedRunner.""" - - def __init__(self, *args, **kwargs): - warnings.warn( - 'Runner was deprecated, please use EpochBasedRunner instead') - super().__init__(*args, **kwargs) diff --git "a/spaces/ParagKesharDas360/MovieRecommadationApp/pages/4_\342\255\220_Rating Page.py" "b/spaces/ParagKesharDas360/MovieRecommadationApp/pages/4_\342\255\220_Rating Page.py" deleted file mode 100644 index 06cfb44af70da77d07267e40bcb2bce73007ad2b..0000000000000000000000000000000000000000 --- "a/spaces/ParagKesharDas360/MovieRecommadationApp/pages/4_\342\255\220_Rating Page.py" +++ /dev/null @@ -1,192 +0,0 @@ -import streamlit as st -import subprocess -import csv -from streamlit import cache -import os - -import re -from datetime import datetime -# import streamlit as st -import argparse -os.environ['TF_CPP_MIN_LOG_LEVEL']='2' - -import tensorflow as tf -from ipaddress import summarize_address_range -# import streamlit as st -import pandas as pd -import numpy as np -import pickle -import keras.optimizers -import keras.regularizers -from keras import layers -from sklearn.feature_extraction.text import TfidfVectorizer -from sklearn.feature_extraction.text import CountVectorizer -from sklearn.metrics.pairwise import cosine_similarity -from sklearn.metrics.pairwise import linear_kernel -import matplotlib.pyplot as plt -import requests -from typing import List -# @st.cache(suppress_st_warning=True, allow_output_mutation=True) - - -def load_movies(): - movies = pd.read_csv('movies.csv') - return movies -st.set_page_config(page_title="Login Page") - -if st.session_state["UserID"] == "" and st.session_state["UserName"] == "": - st.error("Login First!!") -else: - username = str(st.session_state["UserName"]) - user_id = int(st.session_state["UserID"]) - # print("Hi else") - - - # Open the login.csv file - with open('login.csv', encoding="cp437", errors='ignore') as login_file: - csv_reader = csv.reader(login_file) - found_match = False - movies = load_movies() - ratings = pd.read_csv('ratings.csv') - - # Load login.csv file and create a dictionary to map usernames to new_user_ids - login = pd.read_csv('login.csv') - - # Create a function to get the movieId based on the movie title - - - def find_movie_id_by_title(title): - with open('movies.csv', 'r',encoding='utf-8') as file: - reader = csv.reader(file) - next(reader) # skip header row - for row in reader: - if row[1] == title: - return row[0] - return None - - def find_movie_id_by_genres(title): - with open('movies.csv', 'r',encoding='utf-8') as file: - reader = csv.reader(file) - next(reader) # skip header row - for row in reader: - if row[1] == title: - return row[2] - return None - - def find_user_rating(user_id, movie_id): - user_ratings = ratings[(ratings['userId'] == user_id) & (ratings['movieId'] == movie_id)] - if len(user_ratings) > 0: - return 
user_ratings.index[0] - else: - return None -############################## - def getMovieTitle(title_with_year): - # Extracting the title - title = re.search(r"^(.*?)(?:,\s*The)?\s*\(", title_with_year).group(1).strip() - - # Extracting the year - year = re.search(r"\((\d{4})\)$", title_with_year).group(1) - - # Printing the results - print("Title:", title) - print("Year:", year) - og=title+'('+year+')' - return title,og - api_key = "3e67b4fa" - - def fetch_movie_poster(title): - url = f"http://www.omdbapi.com/?apikey={api_key}&t={title}" - response = requests.get(url) - data = response.json() - if "Poster" in data and data["Poster"] != "N/A": - return data["Poster"] - return "na1.png" - - def moviePoster(movie_title): - poster_url = fetch_movie_poster(movie_title) - if poster_url: - st.image(poster_url, caption=movie_title, use_column_width=True, width=200) - else: - st.image("na1.png", caption=movie_title, use_column_width=True, width=200) - - # st.set_page_config(page_title="Rate a Movie", page_icon=":movie_camera:", layout="wide", initial_sidebar_state="expanded") - - # Retrieve the username and user ID arguments - - username = str(username) - user_id = int(user_id) - - # Display the username and user ID - st.subheader(f"Welcome, {username} !!") - - - # Create the streamlit app - st.title('Rate a Movie') - - # Get the movie title from the user - # Get the new_user_id and movieId based on the inputs - - movie_title = st.selectbox("Movie Title", movies['title'].tolist()) - movie_id=find_movie_id_by_title(movie_title) - movie_genres=find_movie_id_by_genres(movie_title) - timestamp = str(int(datetime.now().timestamp())) - # st.text(movie_title+"'s movie_id is "+movie_id+" of genres "+movie_genres+timestamp) - row_index=find_user_rating(int(user_id),int(movie_id)) - if row_index: - prevoius_rating=ratings.iloc[row_index].loc['rating'] - else: - prevoius_rating=None - st.subheader("Movie Details") - st.info(f"Movie Title: {getMovieTitle(movie_title)[1] }") - st.info(f"Movie ID: {movie_id}") - st.info(f"Movie Genres: {movie_genres}") - st.info(f"Movie Rating:{prevoius_rating}") - MT=getMovieTitle(movie_title)[0] - st.image(fetch_movie_poster(MT),caption=MT) - # Center-align the element-container class - centering_css = """ - <style> - .etr89bj2 { - display: flex; - justify-content: center; - align-items: center; - hight:100vh - - } - </style> - """ - st.markdown(centering_css, unsafe_allow_html=True) - # Get the rating from the user - rating = st.slider('Rating', 1.0, 5.0, step=0.5) - col11, col22 = st.columns(2) - - rateBtn = col11.button("Rate Now") - unrateBtn = col22.button("Unrate Now") - - # st.text("Row Index is "+str(row_index)) - if rateBtn: - if row_index is not None: - # If the user has already rated this movie, update the rating in the ratings DataFrame - ratings.at[row_index, 'rating'] = rating - st.success(f"Rating has updated") - else: - # If the user has not yet rated this movie, add a new row to the ratings DataFrame - new_row = {'userId': user_id, 'movieId': movie_id, 'rating': rating, 'timestamp': int(datetime.now().timestamp())} - ratings = ratings.append(new_row, ignore_index=True) - st.success(f"New rating has added ") - - # Write the updated ratings DataFrame to the ratings.csv file - ratings.to_csv('ratings.csv', index=False) - - - if unrateBtn: - # If the "Unrate" button was clicked, delete the rating from the ratings DataFrame - if row_index is not None: - ratings.drop(row_index, inplace=True) - st.success(f"Rating has deleted") - else: - st.warning("No rating found 
to delete.") - - # Write the updated ratings DataFrame to the ratings.csv file - ratings.to_csv('ratings.csv', index=False) - \ No newline at end of file diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-2.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-2.go deleted file mode 100644 index 07a913e79669820d80213865fdba7e0a18bfbc09..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/srfi/srfi-2.go and /dev/null differ diff --git a/spaces/Pengyey/bingo-chuchu/tests/parse.ts b/spaces/Pengyey/bingo-chuchu/tests/parse.ts deleted file mode 100644 index 92940fe6315f1d7cb2b267ba5e5a7e26460a1de3..0000000000000000000000000000000000000000 --- a/spaces/Pengyey/bingo-chuchu/tests/parse.ts +++ /dev/null @@ -1,13 +0,0 @@ -import { promises as fs } from 'fs' -import { join } from 'path' -import { parseHeadersFromCurl } from '@/lib/utils' - -(async () => { - const content = await fs.readFile(join(__dirname, './fixtures/curl.txt'), 'utf-8') - const headers = parseHeadersFromCurl(content) - console.log(headers) - - const cmdContent = await fs.readFile(join(__dirname, './fixtures/cmd.txt'), 'utf-8') - const cmdHeaders = parseHeadersFromCurl(cmdContent) - console.log(cmdHeaders) -})() diff --git a/spaces/Plurigrid/LifeSim/src/components/ui/dialog.tsx b/spaces/Plurigrid/LifeSim/src/components/ui/dialog.tsx deleted file mode 100644 index c5621059f4149bbc1b008837dd68082c76a8a5c5..0000000000000000000000000000000000000000 --- a/spaces/Plurigrid/LifeSim/src/components/ui/dialog.tsx +++ /dev/null @@ -1,123 +0,0 @@ -"use client" - -import * as React from "react" -import * as DialogPrimitive from "@radix-ui/react-dialog" -import { X } from "lucide-react" - -import { cn } from "@/lib/utils" - -const Dialog = DialogPrimitive.Root - -const DialogTrigger = DialogPrimitive.Trigger - -const DialogPortal = ({ - className, - ...props -}: DialogPrimitive.DialogPortalProps) => ( - <DialogPrimitive.Portal className={cn(className)} {...props} /> -) -DialogPortal.displayName = DialogPrimitive.Portal.displayName - -const DialogOverlay = React.forwardRef< - React.ElementRef<typeof DialogPrimitive.Overlay>, - React.ComponentPropsWithoutRef<typeof DialogPrimitive.Overlay> ->(({ className, ...props }, ref) => ( - <DialogPrimitive.Overlay - ref={ref} - className={cn( - "fixed inset-0 z-50 bg-white/80 backdrop-blur-sm data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0 dark:bg-stone-950/80", - className - )} - {...props} - /> -)) -DialogOverlay.displayName = DialogPrimitive.Overlay.displayName - -const DialogContent = React.forwardRef< - React.ElementRef<typeof DialogPrimitive.Content>, - React.ComponentPropsWithoutRef<typeof DialogPrimitive.Content> ->(({ className, children, ...props }, ref) => ( - <DialogPortal> - <DialogOverlay /> - <DialogPrimitive.Content - ref={ref} - className={cn( - "fixed left-[50%] top-[50%] z-50 grid w-full max-w-lg translate-x-[-50%] translate-y-[-50%] gap-4 border border-stone-200 bg-white p-6 shadow-lg duration-200 data-[state=open]:animate-in data-[state=closed]:animate-out data-[state=closed]:fade-out-0 data-[state=open]:fade-in-0 data-[state=closed]:zoom-out-95 data-[state=open]:zoom-in-95 data-[state=closed]:slide-out-to-left-1/2 data-[state=closed]:slide-out-to-top-[48%] data-[state=open]:slide-in-from-left-1/2 data-[state=open]:slide-in-from-top-[48%] sm:rounded-lg md:w-full 
dark:border-stone-800 dark:bg-stone-950", - className - )} - {...props} - > - {children} - <DialogPrimitive.Close className="absolute right-4 top-4 rounded-sm opacity-70 ring-offset-white transition-opacity hover:opacity-100 focus:outline-none focus:ring-2 focus:ring-stone-400 focus:ring-offset-2 disabled:pointer-events-none data-[state=open]:bg-stone-100 data-[state=open]:text-stone-500 dark:ring-offset-stone-950 dark:focus:ring-stone-800 dark:data-[state=open]:bg-stone-800 dark:data-[state=open]:text-stone-400"> - <X className="h-4 w-4" /> - <span className="sr-only">Close</span> - </DialogPrimitive.Close> - </DialogPrimitive.Content> - </DialogPortal> -)) -DialogContent.displayName = DialogPrimitive.Content.displayName - -const DialogHeader = ({ - className, - ...props -}: React.HTMLAttributes<HTMLDivElement>) => ( - <div - className={cn( - "flex flex-col space-y-1.5 text-center sm:text-left", - className - )} - {...props} - /> -) -DialogHeader.displayName = "DialogHeader" - -const DialogFooter = ({ - className, - ...props -}: React.HTMLAttributes<HTMLDivElement>) => ( - <div - className={cn( - "flex flex-col-reverse sm:flex-row sm:justify-end sm:space-x-2", - className - )} - {...props} - /> -) -DialogFooter.displayName = "DialogFooter" - -const DialogTitle = React.forwardRef< - React.ElementRef<typeof DialogPrimitive.Title>, - React.ComponentPropsWithoutRef<typeof DialogPrimitive.Title> ->(({ className, ...props }, ref) => ( - <DialogPrimitive.Title - ref={ref} - className={cn( - "text-lg font-semibold leading-none tracking-tight", - className - )} - {...props} - /> -)) -DialogTitle.displayName = DialogPrimitive.Title.displayName - -const DialogDescription = React.forwardRef< - React.ElementRef<typeof DialogPrimitive.Description>, - React.ComponentPropsWithoutRef<typeof DialogPrimitive.Description> ->(({ className, ...props }, ref) => ( - <DialogPrimitive.Description - ref={ref} - className={cn("text-sm text-stone-500 dark:text-stone-400", className)} - {...props} - /> -)) -DialogDescription.displayName = DialogPrimitive.Description.displayName - -export { - Dialog, - DialogTrigger, - DialogContent, - DialogHeader, - DialogFooter, - DialogTitle, - DialogDescription, -} diff --git a/spaces/Podtekatel/Avatar2VSK/inference/onnx_model.py b/spaces/Podtekatel/Avatar2VSK/inference/onnx_model.py deleted file mode 100644 index b5097703ec79dab6e91be0f7117e3dd5a829f7dd..0000000000000000000000000000000000000000 --- a/spaces/Podtekatel/Avatar2VSK/inference/onnx_model.py +++ /dev/null @@ -1,14 +0,0 @@ -import numpy as np -import onnxruntime - - -class ONNXModel: - def __init__(self, onnx_mode_path): - self.path = onnx_mode_path - self.ort_session = onnxruntime.InferenceSession(str(self.path)) - self.input_name = self.ort_session.get_inputs()[0].name - - def __call__(self, img): - ort_inputs = {self.input_name: img.astype(dtype=np.float32)} - ort_outs = self.ort_session.run(None, ort_inputs)[0] - return ort_outs \ No newline at end of file diff --git a/spaces/Potanin/12345/lib/infer_pack/models.py b/spaces/Potanin/12345/lib/infer_pack/models.py deleted file mode 100644 index 3665d03bc0514a6ed07d3372ea24717dae1e0a65..0000000000000000000000000000000000000000 --- a/spaces/Potanin/12345/lib/infer_pack/models.py +++ /dev/null @@ -1,1142 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from lib.infer_pack import modules -from lib.infer_pack import attentions -from lib.infer_pack import commons -from 
lib.infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from lib.infer_pack.commons import init_weights -import numpy as np -from lib.infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - 
gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - 
sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - 
noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - 
"48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - nsff0 = nsff0[:, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - 
n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - nsff0 = nsff0[:, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, 
- spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = 
upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, rate=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - if rate: - head = int(z_p.shape[2] * rate) - z_p = z_p[:, :, -head:] - x_mask = x_mask[:, :, -head:] - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec(z * x_mask, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j 
in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/_pick.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/_pick.py deleted file mode 100644 index 4f6d8b2d79406012c5f8bae9c289ed5bf4d179cc..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/_pick.py +++ /dev/null @@ -1,17 +0,0 @@ -from typing import Optional - - -def pick_bool(*values: Optional[bool]) -> bool: - """Pick the first non-none bool or return the last value. - - Args: - *values (bool): Any number of boolean or None values. - - Returns: - bool: First non-none boolean. 
- """ - assert values, "1 or more values required" - for value in values: - if value is not None: - return value - return bool(value) diff --git a/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/models/topic_fm.py b/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/models/topic_fm.py deleted file mode 100644 index 2556bdbb489574e13a5e5af60be87c546473d406..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/TopicFM/src/models/topic_fm.py +++ /dev/null @@ -1,98 +0,0 @@ -import torch -import torch.nn as nn -from einops.einops import rearrange - -from .backbone import build_backbone -from .modules import LocalFeatureTransformer, FinePreprocess, TopicFormer -from .utils.coarse_matching import CoarseMatching -from .utils.fine_matching import FineMatching - - -class TopicFM(nn.Module): - def __init__(self, config): - super().__init__() - # Misc - self.config = config - - # Modules - self.backbone = build_backbone(config) - - self.loftr_coarse = TopicFormer(config["coarse"]) - self.coarse_matching = CoarseMatching(config["match_coarse"]) - self.fine_preprocess = FinePreprocess(config) - self.loftr_fine = LocalFeatureTransformer(config["fine"]) - self.fine_matching = FineMatching() - - def forward(self, data): - """ - Update: - data (dict): { - 'image0': (torch.Tensor): (N, 1, H, W) - 'image1': (torch.Tensor): (N, 1, H, W) - 'mask0'(optional) : (torch.Tensor): (N, H, W) '0' indicates a padded position - 'mask1'(optional) : (torch.Tensor): (N, H, W) - } - """ - # 1. Local Feature CNN - data.update( - { - "bs": data["image0"].size(0), - "hw0_i": data["image0"].shape[2:], - "hw1_i": data["image1"].shape[2:], - } - ) - - if data["hw0_i"] == data["hw1_i"]: # faster & better BN convergence - feats_c, feats_f = self.backbone( - torch.cat([data["image0"], data["image1"]], dim=0) - ) - (feat_c0, feat_c1), (feat_f0, feat_f1) = feats_c.split( - data["bs"] - ), feats_f.split(data["bs"]) - else: # handle different input shapes - (feat_c0, feat_f0), (feat_c1, feat_f1) = self.backbone( - data["image0"] - ), self.backbone(data["image1"]) - - data.update( - { - "hw0_c": feat_c0.shape[2:], - "hw1_c": feat_c1.shape[2:], - "hw0_f": feat_f0.shape[2:], - "hw1_f": feat_f1.shape[2:], - } - ) - - # 2. coarse-level loftr module - feat_c0 = rearrange(feat_c0, "n c h w -> n (h w) c") - feat_c1 = rearrange(feat_c1, "n c h w -> n (h w) c") - - mask_c0 = mask_c1 = None # mask is useful in training - if "mask0" in data: - mask_c0, mask_c1 = data["mask0"].flatten(-2), data["mask1"].flatten(-2) - - feat_c0, feat_c1, conf_matrix, topic_matrix = self.loftr_coarse( - feat_c0, feat_c1, mask_c0, mask_c1 - ) - data.update({"conf_matrix": conf_matrix, "topic_matrix": topic_matrix}) ###### - - # 3. match coarse-level - self.coarse_matching(data) - - # 4. fine-level refinement - feat_f0_unfold, feat_f1_unfold = self.fine_preprocess( - feat_f0, feat_f1, feat_c0.detach(), feat_c1.detach(), data - ) - if feat_f0_unfold.size(0) != 0: # at least one coarse level predicted - feat_f0_unfold, feat_f1_unfold = self.loftr_fine( - feat_f0_unfold, feat_f1_unfold - ) - - # 5. 
match fine-level - self.fine_matching(feat_f0_unfold, feat_f1_unfold, data) - - def load_state_dict(self, state_dict, *args, **kwargs): - for k in list(state_dict.keys()): - if k.startswith("matcher."): - state_dict[k.replace("matcher.", "", 1)] = state_dict.pop(k) - return super().load_state_dict(state_dict, *args, **kwargs) diff --git a/spaces/Realcat/image-matching-webui/third_party/r2d2/download_training_data.sh b/spaces/Realcat/image-matching-webui/third_party/r2d2/download_training_data.sh deleted file mode 100644 index 8257c83ef70eeab47b6b344d591ddef86ba848cd..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/r2d2/download_training_data.sh +++ /dev/null @@ -1,69 +0,0 @@ -# Copyright 2019-present NAVER Corp. -# CC BY-NC-SA 3.0 -# Available only for non-commercial use - -CODE_ROOT=`pwd` -if [ ! -e data ]; then - echo "Error: missing data/ folder" - echo "First, create a folder that can host (at least) 15 GB of data." - echo "Then, create a soft-link named 'data' that points to it." - exit -1 -fi - -# download web images from the revisitop1m dataset -WEB_ROOT=data/revisitop1m -mkdir -p $WEB_ROOT -cd $WEB_ROOT -if [ ! -e 0d3 ]; then - for i in {1..5}; do - echo "Installing the web images dataset ($i/5)..." - if [ ! -f revisitop1m.$i.tar.gz ]; then - wget http://ptak.felk.cvut.cz/revisitop/revisitop1m/jpg/revisitop1m.$i.tar.gz - fi - tar -xzvf revisitop1m.$i.tar.gz - rm -f revisitop1m.$i.tar.gz - done -fi -cd $CODE_ROOT - -# download aachen images -AACHEN_ROOT=data/aachen -mkdir -p $AACHEN_ROOT -cd $AACHEN_ROOT -if [ ! -e "images_upright" ]; then - echo "Installing the Aachen dataset..." - fname=database_and_query_images.zip - if [ ! -f $fname ]; then - echo "File not found: $fname" - exit -1 - else - unzip $fname - rm -f $fname - fi -fi - -# download style transfer images -if [ ! -e "style_transfer" ]; then - echo "Installing the Aachen style-transfer dataset..." - fname=aachen_style_transfer.zip - if [ ! -f $fname ]; then - wget http://download.europe.naverlabs.com/3DVision/aachen_style_transfer.zip $fname - fi - unzip $fname - rm -f $fname -fi - -# download optical flow pairs -if [ ! -e "optical_flow" ]; then - echo "Installing the Aachen optical flow dataset..." - fname=aachen_optical_flow.zip - if [ ! -f $fname ]; then - wget http://download.europe.naverlabs.com/3DVision/aachen_optical_flow.zip $fname - fi - unzip $fname - rm -f $fname -fi -cd $CODE_ROOT - -echo "Done!" 
- diff --git a/spaces/Reha2704/VToonify/vtoonify/model/stylegan/__init__.py b/spaces/Reha2704/VToonify/vtoonify/model/stylegan/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Ritori/play_with_baby_llama2/import subprocess.py b/spaces/Ritori/play_with_baby_llama2/import subprocess.py deleted file mode 100644 index 9bb9fd7b16ec45eec255633c4f1f73831524c81f..0000000000000000000000000000000000000000 --- a/spaces/Ritori/play_with_baby_llama2/import subprocess.py +++ /dev/null @@ -1,29 +0,0 @@ -import subprocess - -# Define the command as a list -import gradio as gr - -def generate_text(): - - - cmd = ['./run', 'model.bin'] - - # Use subprocess.run to execute the command - result = subprocess.run(cmd, stdout=subprocess.PIPE, text=True) - # 在这里运行你的模型,生成文本 - # 以下是一个示例 - text = "这是生成的文本。" - title="运行来让小羊驼跑起来" - return result - -iface = gr.Interface(fn=generate_text, inputs=[], outputs="text",submit_label="让小羊驼跑起来",title="和小羊驼一起玩") -iface.launch() - - -cmd = ['./run', 'model.bin'] - -# Use subprocess.run to execute the command -result = subprocess.run(cmd, stdout=subprocess.PIPE, text=True) - -# Print the output -print(result.stdout) diff --git a/spaces/Rmpmartinspro2/EimisAnimeDiffusion_1.0v/app.py b/spaces/Rmpmartinspro2/EimisAnimeDiffusion_1.0v/app.py deleted file mode 100644 index 48cb4dc74dcf2841f40d4ee0fdfd720ef4834584..0000000000000000000000000000000000000000 --- a/spaces/Rmpmartinspro2/EimisAnimeDiffusion_1.0v/app.py +++ /dev/null @@ -1,15 +0,0 @@ -import os -import gradio as gr - -API_KEY=os.environ.get('HUGGING_FACE_HUB_TOKEN', None) - -article = """--- -This space was created using [SD Space Creator](https://huggingface.co/spaces/anzorq/sd-space-creator).""" - -gr.Interface.load( - name="models/eimiss/EimisAnimeDiffusion_1.0v", - title="""Eimisanimediffusion 1.0v""", - description="""Demo for <a href="https://huggingface.co/eimiss/EimisAnimeDiffusion_1.0v">Eimisanimediffusion 1.0v</a> Stable Diffusion model.""", - article=article, - api_key=API_KEY, - ).queue(concurrency_count=20).launch() diff --git a/spaces/RobLi/ControlNet-v1-1/app_openpose.py b/spaces/RobLi/ControlNet-v1-1/app_openpose.py deleted file mode 100644 index a4dd2aa4a97d9526e239633e95fdd0d6162ffe9d..0000000000000000000000000000000000000000 --- a/spaces/RobLi/ControlNet-v1-1/app_openpose.py +++ /dev/null @@ -1,104 +0,0 @@ -#!/usr/bin/env python - -import gradio as gr - -from utils import randomize_seed_fn - - -def create_demo(process, max_images=12, default_num_images=3): - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - image = gr.Image() - prompt = gr.Textbox(label='Prompt') - run_button = gr.Button(label='Run') - with gr.Accordion('Advanced options', open=False): - preprocessor_name = gr.Radio(label='Preprocessor', - choices=['Openpose', 'None'], - type='value', - value='Openpose') - num_samples = gr.Slider(label='Number of images', - minimum=1, - maximum=max_images, - value=default_num_images, - step=1) - image_resolution = gr.Slider(label='Image resolution', - minimum=256, - maximum=512, - value=512, - step=256) - preprocess_resolution = gr.Slider( - label='Preprocess resolution', - minimum=128, - maximum=512, - value=512, - step=1) - num_steps = gr.Slider(label='Number of steps', - minimum=1, - maximum=100, - value=20, - step=1) - guidance_scale = gr.Slider(label='Guidance scale', - minimum=0.1, - maximum=30.0, - value=9.0, - step=0.1) - seed = gr.Slider(label='Seed', - minimum=0, - maximum=1000000, - 
step=1, - value=0, - randomize=True) - randomize_seed = gr.Checkbox(label='Randomize seed', - value=True) - a_prompt = gr.Textbox( - label='Additional prompt', - value='best quality, extremely detailed') - n_prompt = gr.Textbox( - label='Negative prompt', - value= - 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality' - ) - with gr.Column(): - result = gr.Gallery(label='Output', show_label=False).style( - columns=2, object_fit='scale-down') - inputs = [ - image, - prompt, - a_prompt, - n_prompt, - num_samples, - image_resolution, - preprocess_resolution, - num_steps, - guidance_scale, - seed, - preprocessor_name, - ] - prompt.submit( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - ).then( - fn=process, - inputs=inputs, - outputs=result, - ) - run_button.click( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - ).then( - fn=process, - inputs=inputs, - outputs=result, - api_name='openpose', - ) - return demo - - -if __name__ == '__main__': - from model import Model - model = Model(task_name='Openpose') - demo = create_demo(model.process_openpose) - demo.queue().launch() diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/visualization/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/visualization/__init__.py deleted file mode 100644 index 4ff995c0861490941f8cfc19ebbd41a2ee7e2d65..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/core/visualization/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .image import (color_val_matplotlib, imshow_det_bboxes, - imshow_gt_det_bboxes) - -__all__ = ['imshow_det_bboxes', 'imshow_gt_det_bboxes', 'color_val_matplotlib'] diff --git a/spaces/Salesforce/BLIP/models/blip_retrieval.py b/spaces/Salesforce/BLIP/models/blip_retrieval.py deleted file mode 100644 index bc645f5ec3c2a17851bf6f54be6d97b1336b3c0a..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/BLIP/models/blip_retrieval.py +++ /dev/null @@ -1,322 +0,0 @@ -from models.med import BertConfig, BertModel -from transformers import BertTokenizer - -import torch -from torch import nn -import torch.nn.functional as F - -from models.blip import create_vit, init_tokenizer, load_checkpoint - -class BLIP_Retrieval(nn.Module): - def __init__(self, - med_config = 'configs/med_config.json', - image_size = 384, - vit = 'base', - vit_grad_ckpt = False, - vit_ckpt_layer = 0, - embed_dim = 256, - queue_size = 57600, - momentum = 0.995, - negative_all_rank = False, - ): - """ - Args: - med_config (str): path for the mixture of encoder-decoder model's configuration file - image_size (int): input image size - vit (str): model size of vision transformer - """ - super().__init__() - - self.visual_encoder, vision_width = create_vit(vit,image_size, vit_grad_ckpt, vit_ckpt_layer) - self.tokenizer = init_tokenizer() - med_config = BertConfig.from_json_file(med_config) - med_config.encoder_width = vision_width - self.text_encoder = BertModel(config=med_config, add_pooling_layer=False) - - text_width = self.text_encoder.config.hidden_size - - self.vision_proj = nn.Linear(vision_width, embed_dim) - self.text_proj = nn.Linear(text_width, embed_dim) - - self.itm_head = nn.Linear(text_width, 2) - - # create momentum encoders - self.visual_encoder_m, vision_width = create_vit(vit,image_size) - self.vision_proj_m = nn.Linear(vision_width, embed_dim) - self.text_encoder_m = 
BertModel(config=med_config, add_pooling_layer=False) - self.text_proj_m = nn.Linear(text_width, embed_dim) - - self.model_pairs = [[self.visual_encoder,self.visual_encoder_m], - [self.vision_proj,self.vision_proj_m], - [self.text_encoder,self.text_encoder_m], - [self.text_proj,self.text_proj_m], - ] - self.copy_params() - - # create the queue - self.register_buffer("image_queue", torch.randn(embed_dim, queue_size)) - self.register_buffer("text_queue", torch.randn(embed_dim, queue_size)) - self.register_buffer("idx_queue", torch.full((1,queue_size),-100)) - self.register_buffer("ptr_queue", torch.zeros(1, dtype=torch.long)) - - self.image_queue = nn.functional.normalize(self.image_queue, dim=0) - self.text_queue = nn.functional.normalize(self.text_queue, dim=0) - - self.queue_size = queue_size - self.momentum = momentum - self.temp = nn.Parameter(0.07*torch.ones([])) - - self.negative_all_rank = negative_all_rank - - - def forward(self, image, caption, alpha, idx): - with torch.no_grad(): - self.temp.clamp_(0.001,0.5) - - image_embeds = self.visual_encoder(image) - image_atts = torch.ones(image_embeds.size()[:-1],dtype=torch.long).to(image.device) - image_feat = F.normalize(self.vision_proj(image_embeds[:,0,:]),dim=-1) - - text = self.tokenizer(caption, padding='max_length', truncation=True, max_length=35, - return_tensors="pt").to(image.device) - - text_output = self.text_encoder(text.input_ids, attention_mask = text.attention_mask, - return_dict = True, mode = 'text') - text_feat = F.normalize(self.text_proj(text_output.last_hidden_state[:,0,:]),dim=-1) - - ###============== Image-text Contrastive Learning ===================### - idx = idx.view(-1,1) - idx_all = torch.cat([idx.t(), self.idx_queue.clone().detach()],dim=1) - pos_idx = torch.eq(idx, idx_all).float() - sim_targets = pos_idx / pos_idx.sum(1,keepdim=True) - - # get momentum features - with torch.no_grad(): - self._momentum_update() - image_embeds_m = self.visual_encoder_m(image) - image_feat_m = F.normalize(self.vision_proj_m(image_embeds_m[:,0,:]),dim=-1) - image_feat_m_all = torch.cat([image_feat_m.t(),self.image_queue.clone().detach()],dim=1) - - text_output_m = self.text_encoder_m(text.input_ids, attention_mask = text.attention_mask, - return_dict = True, mode = 'text') - text_feat_m = F.normalize(self.text_proj_m(text_output_m.last_hidden_state[:,0,:]),dim=-1) - text_feat_m_all = torch.cat([text_feat_m.t(),self.text_queue.clone().detach()],dim=1) - - sim_i2t_m = image_feat_m @ text_feat_m_all / self.temp - sim_t2i_m = text_feat_m @ image_feat_m_all / self.temp - - sim_targets = torch.zeros(sim_i2t_m.size()).to(image.device) - sim_targets.fill_diagonal_(1) - - sim_i2t_targets = alpha * F.softmax(sim_i2t_m, dim=1) + (1 - alpha) * sim_targets - sim_t2i_targets = alpha * F.softmax(sim_t2i_m, dim=1) + (1 - alpha) * sim_targets - - sim_i2t = image_feat @ text_feat_m_all / self.temp - sim_t2i = text_feat @ image_feat_m_all / self.temp - - loss_i2t = -torch.sum(F.log_softmax(sim_i2t, dim=1)*sim_i2t_targets,dim=1).mean() - loss_t2i = -torch.sum(F.log_softmax(sim_t2i, dim=1)*sim_t2i_targets,dim=1).mean() - - loss_ita = (loss_i2t+loss_t2i)/2 - - idxs = concat_all_gather(idx) - self._dequeue_and_enqueue(image_feat_m, text_feat_m, idxs) - - ###============== Image-text Matching ===================### - encoder_input_ids = text.input_ids.clone() - encoder_input_ids[:,0] = self.tokenizer.enc_token_id - - # forward the positve image-text pair - bs = image.size(0) - output_pos = self.text_encoder(encoder_input_ids, - attention_mask = 
text.attention_mask, - encoder_hidden_states = image_embeds, - encoder_attention_mask = image_atts, - return_dict = True, - ) - - - if self.negative_all_rank: - # compute sample similarity - with torch.no_grad(): - mask = torch.eq(idx, idxs.t()) - - image_feat_world = concat_all_gather(image_feat) - text_feat_world = concat_all_gather(text_feat) - - sim_i2t = image_feat @ text_feat_world.t() / self.temp - sim_t2i = text_feat @ image_feat_world.t() / self.temp - - weights_i2t = F.softmax(sim_i2t,dim=1) - weights_i2t.masked_fill_(mask, 0) - - weights_t2i = F.softmax(sim_t2i,dim=1) - weights_t2i.masked_fill_(mask, 0) - - image_embeds_world = all_gather_with_grad(image_embeds) - - # select a negative image (from all ranks) for each text - image_embeds_neg = [] - for b in range(bs): - neg_idx = torch.multinomial(weights_t2i[b], 1).item() - image_embeds_neg.append(image_embeds_world[neg_idx]) - image_embeds_neg = torch.stack(image_embeds_neg,dim=0) - - # select a negative text (from all ranks) for each image - input_ids_world = concat_all_gather(encoder_input_ids) - att_mask_world = concat_all_gather(text.attention_mask) - - text_ids_neg = [] - text_atts_neg = [] - for b in range(bs): - neg_idx = torch.multinomial(weights_i2t[b], 1).item() - text_ids_neg.append(input_ids_world[neg_idx]) - text_atts_neg.append(att_mask_world[neg_idx]) - - else: - with torch.no_grad(): - mask = torch.eq(idx, idx.t()) - - sim_i2t = image_feat @ text_feat.t() / self.temp - sim_t2i = text_feat @ image_feat.t() / self.temp - - weights_i2t = F.softmax(sim_i2t,dim=1) - weights_i2t.masked_fill_(mask, 0) - - weights_t2i = F.softmax(sim_t2i,dim=1) - weights_t2i.masked_fill_(mask, 0) - - # select a negative image (from same rank) for each text - image_embeds_neg = [] - for b in range(bs): - neg_idx = torch.multinomial(weights_t2i[b], 1).item() - image_embeds_neg.append(image_embeds[neg_idx]) - image_embeds_neg = torch.stack(image_embeds_neg,dim=0) - - # select a negative text (from same rank) for each image - text_ids_neg = [] - text_atts_neg = [] - for b in range(bs): - neg_idx = torch.multinomial(weights_i2t[b], 1).item() - text_ids_neg.append(encoder_input_ids[neg_idx]) - text_atts_neg.append(text.attention_mask[neg_idx]) - - text_ids_neg = torch.stack(text_ids_neg,dim=0) - text_atts_neg = torch.stack(text_atts_neg,dim=0) - - text_ids_all = torch.cat([encoder_input_ids, text_ids_neg],dim=0) - text_atts_all = torch.cat([text.attention_mask, text_atts_neg],dim=0) - - image_embeds_all = torch.cat([image_embeds_neg,image_embeds],dim=0) - image_atts_all = torch.cat([image_atts,image_atts],dim=0) - - output_neg = self.text_encoder(text_ids_all, - attention_mask = text_atts_all, - encoder_hidden_states = image_embeds_all, - encoder_attention_mask = image_atts_all, - return_dict = True, - ) - - - vl_embeddings = torch.cat([output_pos.last_hidden_state[:,0,:], output_neg.last_hidden_state[:,0,:]],dim=0) - vl_output = self.itm_head(vl_embeddings) - - itm_labels = torch.cat([torch.ones(bs,dtype=torch.long),torch.zeros(2*bs,dtype=torch.long)], - dim=0).to(image.device) - loss_itm = F.cross_entropy(vl_output, itm_labels) - - return loss_ita, loss_itm - - - @torch.no_grad() - def copy_params(self): - for model_pair in self.model_pairs: - for param, param_m in zip(model_pair[0].parameters(), model_pair[1].parameters()): - param_m.data.copy_(param.data) # initialize - param_m.requires_grad = False # not update by gradient - - - @torch.no_grad() - def _momentum_update(self): - for model_pair in self.model_pairs: - for param, param_m in 
zip(model_pair[0].parameters(), model_pair[1].parameters()): - param_m.data = param_m.data * self.momentum + param.data * (1. - self.momentum) - - - @torch.no_grad() - def _dequeue_and_enqueue(self, image_feat, text_feat, idxs): - # gather keys before updating queue - image_feats = concat_all_gather(image_feat) - text_feats = concat_all_gather(text_feat) - - - batch_size = image_feats.shape[0] - - ptr = int(self.ptr_queue) - assert self.queue_size % batch_size == 0 # for simplicity - - # replace the keys at ptr (dequeue and enqueue) - self.image_queue[:, ptr:ptr + batch_size] = image_feats.T - self.text_queue[:, ptr:ptr + batch_size] = text_feats.T - self.idx_queue[:, ptr:ptr + batch_size] = idxs.T - ptr = (ptr + batch_size) % self.queue_size # move pointer - - self.ptr_queue[0] = ptr - - -def blip_retrieval(pretrained='',**kwargs): - model = BLIP_Retrieval(**kwargs) - if pretrained: - model,msg = load_checkpoint(model,pretrained) - print("missing keys:") - print(msg.missing_keys) - return model - - -@torch.no_grad() -def concat_all_gather(tensor): - """ - Performs all_gather operation on the provided tensors. - *** Warning ***: torch.distributed.all_gather has no gradient. - """ - tensors_gather = [torch.ones_like(tensor) - for _ in range(torch.distributed.get_world_size())] - torch.distributed.all_gather(tensors_gather, tensor, async_op=False) - - output = torch.cat(tensors_gather, dim=0) - return output - - -class GatherLayer(torch.autograd.Function): - """ - Gather tensors from all workers with support for backward propagation: - This implementation does not cut the gradients as torch.distributed.all_gather does. - """ - - @staticmethod - def forward(ctx, x): - output = [torch.zeros_like(x) for _ in range(torch.distributed.get_world_size())] - torch.distributed.all_gather(output, x) - return tuple(output) - - @staticmethod - def backward(ctx, *grads): - all_gradients = torch.stack(grads) - torch.distributed.all_reduce(all_gradients) - return all_gradients[torch.distributed.get_rank()] - - -def all_gather_with_grad(tensors): - """ - Performs all_gather operation on the provided tensors. - Graph remains connected for backward grad computation. - """ - # Queue the gathered tensors - world_size = torch.distributed.get_world_size() - # There is no need for reduction in the single-proc case - if world_size == 1: - return tensors - - tensor_all = GatherLayer.apply(tensors) - - return torch.cat(tensor_all, dim=0) diff --git a/spaces/Sambhavnoobcoder/StyleForge/Utils.py b/spaces/Sambhavnoobcoder/StyleForge/Utils.py deleted file mode 100644 index 356faa8bb66430ee9640df40a5b53399dd02a9f1..0000000000000000000000000000000000000000 --- a/spaces/Sambhavnoobcoder/StyleForge/Utils.py +++ /dev/null @@ -1,88 +0,0 @@ -from diffusers import AutoencoderKL, UNet2DConditionModel, LMSDiscreteScheduler -from transformers import CLIPTextModel, CLIPTokenizer -from tqdm.auto import tqdm -from PIL import Image -import torch - -class MingleModel: - - def __init__(self): - # Set device - self.torch_device = "cuda" if torch.cuda.is_available() else "cpu" - # Load the autoencoder model which will be used to decode the latents into image space. - use_auth_token = "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" - self.vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae", - use_auth_token=use_auth_token).to(self.torch_device) - - # Load the tokenizer and text encoder to tokenize and encode the text. 
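        # (Stable Diffusion v1.x conditions its UNet on CLIP ViT-L/14 text embeddings,
        # which is why the matching CLIP tokenizer and text encoder are loaded next.)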
- self.tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14", use_auth_token=use_auth_token) - self.text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14", use_auth_token=use_auth_token).to(self.torch_device) - - # # The UNet model for generating the latents. - self.unet = UNet2DConditionModel.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="unet",use_auth_token=use_auth_token).to(self.torch_device) - - # The noise scheduler - self.scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", - num_train_timesteps=1000) - - def do_tokenizer(self, prompt): - return self.tokenizer([prompt], padding="max_length", max_length=self.tokenizer.model_max_length, truncation=True, - return_tensors="pt") - - def get_text_encoder(self, text_input): - return self.text_encoder(text_input.input_ids.to(self.torch_device))[0] - - def latents_to_pil(self, latents): - # bath of latents -> list of images - latents = (1 / 0.18215) * latents - with torch.no_grad(): - image = self.vae.decode(latents).sample - image = (image / 2 + 0.5).clamp(0, 1) - image = image.detach().cpu().permute(0, 2, 3, 1).numpy() - images = (image * 255).round().astype("uint8") - pil_images = [Image.fromarray(image) for image in images] - return pil_images - - def generate_with_embs(self, text_embeddings, generator_int=32, num_inference_steps=30, guidance_scale=7.5): - height = 512 # default height of Stable Diffusion - width = 512 # default width of Stable Diffusion - num_inference_steps = num_inference_steps # Number of denoising steps - guidance_scale = guidance_scale # Scale for classifier-free guidance - generator = torch.manual_seed(generator_int) # Seed generator to create the inital latent noise - batch_size = 1 - - max_length = 77 - uncond_input = self.tokenizer( - [""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt" - ) - with torch.no_grad(): - uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.torch_device))[0] - text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) - - # Prep Scheduler - self.scheduler.set_timesteps(num_inference_steps) - - # Prep latents - latents = torch.randn((batch_size, self.unet.in_channels, height // 8, width // 8), generator=generator) - latents = latents.to(self.torch_device) - latents = latents * self.scheduler.init_noise_sigma - - # Loop - for i, t in tqdm(enumerate(self.scheduler.timesteps)): - # expand the latents if we are doing classifier-free guidance to avoid doing two forward passes. 
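            # (the doubled batch yields the unconditional and the text-conditioned noise
            # prediction in a single UNet call; they are recombined below as
            # noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond))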
- latent_model_input = torch.cat([latents] * 2) - sigma = self.scheduler.sigmas[i] - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - with torch.no_grad(): - noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings)["sample"] - - # perform guidance - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents).prev_sample - - return self.latents_to_pil(latents)[0] \ No newline at end of file diff --git a/spaces/Sanathkumar1603/hackathon/app/config.py b/spaces/Sanathkumar1603/hackathon/app/config.py deleted file mode 100644 index 3260516aea6146a34d08bd8a3726e6731176c425..0000000000000000000000000000000000000000 --- a/spaces/Sanathkumar1603/hackathon/app/config.py +++ /dev/null @@ -1,25 +0,0 @@ -import sys -from typing import List - -from pydantic import AnyHttpUrl, BaseSettings - -class Settings(BaseSettings): - API_V1_STR: str = "/api/v1" - - # Meta - - # BACKEND_CORS_ORIGINS is a comma-separated list of origins - # e.g: http://localhost,http://localhost:4200,http://localhost:3000 - BACKEND_CORS_ORIGINS: List[AnyHttpUrl] = [ - "http://localhost:3000", # type: ignore - "http://localhost:8000", # type: ignore - "https://localhost:3000", # type: ignore - "https://localhost:8000", # type: ignore - ] - - PROJECT_NAME: str = "Recognition API" - - class Config: - case_sensitive = True - -settings = Settings() diff --git a/spaces/Sandiago21/text-to-speech-italian/app.py b/spaces/Sandiago21/text-to-speech-italian/app.py deleted file mode 100644 index 589aeb6f31a40a7ba159ed9102b4d1119db14f15..0000000000000000000000000000000000000000 --- a/spaces/Sandiago21/text-to-speech-italian/app.py +++ /dev/null @@ -1,58 +0,0 @@ -import gradio as gr -import torch -from datasets import load_dataset -from transformers import pipeline, SpeechT5Processor, SpeechT5HifiGan, SpeechT5ForTextToSpeech - -model_id = "Sandiago21/speecht5_finetuned_voxpopuli_it" # update with your model id -# pipe = pipeline("automatic-speech-recognition", model=model_id) -model = SpeechT5ForTextToSpeech.from_pretrained(model_id) -vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan") -embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation") -speaker_embeddings = torch.tensor(embeddings_dataset[7440]["xvector"]).unsqueeze(0) - -checkpoint = "microsoft/speecht5_tts" -processor = SpeechT5Processor.from_pretrained(checkpoint) - -replacements = [ - ("á", "a"), - ("ç", "c"), - ("è", "e"), - ("ì", "i"), - ("í", "i"), - ("ò", "o"), - ("ó", "o"), - ("ù", "u"), - ("ú", "u"), - ("š", "s"), - ("ï", "i"), -] - -title = "Text-to-Speech" -description = """ -Demo for text-to-speech translation in Italian. 
Demo uses [Sandiago21/speecht5_finetuned_voxpopuli_it](https://huggingface.co/Sandiago21/speecht5_finetuned_voxpopuli_it) checkpoint, which is based on Microsoft's -[SpeechT5 TTS](https://huggingface.co/microsoft/speecht5_tts) model and is fine-tuned in Italian Audio dataset -![Text-to-Speech (TTS)"](https://geekflare.com/wp-content/uploads/2021/07/texttospeech-1200x385.png "Diagram of Text-to-Speech (TTS)") -""" - -def cleanup_text(text): - for src, dst in replacements: - text = text.replace(src, dst) - return text - -def synthesize_speech(text): - text = cleanup_text(text) - inputs = processor(text=text, return_tensors="pt") - - speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder) - - return gr.Audio.update(value=(16000, speech.cpu().numpy())) - -syntesize_speech_gradio = gr.Interface( - synthesize_speech, - inputs = gr.Textbox(label="Text", placeholder="Type something here..."), - outputs=gr.Audio(), - examples=["Grandi manifestazioni in polonia al momento della firma dell'accordo che hanno portato il governo polacco a decidere di non ratificare l'accordo per il momento"], - title=title, - description=description, -).launch() - diff --git a/spaces/Sarath2002/YouTube_Video_Summarizer/app.py b/spaces/Sarath2002/YouTube_Video_Summarizer/app.py deleted file mode 100644 index e98007a7cdb2bd541a51e0d59de04079f7546a07..0000000000000000000000000000000000000000 --- a/spaces/Sarath2002/YouTube_Video_Summarizer/app.py +++ /dev/null @@ -1,57 +0,0 @@ -import gradio as gr -from support import * -import os -import nltk -nltk.download('punkt') -os.system("pip install -r requirements.txt") - -description = """ -Repo on Github --> YouTube_Summarizer_using_BART --> https://github.com/Sarath1729-2002/YouTube_Summarizer_using_BART - - -🎯 Input the link of the video to be summarized, and click ''submit''' to get the summary - - -⌛️ It takes about 2 to 15 mins to generate detection results, depending on the video size. 
Best to use videos shorter than 10 mins for optimum efficiency - - -🏠 Uses extractive summarisation by TextRank in step 1 to get the top-priority sentences, then using BART summarizer, this intermediate summary is then summarised within the given max_length - - -Try these sample videos if in a hurry 😊 - ---> https://youtu.be/s4g1XFU8Gto ---> https://youtu.be/L5YHlH0f4Uw - -""" - - -def initialize(link, max_length,min_length): - - path = "plaintext.txt" - url = link - - video = get_vidid(url) - vid_transcript(video) - - temp_summ = ext_summarizer(path) - summ = abs_summarizer(temp_summ, max_length) - - return summ - - - - - -interface = gr.Interface( - inputs=[ - gr.inputs.Textbox(lines=2,label="YouTube Link"), - gr.Slider(minimum=200,maximum=500,default=350,label="Max Length") - ], - outputs=[ - gr.outputs.Textbox(label="Summarised YouTube Video "), - - ], - fn=initialize, - description=description) -interface.launch() \ No newline at end of file diff --git a/spaces/Saturdays/spanish-quechua-detector/app.py b/spaces/Saturdays/spanish-quechua-detector/app.py deleted file mode 100644 index 84ab50ee4284bcc9db2b405d91f537b41c0cf714..0000000000000000000000000000000000000000 --- a/spaces/Saturdays/spanish-quechua-detector/app.py +++ /dev/null @@ -1,32 +0,0 @@ - -import pickle -import gradio as gr -from classifier import Classifier - -def classify(txt): - with open('classifier.pickle', 'rb') as f: - classifier = pickle.load(f) - return classifier.predict(txt) - - -title = 'Detector de Quechua y Español' -description =( 'Bolivia lucha para que no desaparezcan los idiomas indígenas, sin embargo,' + - 'es aún muy complicado acceder a recursos que ayuden a su asimilación y aprendizaje.' + - 'Presentamos una herramienta de clasificación de idiomas, que si bien es una tarea sencilla, ' + - 'resulta esencial para realizar tareas más complejas como la traducción automática.' - ) -article = 'Demo del proyecto para Saturdays.\nAutores del modelo: Cota V. Andreina, Cusi L. Evelyn, Nina M. 
Juan Dilan' - -iface = gr.Interface( - fn=classify, - inputs= gr.inputs.Textbox(lines=3, label='TEXTO', placeholder='Introduzca un texto'), - outputs= gr.outputs.Textbox(label='IDIOMA'), - examples = ['¿Maytaq ashkallanchikega?', 'Entonces el Inka dijo ¡Mach\'a!', '¡Aragan kanki wamraqa!', 'Señora, ¿yanapariwayta atiwaqchu?', '¿A dónde vas?'], - description = description, - title = title, - article = article, - theme = 'peach' - ) - -iface.launch() - diff --git a/spaces/ServerX/PorcoDiaz/infer/lib/train/utils.py b/spaces/ServerX/PorcoDiaz/infer/lib/train/utils.py deleted file mode 100644 index dd965fc4dd2af09e445a7f625f2681460874da7a..0000000000000000000000000000000000000000 --- a/spaces/ServerX/PorcoDiaz/infer/lib/train/utils.py +++ /dev/null @@ -1,478 +0,0 @@ -import argparse -import glob -import json -import logging -import os -import subprocess -import sys -import shutil - -import numpy as np -import torch -from scipy.io.wavfile import read - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - - -def load_checkpoint_d(checkpoint_path, combd, sbd, optimizer=None, load_opt=1): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location="cpu") - - ################## - def go(model, bkey): - saved_state_dict = checkpoint_dict[bkey] - if hasattr(model, "module"): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): # 模型需要的shape - try: - new_state_dict[k] = saved_state_dict[k] - if saved_state_dict[k].shape != state_dict[k].shape: - logger.warn( - "shape-%s-mismatch. need: %s, get: %s", - k, - state_dict[k].shape, - saved_state_dict[k].shape, - ) # - raise KeyError - except: - # logger.info(traceback.format_exc()) - logger.info("%s is not in the checkpoint", k) # pretrain缺失的 - new_state_dict[k] = v # 模型自带的随机值 - if hasattr(model, "module"): - model.module.load_state_dict(new_state_dict, strict=False) - else: - model.load_state_dict(new_state_dict, strict=False) - return model - - go(combd, "combd") - model = go(sbd, "sbd") - ############# - logger.info("Loaded model weights") - - iteration = checkpoint_dict["iteration"] - learning_rate = checkpoint_dict["learning_rate"] - if ( - optimizer is not None and load_opt == 1 - ): ###加载不了,如果是空的的话,重新初始化,可能还会影响lr时间表的更新,因此在train文件最外围catch - # try: - optimizer.load_state_dict(checkpoint_dict["optimizer"]) - # except: - # traceback.print_exc() - logger.info("Loaded checkpoint '{}' (epoch {})".format(checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -# def load_checkpoint(checkpoint_path, model, optimizer=None): -# assert os.path.isfile(checkpoint_path) -# checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') -# iteration = checkpoint_dict['iteration'] -# learning_rate = checkpoint_dict['learning_rate'] -# if optimizer is not None: -# optimizer.load_state_dict(checkpoint_dict['optimizer']) -# # print(1111) -# saved_state_dict = checkpoint_dict['model'] -# # print(1111) -# -# if hasattr(model, 'module'): -# state_dict = model.module.state_dict() -# else: -# state_dict = model.state_dict() -# new_state_dict= {} -# for k, v in state_dict.items(): -# try: -# new_state_dict[k] = saved_state_dict[k] -# except: -# logger.info("%s is not in the checkpoint" % k) -# new_state_dict[k] = v -# if hasattr(model, 'module'): -# model.module.load_state_dict(new_state_dict) -# else: -# model.load_state_dict(new_state_dict) -# logger.info("Loaded 
checkpoint '{}' (epoch {})" .format( -# checkpoint_path, iteration)) -# return model, optimizer, learning_rate, iteration -def load_checkpoint(checkpoint_path, model, optimizer=None, load_opt=1): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location="cpu") - - saved_state_dict = checkpoint_dict["model"] - if hasattr(model, "module"): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): # 模型需要的shape - try: - new_state_dict[k] = saved_state_dict[k] - if saved_state_dict[k].shape != state_dict[k].shape: - logger.warn( - "shape-%s-mismatch|need-%s|get-%s", - k, - state_dict[k].shape, - saved_state_dict[k].shape, - ) # - raise KeyError - except: - # logger.info(traceback.format_exc()) - logger.info("%s is not in the checkpoint", k) # pretrain缺失的 - new_state_dict[k] = v # 模型自带的随机值 - if hasattr(model, "module"): - model.module.load_state_dict(new_state_dict, strict=False) - else: - model.load_state_dict(new_state_dict, strict=False) - logger.info("Loaded model weights") - - iteration = checkpoint_dict["iteration"] - learning_rate = checkpoint_dict["learning_rate"] - if ( - optimizer is not None and load_opt == 1 - ): ###加载不了,如果是空的的话,重新初始化,可能还会影响lr时间表的更新,因此在train文件最外围catch - # try: - optimizer.load_state_dict(checkpoint_dict["optimizer"]) - # except: - # traceback.print_exc() - logger.info("Loaded checkpoint '{}' (epoch {})".format(checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - logger.info( - "Saving model and optimizer state at epoch {} to {}".format( - iteration, checkpoint_path - ) - ) - if hasattr(model, "module"): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save( - { - "model": state_dict, - "iteration": iteration, - "optimizer": optimizer.state_dict(), - "learning_rate": learning_rate, - }, - checkpoint_path, - ) - - -def save_checkpoint_d(combd, sbd, optimizer, learning_rate, iteration, checkpoint_path): - logger.info( - "Saving model and optimizer state at epoch {} to {}".format( - iteration, checkpoint_path - ) - ) - if hasattr(combd, "module"): - state_dict_combd = combd.module.state_dict() - else: - state_dict_combd = combd.state_dict() - if hasattr(sbd, "module"): - state_dict_sbd = sbd.module.state_dict() - else: - state_dict_sbd = sbd.state_dict() - torch.save( - { - "combd": state_dict_combd, - "sbd": state_dict_sbd, - "iteration": iteration, - "optimizer": optimizer.state_dict(), - "learning_rate": learning_rate, - }, - checkpoint_path, - ) - - -def summarize( - writer, - global_step, - scalars={}, - histograms={}, - images={}, - audios={}, - audio_sampling_rate=22050, -): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats="HWC") - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - logger.debug(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - 
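        # quiet matplotlib's own logger so its DEBUG output does not flood the training log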
mpl_logger = logging.getLogger("matplotlib") - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", interpolation="none") - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep="") - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger("matplotlib") - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow( - alignment.transpose(), aspect="auto", origin="lower", interpolation="none" - ) - fig.colorbar(im, ax=ax) - xlabel = "Decoder timestep" - if info is not None: - xlabel += "\n\n" + info - plt.xlabel(xlabel) - plt.ylabel("Encoder timestep") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep="") - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding="utf-8") as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - """ - todo: - 结尾七人组: - 保存频率、总epoch done - bs done - pretrainG、pretrainD done - 卡号:os.en["CUDA_VISIBLE_DEVICES"] done - if_latest done - 模型:if_f0 done - 采样率:自动选择config done - 是否缓存数据集进GPU:if_cache_data_in_gpu done - - -m: - 自动决定training_files路径,改掉train_nsf_load_pretrain.py里的hps.data.training_files done - -c不要了 - """ - parser = argparse.ArgumentParser() - parser.add_argument( - "-se", - "--save_every_epoch", - type=int, - required=True, - help="checkpoint save frequency (epoch)", - ) - parser.add_argument( - "-te", "--total_epoch", type=int, required=True, help="total_epoch" - ) - parser.add_argument( - "-pg", "--pretrainG", type=str, default="", help="Pretrained Discriminator path" - ) - parser.add_argument( - "-pd", "--pretrainD", type=str, default="", help="Pretrained Generator path" - ) - parser.add_argument("-g", "--gpus", type=str, default="0", help="split by -") - parser.add_argument( - "-bs", "--batch_size", type=int, required=True, help="batch size" - ) - parser.add_argument( - "-e", "--experiment_dir", type=str, required=True, help="experiment dir" - ) # -m - parser.add_argument( - "-sr", "--sample_rate", type=str, required=True, help="sample rate, 32k/40k/48k" - ) - parser.add_argument( - "-sw", - "--save_every_weights", - type=str, - default="0", - help="save the extracted model in weights directory when saving checkpoints", - ) - parser.add_argument( - "-v", "--version", type=str, required=True, help="model version" - ) - parser.add_argument( - "-f0", - "--if_f0", - type=int, - required=True, - help="use f0 as one of the inputs of the model, 1 or 0", - ) - parser.add_argument( - "-l", - "--if_latest", - type=int, - required=True, - help="if only save the latest G/D pth file, 1 or 0", - ) - parser.add_argument( - "-c", - "--if_cache_data_in_gpu", - type=int, - 
required=True, - help="if caching the dataset in GPU memory, 1 or 0", - ) - - args = parser.parse_args() - name = args.experiment_dir - experiment_dir = os.path.join("./logs", args.experiment_dir) - - config_save_path = os.path.join(experiment_dir, "config.json") - with open(config_save_path, "r") as f: - config = json.load(f) - - hparams = HParams(**config) - hparams.model_dir = hparams.experiment_dir = experiment_dir - hparams.save_every_epoch = args.save_every_epoch - hparams.name = name - hparams.total_epoch = args.total_epoch - hparams.pretrainG = args.pretrainG - hparams.pretrainD = args.pretrainD - hparams.version = args.version - hparams.gpus = args.gpus - hparams.train.batch_size = args.batch_size - hparams.sample_rate = args.sample_rate - hparams.if_f0 = args.if_f0 - hparams.if_latest = args.if_latest - hparams.save_every_weights = args.save_every_weights - hparams.if_cache_data_in_gpu = args.if_cache_data_in_gpu - hparams.data.training_files = "%s/filelist.txt" % experiment_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn( - "{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - ) - ) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn( - "git hash values are different. 
{}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8] - ) - ) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams: - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/Sky5408er/vits-uma-genshin-honkai/commons.py b/spaces/Sky5408er/vits-uma-genshin-honkai/commons.py deleted file mode 100644 index 40fcc05364d4815971f5c6f9dbb8dcef8e3ec1e9..0000000000000000000000000000000000000000 --- a/spaces/Sky5408er/vits-uma-genshin-honkai/commons.py +++ /dev/null @@ -1,172 +0,0 @@ -import math -import torch -from torch.nn import functional as F -import torch.jit - - -def script_method(fn, _rcb=None): - return fn - - -def script(obj, optimize=True, _frames_up=0, _rcb=None): - return obj - - -torch.jit.script_method = script_method -torch.jit.script = script - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. 
* logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): 
- parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/StarCore/PaddleOCR/README.md b/spaces/StarCore/PaddleOCR/README.md deleted file mode 100644 index ee94dfe80795148e95618b395adc5b1f5d699d78..0000000000000000000000000000000000000000 --- a/spaces/StarCore/PaddleOCR/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: PaddleOCR -emoji: 😻 -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.40.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/StarFox7/Llama-2-ko-7B-chat-ggml/run-app.sh b/spaces/StarFox7/Llama-2-ko-7B-chat-ggml/run-app.sh deleted file mode 100644 index cc559b4b3edfbefa8e7b5d8d0f233065da316cf8..0000000000000000000000000000000000000000 --- a/spaces/StarFox7/Llama-2-ko-7B-chat-ggml/run-app.sh +++ /dev/null @@ -1 +0,0 @@ -nodemon -w app.py -x python app.py \ No newline at end of file diff --git a/spaces/SuYuanS/AudioCraft_Plus/tests/data/test_audio_dataset.py b/spaces/SuYuanS/AudioCraft_Plus/tests/data/test_audio_dataset.py deleted file mode 100644 index b591ea6137f48d0d97fcd1243c5f5d258670a474..0000000000000000000000000000000000000000 --- a/spaces/SuYuanS/AudioCraft_Plus/tests/data/test_audio_dataset.py +++ /dev/null @@ -1,352 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from functools import partial -from itertools import product -import json -import math -import os -import random -import typing as tp - -import pytest -import torch -from torch.utils.data import DataLoader - -from audiocraft.data.audio_dataset import ( - AudioDataset, - AudioMeta, - _get_audio_meta, - load_audio_meta, - save_audio_meta -) -from audiocraft.data.zip import PathInZip - -from ..common_utils import TempDirMixin, get_white_noise, save_wav - - -class TestAudioMeta(TempDirMixin): - - def test_get_audio_meta(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. 
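        # write a short white-noise wav for every (sample_rate, channels) pair below and
        # check that the minimal metadata read back matches what was written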
- for sample_rate, ch in product(sample_rates, channels): - n_frames = int(duration * sample_rate) - wav = get_white_noise(ch, n_frames) - path = self.get_temp_path('sample.wav') - save_wav(path, wav, sample_rate) - m = _get_audio_meta(path, minimal=True) - assert m.path == path, 'path does not match' - assert m.sample_rate == sample_rate, 'sample rate does not match' - assert m.duration == duration, 'duration does not match' - assert m.amplitude is None - assert m.info_path is None - - def test_save_audio_meta(self): - audio_meta = [ - AudioMeta("mypath1", 1., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file1.json')), - AudioMeta("mypath2", 2., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file2.json')) - ] - empty_audio_meta = [] - for idx, meta in enumerate([audio_meta, empty_audio_meta]): - path = self.get_temp_path(f'data_{idx}_save.jsonl') - save_audio_meta(path, meta) - with open(path, 'r') as f: - lines = f.readlines() - read_meta = [AudioMeta.from_dict(json.loads(line)) for line in lines] - assert len(read_meta) == len(meta) - for m, read_m in zip(meta, read_meta): - assert m == read_m - - def test_load_audio_meta(self): - try: - import dora - except ImportError: - dora = None # type: ignore - - audio_meta = [ - AudioMeta("mypath1", 1., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file1.json')), - AudioMeta("mypath2", 2., 16_000, None, None, PathInZip('/foo/bar.zip:/relative/file2.json')) - ] - empty_meta = [] - for idx, meta in enumerate([audio_meta, empty_meta]): - path = self.get_temp_path(f'data_{idx}_load.jsonl') - with open(path, 'w') as f: - for m in meta: - json_str = json.dumps(m.to_dict()) + '\n' - f.write(json_str) - read_meta = load_audio_meta(path) - assert len(read_meta) == len(meta) - for m, read_m in zip(meta, read_meta): - if dora: - m.path = dora.git_save.to_absolute_path(m.path) - assert m == read_m, f'original={m}, read={read_m}' - - -class TestAudioDataset(TempDirMixin): - - def _create_audio_files(self, - root_name: str, - num_examples: int, - durations: tp.Union[float, tp.Tuple[float, float]] = (0.1, 1.), - sample_rate: int = 16_000, - channels: int = 1): - root_dir = self.get_temp_dir(root_name) - for i in range(num_examples): - if isinstance(durations, float): - duration = durations - elif isinstance(durations, tuple) and len(durations) == 1: - duration = durations[0] - elif isinstance(durations, tuple) and len(durations) == 2: - duration = random.uniform(durations[0], durations[1]) - else: - assert False - n_frames = int(duration * sample_rate) - wav = get_white_noise(channels, n_frames) - path = os.path.join(root_dir, f'example_{i}.wav') - save_wav(path, wav, sample_rate) - return root_dir - - def _create_audio_dataset(self, - root_name: str, - total_num_examples: int, - durations: tp.Union[float, tp.Tuple[float, float]] = (0.1, 1.), - sample_rate: int = 16_000, - channels: int = 1, - segment_duration: tp.Optional[float] = None, - num_examples: int = 10, - shuffle: bool = True, - return_info: bool = False): - root_dir = self._create_audio_files(root_name, total_num_examples, durations, sample_rate, channels) - dataset = AudioDataset.from_path(root_dir, - minimal_meta=True, - segment_duration=segment_duration, - num_samples=num_examples, - sample_rate=sample_rate, - channels=channels, - shuffle=shuffle, - return_info=return_info) - return dataset - - def test_dataset_full(self): - total_examples = 10 - min_duration, max_duration = 1., 4. 
- sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), - sample_rate=sample_rate, channels=channels, segment_duration=None) - assert len(dataset) == total_examples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] <= int(max_duration * sample_rate) - assert sample.shape[1] >= int(min_duration * sample_rate) - - def test_dataset_segment(self): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples) - assert len(dataset) == num_samples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] == int(segment_duration * sample_rate) - - def test_dataset_equal_audio_and_segment_durations(self): - total_examples = 1 - num_samples = 2 - audio_duration = 1. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=audio_duration, sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples) - assert len(dataset) == num_samples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] == int(segment_duration * sample_rate) - # the random seek_time adds variability on audio read - sample_1 = dataset[0] - sample_2 = dataset[1] - assert not torch.allclose(sample_1, sample_2) - - def test_dataset_samples(self): - total_examples = 1 - num_samples = 2 - audio_duration = 1. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - - create_dataset = partial( - self._create_audio_dataset, - 'dset', total_examples, durations=audio_duration, sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, - ) - - dataset = create_dataset(shuffle=True) - # when shuffle = True, we have different inputs for the same index across epoch - sample_1 = dataset[0] - sample_2 = dataset[0] - assert not torch.allclose(sample_1, sample_2) - - dataset_noshuffle = create_dataset(shuffle=False) - # when shuffle = False, we have same inputs for the same index across epoch - sample_1 = dataset_noshuffle[0] - sample_2 = dataset_noshuffle[0] - assert torch.allclose(sample_1, sample_2) - - def test_dataset_return_info(self): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = 1. 
- sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True) - assert len(dataset) == num_samples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample, segment_info = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] == int(segment_duration * sample_rate) - assert segment_info.sample_rate == sample_rate - assert segment_info.total_frames == int(segment_duration * sample_rate) - assert segment_info.n_frames <= int(segment_duration * sample_rate) - assert segment_info.seek_time >= 0 - - def test_dataset_return_info_no_segment_duration(self): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = None - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True) - assert len(dataset) == total_examples - assert dataset.sample_rate == sample_rate - assert dataset.channels == channels - for idx in range(len(dataset)): - sample, segment_info = dataset[idx] - assert sample.shape[0] == channels - assert sample.shape[1] == segment_info.total_frames - assert segment_info.sample_rate == sample_rate - assert segment_info.n_frames <= segment_info.total_frames - - def test_dataset_collate_fn(self): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=False) - batch_size = 4 - dataloader = DataLoader( - dataset, - batch_size=batch_size, - num_workers=0 - ) - for idx, batch in enumerate(dataloader): - assert batch.shape[0] == batch_size - - @pytest.mark.parametrize("segment_duration", [1.0, None]) - def test_dataset_with_meta_collate_fn(self, segment_duration): - total_examples = 10 - num_samples = 20 - min_duration, max_duration = 1., 4. - segment_duration = 1. - sample_rate = 16_000 - channels = 1 - dataset = self._create_audio_dataset( - 'dset', total_examples, durations=(min_duration, max_duration), sample_rate=sample_rate, - channels=channels, segment_duration=segment_duration, num_examples=num_samples, return_info=True) - batch_size = 4 - dataloader = DataLoader( - dataset, - batch_size=batch_size, - collate_fn=dataset.collater, - num_workers=0 - ) - for idx, batch in enumerate(dataloader): - wav, infos = batch - assert wav.shape[0] == batch_size - assert len(infos) == batch_size - - @pytest.mark.parametrize("segment_duration,sample_on_weight,sample_on_duration,a_hist,b_hist,c_hist", [ - [1, True, True, 0.5, 0.5, 0.0], - [1, False, True, 0.25, 0.5, 0.25], - [1, True, False, 0.666, 0.333, 0.0], - [1, False, False, 0.333, 0.333, 0.333], - [None, False, False, 0.333, 0.333, 0.333]]) - def test_sample_with_weight(self, segment_duration, sample_on_weight, sample_on_duration, a_hist, b_hist, c_hist): - random.seed(1234) - rng = torch.Generator() - rng.manual_seed(1234) - - def _get_histogram(dataset, repetitions=20_000): - counts = {file_meta.path: 0. 
for file_meta in meta} - for _ in range(repetitions): - file_meta = dataset.sample_file(0, rng) - counts[file_meta.path] += 1 - return {name: count / repetitions for name, count in counts.items()} - - meta = [ - AudioMeta(path='a', duration=5, sample_rate=1, weight=2), - AudioMeta(path='b', duration=10, sample_rate=1, weight=None), - AudioMeta(path='c', duration=5, sample_rate=1, weight=0), - ] - dataset = AudioDataset( - meta, segment_duration=segment_duration, sample_on_weight=sample_on_weight, - sample_on_duration=sample_on_duration) - hist = _get_histogram(dataset) - assert math.isclose(hist['a'], a_hist, abs_tol=0.01) - assert math.isclose(hist['b'], b_hist, abs_tol=0.01) - assert math.isclose(hist['c'], c_hist, abs_tol=0.01) - - def test_meta_duration_filter_all(self): - meta = [ - AudioMeta(path='a', duration=5, sample_rate=1, weight=2), - AudioMeta(path='b', duration=10, sample_rate=1, weight=None), - AudioMeta(path='c', duration=5, sample_rate=1, weight=0), - ] - try: - AudioDataset(meta, segment_duration=11, min_segment_ratio=1) - assert False - except AssertionError: - assert True - - def test_meta_duration_filter_long(self): - meta = [ - AudioMeta(path='a', duration=5, sample_rate=1, weight=2), - AudioMeta(path='b', duration=10, sample_rate=1, weight=None), - AudioMeta(path='c', duration=5, sample_rate=1, weight=0), - ] - dataset = AudioDataset(meta, segment_duration=None, min_segment_ratio=1, max_audio_duration=7) - assert len(dataset) == 2 diff --git a/spaces/Sumit7864/Image-Enhancer/realesrgan/train.py b/spaces/Sumit7864/Image-Enhancer/realesrgan/train.py deleted file mode 100644 index 8a9cec9ed80d9f362984779548dcec921a636a04..0000000000000000000000000000000000000000 --- a/spaces/Sumit7864/Image-Enhancer/realesrgan/train.py +++ /dev/null @@ -1,11 +0,0 @@ -# flake8: noqa -import os.path as osp -from basicsr.train import train_pipeline - -import realesrgan.archs -import realesrgan.data -import realesrgan.models - -if __name__ == '__main__': - root_path = osp.abspath(osp.join(__file__, osp.pardir, osp.pardir)) - train_pipeline(root_path) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/ImageSequence.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/ImageSequence.py deleted file mode 100644 index c4bb6334acfde7d245c5bb1722b7c2381661e4ca..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/ImageSequence.py +++ /dev/null @@ -1,76 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# sequence support classes -# -# history: -# 1997-02-20 fl Created -# -# Copyright (c) 1997 by Secret Labs AB. -# Copyright (c) 1997 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# - -## - - -class Iterator: - """ - This class implements an iterator object that can be used to loop - over an image sequence. - - You can use the ``[]`` operator to access elements by index. This operator - will raise an :py:exc:`IndexError` if you try to access a nonexistent - frame. - - :param im: An image object. 
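
    A minimal usage sketch (``animation.gif`` is only an assumed example file)::

        from PIL import Image, ImageSequence

        with Image.open("animation.gif") as im:
            for frame in ImageSequence.Iterator(im):
                frame.convert("RGB")  # each yielded frame is `im` seeked to the next index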
- """ - - def __init__(self, im): - if not hasattr(im, "seek"): - msg = "im must have seek method" - raise AttributeError(msg) - self.im = im - self.position = getattr(self.im, "_min_frame", 0) - - def __getitem__(self, ix): - try: - self.im.seek(ix) - return self.im - except EOFError as e: - raise IndexError from e # end of sequence - - def __iter__(self): - return self - - def __next__(self): - try: - self.im.seek(self.position) - self.position += 1 - return self.im - except EOFError as e: - raise StopIteration from e - - -def all_frames(im, func=None): - """ - Applies a given function to all frames in an image or a list of images. - The frames are returned as a list of separate images. - - :param im: An image, or a list of images. - :param func: The function to apply to all of the image frames. - :returns: A list of images. - """ - if not isinstance(im, list): - im = [im] - - ims = [] - for imSequence in im: - current = imSequence.tell() - - ims += [im_frame.copy() for im_frame in Iterator(imSequence)] - - imSequence.seek(current) - return [func(im) for im in ims] if func else ims diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/ImageStat.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/ImageStat.py deleted file mode 100644 index b7ebddf066ab6eb115a79d6bc34e31ab0c1569bd..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/ImageStat.py +++ /dev/null @@ -1,148 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# global image statistics -# -# History: -# 1996-04-05 fl Created -# 1997-05-21 fl Added mask; added rms, var, stddev attributes -# 1997-08-05 fl Added median -# 1998-07-05 hk Fixed integer overflow error -# -# Notes: -# This class shows how to implement delayed evaluation of attributes. -# To get a certain value, simply access the corresponding attribute. -# The __getattr__ dispatcher takes care of the rest. -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1996-97. -# -# See the README file for information on usage and redistribution. 
-# - -import functools -import math -import operator - - -class Stat: - def __init__(self, image_or_list, mask=None): - try: - if mask: - self.h = image_or_list.histogram(mask) - else: - self.h = image_or_list.histogram() - except AttributeError: - self.h = image_or_list # assume it to be a histogram list - if not isinstance(self.h, list): - msg = "first argument must be image or list" - raise TypeError(msg) - self.bands = list(range(len(self.h) // 256)) - - def __getattr__(self, id): - """Calculate missing attribute""" - if id[:4] == "_get": - raise AttributeError(id) - # calculate missing attribute - v = getattr(self, "_get" + id)() - setattr(self, id, v) - return v - - def _getextrema(self): - """Get min/max values for each band in the image""" - - def minmax(histogram): - n = 255 - x = 0 - for i in range(256): - if histogram[i]: - n = min(n, i) - x = max(x, i) - return n, x # returns (255, 0) if there's no data in the histogram - - v = [] - for i in range(0, len(self.h), 256): - v.append(minmax(self.h[i:])) - return v - - def _getcount(self): - """Get total number of pixels in each layer""" - - v = [] - for i in range(0, len(self.h), 256): - v.append(functools.reduce(operator.add, self.h[i : i + 256])) - return v - - def _getsum(self): - """Get sum of all pixels in each layer""" - - v = [] - for i in range(0, len(self.h), 256): - layer_sum = 0.0 - for j in range(256): - layer_sum += j * self.h[i + j] - v.append(layer_sum) - return v - - def _getsum2(self): - """Get squared sum of all pixels in each layer""" - - v = [] - for i in range(0, len(self.h), 256): - sum2 = 0.0 - for j in range(256): - sum2 += (j**2) * float(self.h[i + j]) - v.append(sum2) - return v - - def _getmean(self): - """Get average pixel level for each layer""" - - v = [] - for i in self.bands: - v.append(self.sum[i] / self.count[i]) - return v - - def _getmedian(self): - """Get median pixel level for each layer""" - - v = [] - for i in self.bands: - s = 0 - half = self.count[i] // 2 - b = i * 256 - for j in range(256): - s = s + self.h[b + j] - if s > half: - break - v.append(j) - return v - - def _getrms(self): - """Get RMS for each layer""" - - v = [] - for i in self.bands: - v.append(math.sqrt(self.sum2[i] / self.count[i])) - return v - - def _getvar(self): - """Get variance for each layer""" - - v = [] - for i in self.bands: - n = self.count[i] - v.append((self.sum2[i] - (self.sum[i] ** 2.0) / n) / n) - return v - - def _getstddev(self): - """Get standard deviation for each layer""" - - v = [] - for i in self.bands: - v.append(math.sqrt(self.var[i])) - return v - - -Global = Stat # compatibility diff --git a/spaces/Suniilkumaar/SwapMukham/upscaler/RealESRGAN/utils.py b/spaces/Suniilkumaar/SwapMukham/upscaler/RealESRGAN/utils.py deleted file mode 100644 index 3e7cfd184c53e60cbb98234c24534076c6b94378..0000000000000000000000000000000000000000 --- a/spaces/Suniilkumaar/SwapMukham/upscaler/RealESRGAN/utils.py +++ /dev/null @@ -1,133 +0,0 @@ -import numpy as np -import torch -from PIL import Image -import os -import io - -def pad_reflect(image, pad_size): - imsize = image.shape - height, width = imsize[:2] - new_img = np.zeros([height+pad_size*2, width+pad_size*2, imsize[2]]).astype(np.uint8) - new_img[pad_size:-pad_size, pad_size:-pad_size, :] = image - - new_img[0:pad_size, pad_size:-pad_size, :] = np.flip(image[0:pad_size, :, :], axis=0) #top - new_img[-pad_size:, pad_size:-pad_size, :] = np.flip(image[-pad_size:, :, :], axis=0) #bottom - new_img[:, 0:pad_size, :] = np.flip(new_img[:, pad_size:pad_size*2, :], axis=1) 
#left - new_img[:, -pad_size:, :] = np.flip(new_img[:, -pad_size*2:-pad_size, :], axis=1) #right - - return new_img - -def unpad_image(image, pad_size): - return image[pad_size:-pad_size, pad_size:-pad_size, :] - - -def process_array(image_array, expand=True): - """ Process a 3-dimensional array into a scaled, 4 dimensional batch of size 1. """ - - image_batch = image_array / 255.0 - if expand: - image_batch = np.expand_dims(image_batch, axis=0) - return image_batch - - -def process_output(output_tensor): - """ Transforms the 4-dimensional output tensor into a suitable image format. """ - - sr_img = output_tensor.clip(0, 1) * 255 - sr_img = np.uint8(sr_img) - return sr_img - - -def pad_patch(image_patch, padding_size, channel_last=True): - """ Pads image_patch with with padding_size edge values. """ - - if channel_last: - return np.pad( - image_patch, - ((padding_size, padding_size), (padding_size, padding_size), (0, 0)), - 'edge', - ) - else: - return np.pad( - image_patch, - ((0, 0), (padding_size, padding_size), (padding_size, padding_size)), - 'edge', - ) - - -def unpad_patches(image_patches, padding_size): - return image_patches[:, padding_size:-padding_size, padding_size:-padding_size, :] - - -def split_image_into_overlapping_patches(image_array, patch_size, padding_size=2): - """ Splits the image into partially overlapping patches. - The patches overlap by padding_size pixels. - Pads the image twice: - - first to have a size multiple of the patch size, - - then to have equal padding at the borders. - Args: - image_array: numpy array of the input image. - patch_size: size of the patches from the original image (without padding). - padding_size: size of the overlapping area. - """ - - xmax, ymax, _ = image_array.shape - x_remainder = xmax % patch_size - y_remainder = ymax % patch_size - - # modulo here is to avoid extending of patch_size instead of 0 - x_extend = (patch_size - x_remainder) % patch_size - y_extend = (patch_size - y_remainder) % patch_size - - # make sure the image is divisible into regular patches - extended_image = np.pad(image_array, ((0, x_extend), (0, y_extend), (0, 0)), 'edge') - - # add padding around the image to simplify computations - padded_image = pad_patch(extended_image, padding_size, channel_last=True) - - xmax, ymax, _ = padded_image.shape - patches = [] - - x_lefts = range(padding_size, xmax - padding_size, patch_size) - y_tops = range(padding_size, ymax - padding_size, patch_size) - - for x in x_lefts: - for y in y_tops: - x_left = x - padding_size - y_top = y - padding_size - x_right = x + patch_size + padding_size - y_bottom = y + patch_size + padding_size - patch = padded_image[x_left:x_right, y_top:y_bottom, :] - patches.append(patch) - - return np.array(patches), padded_image.shape - - -def stich_together(patches, padded_image_shape, target_shape, padding_size=4): - """ Reconstruct the image from overlapping patches. - After scaling, shapes and padding should be scaled too. - Args: - patches: patches obtained with split_image_into_overlapping_patches - padded_image_shape: shape of the padded image contructed in split_image_into_overlapping_patches - target_shape: shape of the final image - padding_size: size of the overlapping area. 
- """ - - xmax, ymax, _ = padded_image_shape - patches = unpad_patches(patches, padding_size) - patch_size = patches.shape[1] - n_patches_per_row = ymax // patch_size - - complete_image = np.zeros((xmax, ymax, 3)) - - row = -1 - col = 0 - for i in range(len(patches)): - if i % n_patches_per_row == 0: - row += 1 - col = 0 - complete_image[ - row * patch_size: (row + 1) * patch_size, col * patch_size: (col + 1) * patch_size,: - ] = patches[i] - col += 1 - return complete_image[0: target_shape[0], 0: target_shape[1], :] \ No newline at end of file diff --git a/spaces/Superlang/ImageProcessor/app.py b/spaces/Superlang/ImageProcessor/app.py deleted file mode 100644 index 1a3ca063bb0660fd8baf26c9a36ade79b0c1b891..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/app.py +++ /dev/null @@ -1,60 +0,0 @@ -import importlib -import os - -import gradio as gr -from omegaconf import OmegaConf - -from annotator.util import resize_image, HWC3 - -config = OmegaConf.load("config/annotator.yaml") - -package_annotator = "annotator" - - -def process_image(cls: str, img, res, *kwargs): - img = resize_image(HWC3(img), res) - # load_model() - module_imp = importlib.import_module(package_annotator) - model = getattr(module_imp, cls) - image_processor = model() - result = image_processor(img, *kwargs) - if type(result) == tuple: - return result - return [result] - - -def process(cls): - def process_fc(img, res, *args): - return process_image(cls, img, res, *args) - - return process_fc - - -block = gr.Blocks().queue() -examples = [os.path.join(os.path.dirname(__file__), "examples/demo.jpeg")] -with block: - for key in config.keys(): - cls, input_element = config[key]["process"], config[key].get("input") - input_append = [] - with gr.Tab(key): - with gr.Row(): - gr.Markdown("## " + key) - with gr.Row(): - with gr.Column(): - input_image = gr.Image(source='upload', type="numpy") - resolution = gr.Slider(label="resolution", minimum=256, maximum=1024, value=512, step=64) - - if input_element is not None: - for item in input_element: - input_append.append(getattr(gr, item["attr"])(**item["args"])) - - run_button = gr.Button(label="Run") - gr.Examples(examples, input_image) - with gr.Column(): - gallery = gr.Gallery(label="Generated images", show_label=False).style(height="auto") - - run_button.click(fn=process(cls), - inputs=[input_image, resolution] + input_append, - outputs=[gallery]) - -block.launch() diff --git a/spaces/Swying/text_generator/README.md b/spaces/Swying/text_generator/README.md deleted file mode 100644 index a571d291205e1509e2b5fd4f848e970ba8b13549..0000000000000000000000000000000000000000 --- a/spaces/Swying/text_generator/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Text Generator -emoji: 🐨 -colorFrom: yellow -colorTo: purple -sdk: gradio -sdk_version: 3.11.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Sygil/INE-dataset-explorer/README.md b/spaces/Sygil/INE-dataset-explorer/README.md deleted file mode 100644 index 901d834ec82e4b7f4ee080b05ebc76790fc6c3d1..0000000000000000000000000000000000000000 --- a/spaces/Sygil/INE-dataset-explorer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -license: openrail -title: INE-dataset-explorer -pinned: true -sdk: docker ---- - -Imaginary Network Expanded Dataset explorer. 
- -Data source: https://github.com/Sygil-Dev/INE-dataset - - -Made by [Kevin Turner (keturn)](https://huggingface.co/keturn) and powered by [Datasette](https://datasette.io/). \ No newline at end of file diff --git a/spaces/TEnngal/bingo/src/components/external-link.tsx b/spaces/TEnngal/bingo/src/components/external-link.tsx deleted file mode 100644 index 011265f364d5a64a770f4c7e9c65c5ade21d623a..0000000000000000000000000000000000000000 --- a/spaces/TEnngal/bingo/src/components/external-link.tsx +++ /dev/null @@ -1,30 +0,0 @@ -export function ExternalLink({ - href, - children -}: { - href: string - children: React.ReactNode -}) { - return ( - <a - href={href} - target="_blank" - rel="noreferrer" - className="inline-flex flex-1 justify-center gap-1 underline" - > - <span>{children}</span> - <svg - aria-hidden="true" - height="7" - viewBox="0 0 6 6" - width="7" - className="opacity-70" - > - <path - d="M1.25215 5.54731L0.622742 4.9179L3.78169 1.75597H1.3834L1.38936 0.890915H5.27615V4.78069H4.40513L4.41109 2.38538L1.25215 5.54731Z" - fill="currentColor" - ></path> - </svg> - </a> - ) -} diff --git a/spaces/TEnngal/bingo/src/pages/api/kblob.ts b/spaces/TEnngal/bingo/src/pages/api/kblob.ts deleted file mode 100644 index 0ce7e6063cdc06838e76f1cff1d5982d34ef52de..0000000000000000000000000000000000000000 --- a/spaces/TEnngal/bingo/src/pages/api/kblob.ts +++ /dev/null @@ -1,56 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import FormData from 'form-data' -import { fetch } from '@/lib/isomorphic' -import { KBlobRequest } from '@/lib/bots/bing/types' - -const API_DOMAIN = 'https://bing.vcanbb.top' - -export const config = { - api: { - bodyParser: { - sizeLimit: '10mb' // Set desired value here - } - } -} - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - try { - const { knowledgeRequest, imageBase64 } = req.body as KBlobRequest - - const formData = new FormData() - formData.append('knowledgeRequest', JSON.stringify(knowledgeRequest)) - if (imageBase64) { - formData.append('imageBase64', imageBase64) - } - - const response = await fetch(`${API_DOMAIN}/images/kblob`, - { - method: 'POST', - body: formData.getBuffer(), - headers: { - "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"", - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-platform": "\"Windows\"", - "Referer": `${API_DOMAIN}/web/index.html`, - "Referrer-Policy": "origin-when-cross-origin", - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - ...formData.getHeaders() - } - } - ).then(res => res.text()) - - res.writeHead(200, { - 'Content-Type': 'application/json', - }) - res.end(response || JSON.stringify({ result: { value: 'UploadFailed', message: '请更换 IP 或代理后重试' } })) - } catch (e) { - return res.json({ - result: { - value: 'UploadFailed', - message: `${e}` - } - }) - } -} diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/backbone/bifpn_fcos.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/backbone/bifpn_fcos.py deleted file mode 100644 index 17f2904ccad484f380b64efc668b9090d047d15e..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/backbone/bifpn_fcos.py +++ /dev/null @@ -1,469 +0,0 @@ -# This file is modified from 
https://github.com/aim-uofa/AdelaiDet/blob/master/adet/modeling/backbone/bifpn.py -# The original file is under 2-clause BSD License for academic use, and *non-commercial use*. -import torch -import torch.nn.functional as F -from torch import nn - -from detectron2.layers import Conv2d, ShapeSpec, get_norm - -from detectron2.modeling.backbone import Backbone, build_resnet_backbone -from detectron2.modeling import BACKBONE_REGISTRY -from .dlafpn import dla34 - -__all__ = [] - - -def swish(x): - return x * x.sigmoid() - - -def split_name(name): - for i, c in enumerate(name): - if not c.isalpha(): - return name[:i], int(name[i:]) - raise ValueError() - - -class FeatureMapResampler(nn.Module): - def __init__(self, in_channels, out_channels, stride, norm=""): - super(FeatureMapResampler, self).__init__() - if in_channels != out_channels: - self.reduction = Conv2d( - in_channels, out_channels, kernel_size=1, - bias=(norm == ""), - norm=get_norm(norm, out_channels), - activation=None - ) - else: - self.reduction = None - - assert stride <= 2 - self.stride = stride - - def forward(self, x): - if self.reduction is not None: - x = self.reduction(x) - - if self.stride == 2: - x = F.max_pool2d( - x, kernel_size=self.stride + 1, - stride=self.stride, padding=1 - ) - elif self.stride == 1: - pass - else: - raise NotImplementedError() - return x - - -class BackboneWithTopLevels(Backbone): - def __init__(self, backbone, out_channels, num_top_levels, norm=""): - super(BackboneWithTopLevels, self).__init__() - self.backbone = backbone - backbone_output_shape = backbone.output_shape() - - self._out_feature_channels = {name: shape.channels for name, shape in backbone_output_shape.items()} - self._out_feature_strides = {name: shape.stride for name, shape in backbone_output_shape.items()} - self._out_features = list(self._out_feature_strides.keys()) - - last_feature_name = max(self._out_feature_strides.keys(), key=lambda x: split_name(x)[1]) - self.last_feature_name = last_feature_name - self.num_top_levels = num_top_levels - - last_channels = self._out_feature_channels[last_feature_name] - last_stride = self._out_feature_strides[last_feature_name] - - prefix, suffix = split_name(last_feature_name) - prev_channels = last_channels - for i in range(num_top_levels): - name = prefix + str(suffix + i + 1) - self.add_module(name, FeatureMapResampler( - prev_channels, out_channels, 2, norm - )) - prev_channels = out_channels - - self._out_feature_channels[name] = out_channels - self._out_feature_strides[name] = last_stride * 2 ** (i + 1) - self._out_features.append(name) - - def forward(self, x): - outputs = self.backbone(x) - last_features = outputs[self.last_feature_name] - prefix, suffix = split_name(self.last_feature_name) - - x = last_features - for i in range(self.num_top_levels): - name = prefix + str(suffix + i + 1) - x = self.__getattr__(name)(x) - outputs[name] = x - - return outputs - - -class SingleBiFPN(Backbone): - """ - This module implements Feature Pyramid Network. - It creates pyramid features built on top of some input feature maps. - """ - - def __init__( - self, in_channels_list, out_channels, norm="" - ): - """ - Args: - bottom_up (Backbone): module representing the bottom up subnetwork. - Must be a subclass of :class:`Backbone`. The multi-scale feature - maps generated by the bottom up network, and listed in `in_features`, - are used to generate FPN levels. - in_features (list[str]): names of the input feature maps coming - from the backbone to which FPN is attached. 
For example, if the - backbone produces ["res2", "res3", "res4"], any *contiguous* sublist - of these may be used; order must be from high to low resolution. - out_channels (int): number of channels in the output feature maps. - norm (str): the normalization to use. - """ - super(SingleBiFPN, self).__init__() - - self.out_channels = out_channels - # build 5-levels bifpn - if len(in_channels_list) == 5: - self.nodes = [ - {'feat_level': 3, 'inputs_offsets': [3, 4]}, - {'feat_level': 2, 'inputs_offsets': [2, 5]}, - {'feat_level': 1, 'inputs_offsets': [1, 6]}, - {'feat_level': 0, 'inputs_offsets': [0, 7]}, - {'feat_level': 1, 'inputs_offsets': [1, 7, 8]}, - {'feat_level': 2, 'inputs_offsets': [2, 6, 9]}, - {'feat_level': 3, 'inputs_offsets': [3, 5, 10]}, - {'feat_level': 4, 'inputs_offsets': [4, 11]}, - ] - elif len(in_channels_list) == 3: - self.nodes = [ - {'feat_level': 1, 'inputs_offsets': [1, 2]}, - {'feat_level': 0, 'inputs_offsets': [0, 3]}, - {'feat_level': 1, 'inputs_offsets': [1, 3, 4]}, - {'feat_level': 2, 'inputs_offsets': [2, 5]}, - ] - else: - raise NotImplementedError - - node_info = [_ for _ in in_channels_list] - - num_output_connections = [0 for _ in in_channels_list] - for fnode in self.nodes: - feat_level = fnode["feat_level"] - inputs_offsets = fnode["inputs_offsets"] - inputs_offsets_str = "_".join(map(str, inputs_offsets)) - for input_offset in inputs_offsets: - num_output_connections[input_offset] += 1 - - in_channels = node_info[input_offset] - if in_channels != out_channels: - lateral_conv = Conv2d( - in_channels, - out_channels, - kernel_size=1, - norm=get_norm(norm, out_channels) - ) - self.add_module( - "lateral_{}_f{}".format(input_offset, feat_level), lateral_conv - ) - node_info.append(out_channels) - num_output_connections.append(0) - - # generate attention weights - name = "weights_f{}_{}".format(feat_level, inputs_offsets_str) - self.__setattr__(name, nn.Parameter( - torch.ones(len(inputs_offsets), dtype=torch.float32), - requires_grad=True - )) - - # generate convolutions after combination - name = "outputs_f{}_{}".format(feat_level, inputs_offsets_str) - self.add_module(name, Conv2d( - out_channels, - out_channels, - kernel_size=3, - padding=1, - norm=get_norm(norm, out_channels), - bias=(norm == "") - )) - - def forward(self, feats): - """ - Args: - input (dict[str->Tensor]): mapping feature map name (e.g., "p5") to - feature map tensor for each feature level in high to low resolution order. - Returns: - dict[str->Tensor]: - mapping from feature map name to FPN feature map tensor - in high to low resolution order. Returned feature names follow the FPN - paper convention: "p<stage>", where stage has stride = 2 ** stage e.g., - ["n2", "n3", ..., "n6"]. 
- """ - feats = [_ for _ in feats] - num_levels = len(feats) - num_output_connections = [0 for _ in feats] - for fnode in self.nodes: - feat_level = fnode["feat_level"] - inputs_offsets = fnode["inputs_offsets"] - inputs_offsets_str = "_".join(map(str, inputs_offsets)) - input_nodes = [] - _, _, target_h, target_w = feats[feat_level].size() - for input_offset in inputs_offsets: - num_output_connections[input_offset] += 1 - input_node = feats[input_offset] - - # reduction - if input_node.size(1) != self.out_channels: - name = "lateral_{}_f{}".format(input_offset, feat_level) - input_node = self.__getattr__(name)(input_node) - - # maybe downsample - _, _, h, w = input_node.size() - if h > target_h and w > target_w: - height_stride_size = int((h - 1) // target_h + 1) - width_stride_size = int((w - 1) // target_w + 1) - assert height_stride_size == width_stride_size == 2 - input_node = F.max_pool2d( - input_node, kernel_size=(height_stride_size + 1, width_stride_size + 1), - stride=(height_stride_size, width_stride_size), padding=1 - ) - elif h <= target_h and w <= target_w: - if h < target_h or w < target_w: - input_node = F.interpolate( - input_node, - size=(target_h, target_w), - mode="nearest" - ) - else: - raise NotImplementedError() - input_nodes.append(input_node) - - # attention - name = "weights_f{}_{}".format(feat_level, inputs_offsets_str) - weights = F.relu(self.__getattr__(name)) - norm_weights = weights / (weights.sum() + 0.0001) - - new_node = torch.stack(input_nodes, dim=-1) - new_node = (norm_weights * new_node).sum(dim=-1) - new_node = swish(new_node) - - name = "outputs_f{}_{}".format(feat_level, inputs_offsets_str) - feats.append(self.__getattr__(name)(new_node)) - - num_output_connections.append(0) - - output_feats = [] - for idx in range(num_levels): - for i, fnode in enumerate(reversed(self.nodes)): - if fnode['feat_level'] == idx: - output_feats.append(feats[-1 - i]) - break - else: - raise ValueError() - return output_feats - - -class BiFPN(Backbone): - """ - This module implements Feature Pyramid Network. - It creates pyramid features built on top of some input feature maps. - """ - - def __init__( - self, bottom_up, in_features, out_channels, num_top_levels, num_repeats, norm="" - ): - """ - Args: - bottom_up (Backbone): module representing the bottom up subnetwork. - Must be a subclass of :class:`Backbone`. The multi-scale feature - maps generated by the bottom up network, and listed in `in_features`, - are used to generate FPN levels. - in_features (list[str]): names of the input feature maps coming - from the backbone to which FPN is attached. For example, if the - backbone produces ["res2", "res3", "res4"], any *contiguous* sublist - of these may be used; order must be from high to low resolution. - out_channels (int): number of channels in the output feature maps. - num_top_levels (int): the number of the top levels (p6 or p7). - num_repeats (int): the number of repeats of BiFPN. - norm (str): the normalization to use. 
- """ - super(BiFPN, self).__init__() - assert isinstance(bottom_up, Backbone) - - # add extra feature levels (i.e., 6 and 7) - self.bottom_up = BackboneWithTopLevels( - bottom_up, out_channels, - num_top_levels, norm - ) - bottom_up_output_shapes = self.bottom_up.output_shape() - - in_features = sorted(in_features, key=lambda x: split_name(x)[1]) - self._size_divisibility = 128 #bottom_up_output_shapes[in_features[-1]].stride - self.out_channels = out_channels - self.min_level = split_name(in_features[0])[1] - - # add the names for top blocks - prefix, last_suffix = split_name(in_features[-1]) - for i in range(num_top_levels): - in_features.append(prefix + str(last_suffix + i + 1)) - self.in_features = in_features - - # generate output features - self._out_features = ["p{}".format(split_name(name)[1]) for name in in_features] - self._out_feature_strides = { - out_name: bottom_up_output_shapes[in_name].stride - for out_name, in_name in zip(self._out_features, in_features) - } - self._out_feature_channels = {k: out_channels for k in self._out_features} - - # build bifpn - self.repeated_bifpn = nn.ModuleList() - for i in range(num_repeats): - if i == 0: - in_channels_list = [ - bottom_up_output_shapes[name].channels for name in in_features - ] - else: - in_channels_list = [ - self._out_feature_channels[name] for name in self._out_features - ] - self.repeated_bifpn.append(SingleBiFPN( - in_channels_list, out_channels, norm - )) - - @property - def size_divisibility(self): - return self._size_divisibility - - def forward(self, x): - """ - Args: - input (dict[str->Tensor]): mapping feature map name (e.g., "p5") to - feature map tensor for each feature level in high to low resolution order. - Returns: - dict[str->Tensor]: - mapping from feature map name to FPN feature map tensor - in high to low resolution order. Returned feature names follow the FPN - paper convention: "p<stage>", where stage has stride = 2 ** stage e.g., - ["n2", "n3", ..., "n6"]. - """ - bottom_up_features = self.bottom_up(x) - feats = [bottom_up_features[f] for f in self.in_features] - - for bifpn in self.repeated_bifpn: - feats = bifpn(feats) - - return dict(zip(self._out_features, feats)) - - -def _assert_strides_are_log2_contiguous(strides): - """ - Assert that each stride is 2x times its preceding stride, i.e. "contiguous in log2". - """ - for i, stride in enumerate(strides[1:], 1): - assert stride == 2 * strides[i - 1], "Strides {} {} are not log2 contiguous".format( - stride, strides[i - 1] - ) - - -@BACKBONE_REGISTRY.register() -def build_fcos_resnet_bifpn_backbone(cfg, input_shape: ShapeSpec): - """ - Args: - cfg: a detectron2 CfgNode - Returns: - backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`. - """ - bottom_up = build_resnet_backbone(cfg, input_shape) - in_features = cfg.MODEL.FPN.IN_FEATURES - out_channels = cfg.MODEL.BIFPN.OUT_CHANNELS - num_repeats = cfg.MODEL.BIFPN.NUM_BIFPN - top_levels = 2 - - backbone = BiFPN( - bottom_up=bottom_up, - in_features=in_features, - out_channels=out_channels, - num_top_levels=top_levels, - num_repeats=num_repeats, - norm=cfg.MODEL.BIFPN.NORM - ) - return backbone - - - -@BACKBONE_REGISTRY.register() -def build_p35_fcos_resnet_bifpn_backbone(cfg, input_shape: ShapeSpec): - """ - Args: - cfg: a detectron2 CfgNode - Returns: - backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`. 
- """ - bottom_up = build_resnet_backbone(cfg, input_shape) - in_features = cfg.MODEL.FPN.IN_FEATURES - out_channels = cfg.MODEL.BIFPN.OUT_CHANNELS - num_repeats = cfg.MODEL.BIFPN.NUM_BIFPN - top_levels = 0 - - backbone = BiFPN( - bottom_up=bottom_up, - in_features=in_features, - out_channels=out_channels, - num_top_levels=top_levels, - num_repeats=num_repeats, - norm=cfg.MODEL.BIFPN.NORM - ) - return backbone - - -@BACKBONE_REGISTRY.register() -def build_p35_fcos_dla_bifpn_backbone(cfg, input_shape: ShapeSpec): - """ - Args: - cfg: a detectron2 CfgNode - Returns: - backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`. - """ - bottom_up = dla34(cfg) - in_features = cfg.MODEL.FPN.IN_FEATURES - out_channels = cfg.MODEL.BIFPN.OUT_CHANNELS - num_repeats = cfg.MODEL.BIFPN.NUM_BIFPN - top_levels = 0 - - backbone = BiFPN( - bottom_up=bottom_up, - in_features=in_features, - out_channels=out_channels, - num_top_levels=top_levels, - num_repeats=num_repeats, - norm=cfg.MODEL.BIFPN.NORM - ) - return backbone - -@BACKBONE_REGISTRY.register() -def build_p37_fcos_dla_bifpn_backbone(cfg, input_shape: ShapeSpec): - """ - Args: - cfg: a detectron2 CfgNode - Returns: - backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`. - """ - bottom_up = dla34(cfg) - in_features = cfg.MODEL.FPN.IN_FEATURES - out_channels = cfg.MODEL.BIFPN.OUT_CHANNELS - num_repeats = cfg.MODEL.BIFPN.NUM_BIFPN - assert cfg.MODEL.BIFPN.NUM_LEVELS == 5 - top_levels = 2 - - backbone = BiFPN( - bottom_up=bottom_up, - in_features=in_features, - out_channels=out_channels, - num_top_levels=top_levels, - num_repeats=num_repeats, - norm=cfg.MODEL.BIFPN.NORM - ) - return backbone \ No newline at end of file diff --git a/spaces/TheStinger/Ilaria_RVC/lib/infer_pack/attentions.py b/spaces/TheStinger/Ilaria_RVC/lib/infer_pack/attentions.py deleted file mode 100644 index 05501be1871643f78dddbeaa529c96667031a8db..0000000000000000000000000000000000000000 --- a/spaces/TheStinger/Ilaria_RVC/lib/infer_pack/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from lib.infer_pack import commons -from lib.infer_pack import modules -from lib.infer_pack.modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=10, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - 
y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - 
nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." - block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. 
- pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/ThirdEyeData/Next_Failure_Prediction/app.py b/spaces/ThirdEyeData/Next_Failure_Prediction/app.py deleted file mode 100644 index 7859340b890f3595c72a494ba9811837711e58d4..0000000000000000000000000000000000000000 --- a/spaces/ThirdEyeData/Next_Failure_Prediction/app.py +++ /dev/null @@ -1,160 +0,0 @@ -import pandas as pd -import numpy as np -import matplotlib.pyplot as plt -import seaborn as sns - -from datetime import datetime -from datetime import timedelta -from sklearn.model_selection import RandomizedSearchCV, GridSearchCV, train_test_split -from sklearn.ensemble import RandomForestRegressor -from sklearn.metrics import r2_score -from sklearn.preprocessing import LabelEncoder -from sklearn.preprocessing import StandardScaler -import streamlit as st - - -st.title("Next Failure Prediction") -# Loading Dataset -df1 = pd.read_csv(r'Final_Next_failure_Dataset.csv') - - -# replace values in the Manufacturer column with company names - -replace_dict1 = {1: 'ABC Company', 2: 'DEF Company', 3: 'GHI Company', 4: 'JKL Company', 5: 'XYZ Company'} -df1['Manufacturer'] = df1['Manufacturer'].replace(replace_dict1) - - -# replace values in the Last_Maintenance_Type column again - -replace_dict2 = {1: 'Corrective', 2: 'Preventive'} -df1['Last_Maintenance_Type'] = df1['Last_Maintenance_Type'].replace(replace_dict2) - -# replace values in the Prior_Maintenance column again - -replace_dict3 = {1: 'Irregular', 2: 'Regular'} -df1['Prior_Maintenance'] = df1['Prior_Maintenance'].replace(replace_dict3) - -# replace values in the Repair_Type column again - -replace_dict4 = {1: 'Hardware', 2: 'Software'} -df1['Repair_Type'] = df1['Repair_Type'].replace(replace_dict4) - -df = df1.copy() - -# For Manufacturer - -le_manu = LabelEncoder() -df['Manufacturer'] = le_manu.fit_transform(df['Manufacturer']) - - -# For Last_Maintenance_Type - -le_last = LabelEncoder() 
-df['Last_Maintenance_Type'] = le_last.fit_transform(df['Last_Maintenance_Type']) - -# For Prior_Maintenance - -le_prior = LabelEncoder() -df['Prior_Maintenance'] = le_prior.fit_transform(df['Prior_Maintenance']) - -# For Repair_Type - -le_repair = LabelEncoder() -df['Repair_Type'] = le_repair.fit_transform(df['Repair_Type']) - -# Splitting the data into train and test sets -X = df.drop('Time_to_Failure_(hours)', axis = 1) -y = df['Time_to_Failure_(hours)'] - -X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state = 0) - -# Train Random Forest Regression model - -model = RandomForestRegressor(random_state = 0) -model.fit(X_train, y_train) - - -# Make predictions on train data - -y_pred_train = model.predict(X_train) - -# Data from the user -def user_report(): - manufacturer = st.sidebar.selectbox("Manufacturer", - ("JKL Company", "GHI Company","DEF Company","ABC Company","XYZ Company" )) - if manufacturer=='JKL Company': - manufacturer=3 - elif manufacturer=="GHI Company": - manufacturer=2 - elif manufacturer=="DEF Company": - manufacturer=1 - elif manufacturer=="ABC Company": - manufacturer =0 - else: - manufacturer=4 - total_operating_hours = st.sidebar.slider('Total Operating Hours', 1000,2500, 1500 ) - Usage_Intensity = st.sidebar.slider("Usage Intensity (hours/day)",1,10,4) - Last_Maintenance_Type = st.sidebar.selectbox("Last Maintenance Type",("Corrective","Preventive")) - if Last_Maintenance_Type =='Corrective': - Last_Maintenance_Type=0 - else: - Last_Maintenance_Type=1 - Prior_Maintenance = st.sidebar.selectbox("Prior Maintenance",("Regular","Irregular")) - if Prior_Maintenance =='Regular': - Prior_Maintenance=1 - else: - Prior_Maintenance=0 - - Average_Temperature= st.sidebar.slider('Average Temperature', 20,40, 35 ) - humidity = st.sidebar.slider('Humidity', 52,70, 55 ) - Vibration_Level = st.sidebar.slider('Vibration Level', 2,4, 2 ) - Pressure = st.sidebar.slider('Pressure', 28,32, 30 ) - Power_Input_Voltage= st.sidebar.slider('Power Input Voltage (V)',105,120,115) - Repair_Type = st.sidebar.selectbox("Repair Type",("Hardware","Software")) - if Repair_Type =='Software': - Repair_Type=1 - else: - Repair_Type=0 - load_factor = st.sidebar.number_input('Enter the Load Factor (any number between 0 and 1)',min_value=0.0,max_value=1.0,step=0.1) - engine_speed=st.sidebar.slider('Engine Speed',7000,8000,7800) - Oil_Temperature=st.sidebar.slider('Oil Temperature',170,185,172) - - - user_report_data = { - 'Manufacturer': manufacturer, - 'Total_Operating_Hours': total_operating_hours, - 'Usage_Intensity_(hours/day)': Usage_Intensity , - 'Last_Maintenance_Type': Last_Maintenance_Type, - "Prior_Maintenance":Prior_Maintenance, - 'Average_Temperature':Average_Temperature, - 'Humidity': humidity, - 'Vibration_Level': Vibration_Level, - 'Pressure': Pressure, - 'Power_Input_Voltage': Power_Input_Voltage, - 'Repair_Type': Repair_Type , - 'Load_Factor': load_factor, - 'Engine_Speed': engine_speed, - 'Oil_Temperature':Oil_Temperature - } - report_data = pd.DataFrame(user_report_data, index=[0]) - - return report_data - -# Component data entered by the user -user_data = user_report() -st.subheader("Component Details") -st.write(user_data) - - -# define the prediction function -def prediction(user_data): - - predicted_time_to_failure = model.predict(user_data) - - # return the predicted time to failure (in hours) as output - return np.round(predicted_time_to_failure[0]) -# Function calling -y_pred = prediction(user_data) -st.write("Click the button below to see the prediction") -if st.button("Predict"): -
st.subheader(f"Next Failure is {y_pred} hours ") \ No newline at end of file diff --git a/spaces/TinkerFrank/AppleClassifier/README.md b/spaces/TinkerFrank/AppleClassifier/README.md deleted file mode 100644 index 382f803e1c5ca640da528b695f8ae952e02a2fce..0000000000000000000000000000000000000000 --- a/spaces/TinkerFrank/AppleClassifier/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: AppleClassifier -emoji: 🚀 -colorFrom: pink -colorTo: gray -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Tinsae/CoWork/app.py b/spaces/Tinsae/CoWork/app.py deleted file mode 100644 index fc299085e5b51999c348cc1274e6a4e06022a082..0000000000000000000000000000000000000000 --- a/spaces/Tinsae/CoWork/app.py +++ /dev/null @@ -1,89 +0,0 @@ -from PIL import Image -import numpy as np -from rembg import remove -import cv2 -import os -from torchvision.transforms import GaussianBlur -import gradio as gr -import replicate -import requests -from io import BytesIO - -def create_mask(input): - input_path = 'input.png' - bg_removed_path = 'bg_removed.png' - mask_name = 'blured_mask.png' - - input.save(input_path) - bg_removed = remove(input) - - width, height = bg_removed.size - max_dim = max(width, height) - square_img = Image.new('RGB', (max_dim, max_dim), (255, 255, 255)) - paste_pos = ((max_dim - width) // 2, (max_dim - height) // 2) - square_img.paste(bg_removed, paste_pos) - - square_img = square_img.resize((512, 512)) - square_img.save(bg_removed_path) - - img2_grayscale = square_img.convert('L') - img2_a = np.array(img2_grayscale) - - mask = np.array(img2_grayscale) - threshhold = 0 - mask[img2_a==threshhold] = 1 - mask[img2_a>threshhold] = 0 - - strength = 1 - d = int(255 * (1-strength)) - mask *= 255-d - mask += d - - mask = Image.fromarray(mask) - - blur = GaussianBlur(11,20) - mask = blur(mask) - mask = mask.resize((512, 512)) - - mask.save(mask_name) - - return Image.open(mask_name) - - -def generate_image(image, product_name, target_name): - mask = create_mask(image) - image = image.resize((512, 512)) - mask = mask.resize((512,512)) - guidance_scale=16 - num_samples = 1 - - prompt = 'a product photography photo of' + product_name + ' on ' + target_name - - model = replicate.models.get("cjwbw/stable-diffusion-v2-inpainting") - version = model.versions.get("f9bb0632bfdceb83196e85521b9b55895f8ff3d1d3b487fd1973210c0eb30bec") - output = version.predict(prompt=prompt, image=open("bg_removed.png", "rb"), mask=open("blured_mask.png", "rb")) - response = requests.get(output[0]) - - return Image.open(BytesIO(response.content)) - -with gr.Blocks() as demo: - gr.Markdown("# Advertise better with AI") - # with gr.Tab("Prompt Paint - Basic"): - with gr.Row(): - - with gr.Column(): - input_image = gr.Image(label = "Upload your product's photo", type = 'pil') - - product_name = gr.Textbox(label="Describe your product") - target_name = gr.Textbox(label="Where do you want to put your product?") - # result_prompt = product_name + ' in ' + target_name + 'product photograpy ultrarealist' - - image_button = gr.Button("Generate") - - with gr.Column(): - image_output = gr.Image() - - image_button.click(generate_image, inputs=[input_image, product_name, target_name ], outputs=image_output, api_name='test') - - -demo.launch() \ No newline at end of file diff --git a/spaces/TusharNautiyal/BTC-Prediction/app.py b/spaces/TusharNautiyal/BTC-Prediction/app.py deleted file mode 100644 index 
0d860dd4fd53798ca3c0cadf08cee5c7a67e802a..0000000000000000000000000000000000000000 --- a/spaces/TusharNautiyal/BTC-Prediction/app.py +++ /dev/null @@ -1,201 +0,0 @@ -import numpy as np -import pandas as pd -from Scheduler import isscheduled -from build_model import model_builder -from sklearn.preprocessing import MinMaxScaler -import matplotlib.pyplot as plt -from tensorflow import keras -import yfinance -import requests -import streamlit as st -from datetime import datetime -import os -# Getting Current Date - -def main(): - - if 'Season' not in st.session_state: - st.session_state['Season'] = True - else: - st.session_state['Season'] = False - with open("prevdays.txt",'r') as file: - previous_date = file.readline() - file.close() - with open("isdone.txt",'r') as file: - isdone = file.readline() - file.close() - isExist = os.path.exists('LSTM_build.h5') - data = requests.get("http://worldtimeapi.org/api/timezone/Asia/Kolkata") - current_date = data.json()['datetime'].split('T')[0] - if isExist == False and isdone == 'Done': - with open("isdone.txt",'w') as file: - file.write("Not Done") - file.close() - - with st.spinner(f'Wait for Model Updation It will Retrain Every 10th day from the retraining day which was on {previous_date} It Will take only 15-20 mins Please Wait **Do not close The window**...'): - builder = model_builder() - # We will Schedule Our APP to re build model after 10 days - with open('isdone.txt') as file: - isDone = file.readline() - file.close() - - - isDone = isDone.strip() - if isscheduled()==True or isExist==False and isDone != 'Done': - # Getting The data - with open('started.txt','w') as file: - file.write("True") - file.close() - X = yfinance.download('BTC-USD',start = '2017-01-01') - X = X['Close'] - data = X - # Preprocessing data - - X = X[:len(X)-30] - X_test = data[len(X):len(data)] - # Scaling - scaled = MinMaxScaler() - values = scaled.fit_transform(np.array(X).reshape(-1,1)) - X_train = values - X_test = scaled.fit_transform(np.array(X_test).reshape(-1,1)) - - # Creating Time Series data to Feed To LSTM - X_train,y_train = builder.preprocess_data(X_train,10) - X_test,y_test = builder.preprocess_data(X_test,10) - X_train = X_train[...,np.newaxis] - X_test = X_test[...,np.newaxis] - - model = builder.fit(X_train,y_train) - model.save('LSTM_build.h5',save_format = 'h5',overwrite = True) - with open('isdone.txt','w') as file: - file.write('Done') - file.close() - with open('prevdays.txt','w') as file: - data = requests.get("http://worldtimeapi.org/api/timezone/Asia/Kolkata") - date = data.json()['datetime'].split('T')[0] - file.write(date) - file.close() - - if previous_date != current_date and isscheduled()==False and isExist==True: - with open('isdone.txt','w') as file: - file.write('NotDone') - file.close() - with open("started.txt",'w') as file: - file.write("False") - file.close() - - model = keras.models.load_model('LSTM_build.h5') - elif previous_date == current_date and isdone == 'Done' and isExist == True: - model = keras.models.load_model('LSTM_build.h5') - elif isExist == False and isdone =='Done': - st.write("There Might Be some issues or the model is rebuilding in background please try and reload application after 5mins.") - if st.session_state['Season'] == True: - if int(current_date.split("-")[1])>=10 or int(current_date.split("-")[1])<=2: - st.snow() - st.session_state['season'] = 'Winter' - else: - st.balloons() - st.session_state['season']= 'Summer' - - with open("prevdays.txt",'r') as file: - previous_date = file.readline() - file.close() - 
st.success(f"Model Is updated On {previous_date} and will next Update after 10 days ") - - # Creating Future Outcomes - def forecast_day(): - df = yfinance.download('BTC-USD',start = '2017-01-01') - df = df.reset_index() - X_future = df['Close'].shift(-2) - X_dates = df['Date'].shift(-2) - X_dates = X_dates[len(X_dates)-12:len(X_dates)] - X_dates = X_dates[...,np.newaxis] - X_dates = builder.preprocess_data(X_dates,10) - - X_future = X_future[len(X_future)-12:len(X_future)] - scaled = MinMaxScaler() - X_future = scaled.fit_transform(np.array(X_future).reshape(-1,1)) - X_future,y_future = builder.preprocess_data(X_future,10) - - X_future = X_future[...,np.newaxis] - result = model.predict(X_future) - ans = scaled.inverse_transform(result) - if ans>df['Close'].iloc[-1]: - return 'Positive 🟢',ans - else: - return 'Negative 🔴 ',ans - - def create_sample_data(df,days): - store_index = [] - for day in range(days): - # Creating Temporary DataFrame - dt = df.index + pd.Timedelta(days = 1) - next_data = pd.DataFrame({'Close':[1]},index =[dt[-1]]) - df = pd.concat([df,next_data]) - store_index.append(dt[-1]) - return df,store_index - - # This function Forecast Prices For 10 Days or less. - def forecast_timeline(X,days): - if days>10: - return False - final_values = [] - temp_data = X.iloc[-1] - for day in range(days): - X = X.shift(-2) - X_future = X[len(X)-12:len(X)] - X = X.dropna() - scaled = MinMaxScaler() - X_future = scaled.fit_transform(np.array(X_future).reshape(-1,1)) - X_future,y_future = builder.preprocess_data(X_future,10) - result = model.predict(X_future) - X = X.to_list() - X.append(scaled.inverse_transform(result).reshape(1)[0]) - X = pd.Series(X) - final_values.append(scaled.inverse_transform(result).reshape(1)[0]) - final_values.insert(0,temp_data) - return final_values - - def predict_future(days): - df = yfinance.download('BTC-USD',start = '2017-01-01') - df.reset_index(inplace = True) - X = df['Close'] - future_Values = forecast_timeline(X,days) - df = df[['Close','Date']] - df.set_index('Date',inplace = True) - final_df,store_index = create_sample_data(df,days) - for i,index in enumerate(store_index): - final_df['Close'].loc[index] = future_Values[i] - - return final_df,future_Values - - - - st.title('Bitcoin Price Prediction ₿') - st.write(f"HOWDY! Wonderfull ***{st.session_state['season']}*** Season. Welcome to Bitcoin ( ₿ ) Price Prediction APP It will Predict Closing Price For Bitcoin. These Predictions are based on **LSTM** Model Trained Over Historical Bitcoin Data From **2017 till {previous_date}** . the Model retratins every 10th day, The Prediction are totally based on previous Closing Values so do not invest money based on Such Predictions. Its only for Educational Purposes and should not be used for finacial purpose.") - st.write("Why LSTM ? Because it Performed well on the data, I used LSTM,ArimaMax,SariMax,Temporal Fusion transformer,FbProphet, NeuralProphet many different time series model for predictions in which lstm performed the best so I Selected LSTM. If you want To check how we came to conclusion. 
Check out https://shorturl.at/cwHM4 For Code.") - one_day = False - days = int(st.number_input('Enter no of Days for Prediction')) - st.write("or you can Select one day prediction from below ***IMPORTANT*** After **Checkbox is Clicked** Do Not Press Submit It Will automatically Run") - if st.checkbox(label = 'One Day Prediction'): - one_day = True - if one_day == True: - days = 1 - if st.button('Submit') and days<=10 and days>0: - dataframe,values = predict_future(days) - data = requests.get("http://worldtimeapi.org/api/timezone/Asia/Kolkata") - date = data.json()['datetime'].split('T')[0] - st.line_chart(data=dataframe) - for i in range(len(values)-1): - if values[i+1]>values[i]: - st.write(f'Day{i+1}:Positive Growth 🟢') - else: - st.write(f'Day{i+1}:Negative Growth 🔴') - if days == 1: - result,ans = forecast_day() - st.markdown(f"Tommorow **Bitcoin** Can Show {result} Movement.") - st.markdown(f"And Price Can Be Around : $ {ans.reshape(1)[0]}.") - elif days>10 or days<0: - st.write("Please Renter Days As you have exceeded 10 days limit or the input is too small, If you think everything is correct still it's showing wront output please check if you are entering any spaces while input or send us feedback at info@tusharnautiyal.ml") -if __name__ == '__main__': - main() \ No newline at end of file diff --git a/spaces/Uday007/startup-profit-predictor/README.md b/spaces/Uday007/startup-profit-predictor/README.md deleted file mode 100644 index db02a1192dd78f01fde5a98c6325bdf86349562c..0000000000000000000000000000000000000000 --- a/spaces/Uday007/startup-profit-predictor/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Startup Profit Predictor -emoji: 🌍 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: cc-by-nc-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Vegecken/sovits4dzl/hubert/__init__.py b/spaces/Vegecken/sovits4dzl/hubert/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Vision-CAIR/minigpt4/minigpt4/models/modeling_llama.py b/spaces/Vision-CAIR/minigpt4/minigpt4/models/modeling_llama.py deleted file mode 100644 index 2cebacf454dd9cf82823fc372d7673142b2082c7..0000000000000000000000000000000000000000 --- a/spaces/Vision-CAIR/minigpt4/minigpt4/models/modeling_llama.py +++ /dev/null @@ -1,772 +0,0 @@ -# coding=utf-8 -# Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved. -# -# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX -# and OPT implementations in this library. It has been modified from its -# original forms to accommodate minor architectural differences compared -# to GPT-NeoX and OPT used by the Meta AI team that trained the model. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-""" PyTorch LLaMA model.""" -import math -from typing import List, Optional, Tuple, Union - -import torch -import torch.utils.checkpoint -from torch import nn -from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss - -from transformers.activations import ACT2FN -from transformers.modeling_outputs import BaseModelOutputWithPast, CausalLMOutputWithPast, SequenceClassifierOutputWithPast -from transformers.modeling_utils import PreTrainedModel -from transformers.utils import add_start_docstrings, add_start_docstrings_to_model_forward, logging, replace_return_docstrings -from transformers.models.llama.configuration_llama import LlamaConfig - - -logger = logging.get_logger(__name__) - -_CONFIG_FOR_DOC = "LlamaConfig" - - -# Copied from transformers.models.bart.modeling_bart._make_causal_mask -def _make_causal_mask( - input_ids_shape: torch.Size, dtype: torch.dtype, device: torch.device, past_key_values_length: int = 0 -): - """ - Make causal mask used for bi-directional self-attention. - """ - bsz, tgt_len = input_ids_shape - mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min, device=device), device=device) - mask_cond = torch.arange(mask.size(-1), device=device) - mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0) - mask = mask.to(dtype) - - if past_key_values_length > 0: - mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype, device=device), mask], dim=-1) - return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length) - - -# Copied from transformers.models.bart.modeling_bart._expand_mask -def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None): - """ - Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`. - """ - bsz, src_len = mask.size() - tgt_len = tgt_len if tgt_len is not None else src_len - - expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype) - - inverted_mask = 1.0 - expanded_mask - - return inverted_mask.masked_fill(inverted_mask.to(torch.bool), torch.finfo(dtype).min) - - -class LlamaRMSNorm(nn.Module): - def __init__(self, hidden_size, eps=1e-6): - """ - LlamaRMSNorm is equivalent to T5LayerNorm - """ - super().__init__() - self.weight = nn.Parameter(torch.ones(hidden_size)) - self.variance_epsilon = eps - - def forward(self, hidden_states): - variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True) - hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon) - - # convert into half-precision if necessary - if self.weight.dtype in [torch.float16, torch.bfloat16]: - hidden_states = hidden_states.to(self.weight.dtype) - - return self.weight * hidden_states - - -class LlamaRotaryEmbedding(torch.nn.Module): - def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None): - super().__init__() - inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float().to(device) / dim)) - self.register_buffer("inv_freq", inv_freq) - - # Build here to make `torch.jit.trace` work. 
- self.max_seq_len_cached = max_position_embeddings - t = torch.arange(self.max_seq_len_cached, device=self.inv_freq.device, dtype=self.inv_freq.dtype) - freqs = torch.einsum("i,j->ij", t, self.inv_freq) - # Different from paper, but it uses a different permutation in order to obtain the same calculation - emb = torch.cat((freqs, freqs), dim=-1) - self.register_buffer("cos_cached", emb.cos()[None, None, :, :], persistent=False) - self.register_buffer("sin_cached", emb.sin()[None, None, :, :], persistent=False) - - def forward(self, x, seq_len=None): - # x: [bs, num_attention_heads, seq_len, head_size] - # This `if` block is unlikely to be run after we build sin/cos in `__init__`. Keep the logic here just in case. - if seq_len > self.max_seq_len_cached: - self.max_seq_len_cached = seq_len - t = torch.arange(self.max_seq_len_cached, device=x.device, dtype=self.inv_freq.dtype) - freqs = torch.einsum("i,j->ij", t, self.inv_freq) - # Different from paper, but it uses a different permutation in order to obtain the same calculation - emb = torch.cat((freqs, freqs), dim=-1).to(x.device) - self.register_buffer("cos_cached", emb.cos()[None, None, :, :], persistent=False) - self.register_buffer("sin_cached", emb.sin()[None, None, :, :], persistent=False) - return ( - self.cos_cached[:, :, :seq_len, ...].to(dtype=x.dtype), - self.sin_cached[:, :, :seq_len, ...].to(dtype=x.dtype), - ) - - -def rotate_half(x): - """Rotates half the hidden dims of the input.""" - x1 = x[..., : x.shape[-1] // 2] - x2 = x[..., x.shape[-1] // 2 :] - return torch.cat((-x2, x1), dim=-1) - - -def apply_rotary_pos_emb(q, k, cos, sin, position_ids): - gather_indices = position_ids[:, None, :, None] # [bs, 1, seq_len, 1] - gather_indices = gather_indices.repeat(1, cos.shape[1], 1, cos.shape[3]) - cos = torch.gather(cos.repeat(gather_indices.shape[0], 1, 1, 1), 2, gather_indices) - sin = torch.gather(sin.repeat(gather_indices.shape[0], 1, 1, 1), 2, gather_indices) - q_embed = (q * cos) + (rotate_half(q) * sin) - k_embed = (k * cos) + (rotate_half(k) * sin) - return q_embed, k_embed - - -class LlamaMLP(nn.Module): - def __init__( - self, - hidden_size: int, - intermediate_size: int, - hidden_act: str, - ): - super().__init__() - self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False) - self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False) - self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False) - self.act_fn = ACT2FN[hidden_act] - - def forward(self, x): - return self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x)) - - -class LlamaAttention(nn.Module): - """Multi-headed attention from 'Attention Is All You Need' paper""" - - def __init__(self, config: LlamaConfig): - super().__init__() - self.config = config - self.hidden_size = config.hidden_size - self.num_heads = config.num_attention_heads - self.head_dim = self.hidden_size // self.num_heads - self.max_position_embeddings = config.max_position_embeddings - - if (self.head_dim * self.num_heads) != self.hidden_size: - raise ValueError( - f"hidden_size must be divisible by num_heads (got `hidden_size`: {self.hidden_size}" - f" and `num_heads`: {self.num_heads})." 
- ) - self.q_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False) - self.k_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False) - self.v_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False) - self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=False) - self.rotary_emb = LlamaRotaryEmbedding(self.head_dim, max_position_embeddings=self.max_position_embeddings) - - def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int): - return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous() - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.LongTensor] = None, - past_key_value: Optional[Tuple[torch.Tensor]] = None, - output_attentions: bool = False, - use_cache: bool = False, - ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]: - bsz, q_len, _ = hidden_states.size() - - query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2) - key_states = self.k_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2) - value_states = self.v_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2) - - kv_seq_len = key_states.shape[-2] - if past_key_value is not None: - kv_seq_len += past_key_value[0].shape[-2] - cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len) - query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids) - # [bsz, nh, t, hd] - - if past_key_value is not None: - # reuse k, v, self_attention - key_states = torch.cat([past_key_value[0], key_states], dim=2) - value_states = torch.cat([past_key_value[1], value_states], dim=2) - - past_key_value = (key_states, value_states) if use_cache else None - - attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim) - - if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len): - raise ValueError( - f"Attention weights should be of size {(bsz * self.num_heads, q_len, kv_seq_len)}, but is" - f" {attn_weights.size()}" - ) - - if attention_mask is not None: - if attention_mask.size() != (bsz, 1, q_len, kv_seq_len): - raise ValueError( - f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}" - ) - attn_weights = attn_weights + attention_mask - attn_weights = torch.max(attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min)) - - # upcast attention to fp32 - attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype) - attn_output = torch.matmul(attn_weights, value_states) - - if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim): - raise ValueError( - f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is" - f" {attn_output.size()}" - ) - - attn_output = attn_output.transpose(1, 2) - attn_output = attn_output.reshape(bsz, q_len, self.hidden_size) - - attn_output = self.o_proj(attn_output) - - if not output_attentions: - attn_weights = None - - return attn_output, attn_weights, past_key_value - - -class LlamaDecoderLayer(nn.Module): - def __init__(self, config: LlamaConfig): - super().__init__() - self.hidden_size = config.hidden_size - self.self_attn = LlamaAttention(config=config) - self.mlp = LlamaMLP( - hidden_size=self.hidden_size, - 
intermediate_size=config.intermediate_size, - hidden_act=config.hidden_act, - ) - self.input_layernorm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps) - self.post_attention_layernorm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps) - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.LongTensor] = None, - past_key_value: Optional[Tuple[torch.Tensor]] = None, - output_attentions: Optional[bool] = False, - use_cache: Optional[bool] = False, - ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]: - """ - Args: - hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` - attention_mask (`torch.FloatTensor`, *optional*): attention mask of size - `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under - returned tensors for more detail. - use_cache (`bool`, *optional*): - If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding - (see `past_key_values`). - past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states - """ - - residual = hidden_states - - hidden_states = self.input_layernorm(hidden_states) - - # Self Attention - hidden_states, self_attn_weights, present_key_value = self.self_attn( - hidden_states=hidden_states, - attention_mask=attention_mask, - position_ids=position_ids, - past_key_value=past_key_value, - output_attentions=output_attentions, - use_cache=use_cache, - ) - hidden_states = residual + hidden_states - - # Fully Connected - residual = hidden_states - hidden_states = self.post_attention_layernorm(hidden_states) - hidden_states = self.mlp(hidden_states) - hidden_states = residual + hidden_states - - outputs = (hidden_states,) - - if output_attentions: - outputs += (self_attn_weights,) - - if use_cache: - outputs += (present_key_value,) - - return outputs - - -LLAMA_START_DOCSTRING = r""" - This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads - etc.) - - This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. - Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage - and behavior. - - Parameters: - config ([`LlamaConfig`]): - Model configuration class with all the parameters of the model. Initializing with a config file does not - load the weights associated with the model, only the configuration. Check out the - [`~PreTrainedModel.from_pretrained`] method to load the model weights. 
-""" - - -@add_start_docstrings( - "The bare LLaMA Model outputting raw hidden-states without any specific head on top.", - LLAMA_START_DOCSTRING, -) -class LlamaPreTrainedModel(PreTrainedModel): - config_class = LlamaConfig - base_model_prefix = "model" - supports_gradient_checkpointing = True - _no_split_modules = ["LlamaDecoderLayer"] - _keys_to_ignore_on_load_unexpected = [r"decoder\.version"] - - def _init_weights(self, module): - std = self.config.initializer_range - if isinstance(module, nn.Linear): - module.weight.data.normal_(mean=0.0, std=std) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.Embedding): - module.weight.data.normal_(mean=0.0, std=std) - if module.padding_idx is not None: - module.weight.data[module.padding_idx].zero_() - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, LlamaModel): - module.gradient_checkpointing = value - - -LLAMA_INPUTS_DOCSTRING = r""" - Args: - input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): - Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide - it. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see - `past_key_values`). - - If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`] - and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more - information on the default strategy. - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.n_positions - 1]`. - - [What are position IDs?](../glossary#position-ids) - past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): - Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape - `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape - `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. - - Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention - blocks) that can be used (see `past_key_values` input) to speed up sequential decoding. - - If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that - don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all - `decoder_input_ids` of shape `(batch_size, sequence_length)`. 
- inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This - is useful if you want more control over how to convert `input_ids` indices into associated vectors than the - model's internal embedding lookup matrix. - use_cache (`bool`, *optional*): - If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see - `past_key_values`). - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. -""" - - -@add_start_docstrings( - "The bare LLaMA Model outputting raw hidden-states without any specific head on top.", - LLAMA_START_DOCSTRING, -) -class LlamaModel(LlamaPreTrainedModel): - """ - Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`LlamaDecoderLayer`] - - Args: - config: LlamaConfig - """ - - def __init__(self, config: LlamaConfig): - super().__init__(config) - self.padding_idx = config.pad_token_id - self.vocab_size = config.vocab_size - - self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx) - self.layers = nn.ModuleList([LlamaDecoderLayer(config) for _ in range(config.num_hidden_layers)]) - self.norm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps) - - self.gradient_checkpointing = False - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self): - return self.embed_tokens - - def set_input_embeddings(self, value): - self.embed_tokens = value - - # Copied from transformers.models.bart.modeling_bart.BartDecoder._prepare_decoder_attention_mask - def _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length): - # create causal mask - # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] - combined_attention_mask = None - if input_shape[-1] > 1: - combined_attention_mask = _make_causal_mask( - input_shape, - inputs_embeds.dtype, - device=inputs_embeds.device, - past_key_values_length=past_key_values_length, - ) - - if attention_mask is not None: - # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] - expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]).to( - inputs_embeds.device - ) - combined_attention_mask = ( - expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask - ) - - return combined_attention_mask - - @add_start_docstrings_to_model_forward(LLAMA_INPUTS_DOCSTRING) - def forward( - self, - input_ids: torch.LongTensor = None, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.LongTensor] = None, - past_key_values: Optional[List[torch.FloatTensor]] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - query_embeds: Optional[torch.FloatTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutputWithPast]: - output_attentions = 
output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - use_cache = use_cache if use_cache is not None else self.config.use_cache - - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - # retrieve input_ids and inputs_embeds - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time") - elif input_ids is not None: - batch_size, seq_length = input_ids.shape - elif inputs_embeds is not None: - batch_size, seq_length, _ = inputs_embeds.shape - else: - raise ValueError("You have to specify either decoder_input_ids or decoder_inputs_embeds") - - if inputs_embeds is None: - inputs_embeds = self.embed_tokens(input_ids) - if query_embeds is not None: - inputs_embeds = torch.cat([query_embeds, inputs_embeds], dim=1) - batch_size, seq_length, _ = inputs_embeds.shape - - seq_length_with_past = seq_length - past_key_values_length = 0 - - if past_key_values is not None: - past_key_values_length = past_key_values[0][0].shape[2] - seq_length_with_past = seq_length_with_past + past_key_values_length - - if position_ids is None: - device = input_ids.device if input_ids is not None else inputs_embeds.device - position_ids = torch.arange( - past_key_values_length, seq_length + past_key_values_length, dtype=torch.long, device=device - ) - position_ids = position_ids.unsqueeze(0).view(-1, seq_length) - else: - position_ids = position_ids.view(-1, seq_length).long() - - # embed positions - if attention_mask is None: - attention_mask = torch.ones( - (batch_size, seq_length_with_past), dtype=torch.bool, device=inputs_embeds.device - ) - attention_mask = self._prepare_decoder_attention_mask( - attention_mask, (batch_size, seq_length), inputs_embeds, past_key_values_length - ) - - hidden_states = inputs_embeds - - if self.gradient_checkpointing and self.training: - if use_cache: - logger.warning_once( - "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." 
- ) - use_cache = False - - # decoder layers - all_hidden_states = () if output_hidden_states else None - all_self_attns = () if output_attentions else None - next_decoder_cache = () if use_cache else None - - for idx, decoder_layer in enumerate(self.layers): - if output_hidden_states: - all_hidden_states += (hidden_states,) - - past_key_value = past_key_values[idx] if past_key_values is not None else None - - if self.gradient_checkpointing and self.training: - - def create_custom_forward(module): - def custom_forward(*inputs): - # None for past_key_value - return module(*inputs, output_attentions, None) - - return custom_forward - - layer_outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(decoder_layer), - hidden_states, - attention_mask, - position_ids, - None, - ) - else: - layer_outputs = decoder_layer( - hidden_states, - attention_mask=attention_mask, - position_ids=position_ids, - past_key_value=past_key_value, - output_attentions=output_attentions, - use_cache=use_cache, - ) - - hidden_states = layer_outputs[0] - - if use_cache: - next_decoder_cache += (layer_outputs[2 if output_attentions else 1],) - - if output_attentions: - all_self_attns += (layer_outputs[1],) - - hidden_states = self.norm(hidden_states) - - # add hidden states from the last decoder layer - if output_hidden_states: - all_hidden_states += (hidden_states,) - - next_cache = next_decoder_cache if use_cache else None - if not return_dict: - return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns] if v is not None) - return BaseModelOutputWithPast( - last_hidden_state=hidden_states, - past_key_values=next_cache, - hidden_states=all_hidden_states, - attentions=all_self_attns, - ) - - -class LlamaForCausalLM(LlamaPreTrainedModel): - def __init__(self, config): - super().__init__(config) - self.model = LlamaModel(config) - - self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False) - - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self): - return self.model.embed_tokens - - def set_input_embeddings(self, value): - self.model.embed_tokens = value - - def get_output_embeddings(self): - return self.lm_head - - def set_output_embeddings(self, new_embeddings): - self.lm_head = new_embeddings - - def set_decoder(self, decoder): - self.model = decoder - - def get_decoder(self): - return self.model - - @add_start_docstrings_to_model_forward(LLAMA_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=CausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC) - def forward( - self, - input_ids: torch.LongTensor = None, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.LongTensor] = None, - past_key_values: Optional[List[torch.FloatTensor]] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - query_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, CausalLMOutputWithPast]: - r""" - Args: - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., - config.vocab_size]` or -100 (see `input_ids` docstring). 
Tokens with indices set to `-100` are ignored - (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`. - - Returns: - - Example: - - ```python - >>> from transformers import AutoTokenizer, LlamaForCausalLM - - >>> model = LlamaForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS) - >>> tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER) - - >>> prompt = "Hey, are you consciours? Can you talk to me?" - >>> inputs = tokenizer(prompt, return_tensors="pt") - - >>> # Generate - >>> generate_ids = model.generate(inputs.input_ids, max_length=30) - >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] - "Hey, are you consciours? Can you talk to me?\nI'm not consciours, but I can talk to you." - ```""" - - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn) - outputs = self.model( - input_ids=input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - past_key_values=past_key_values, - inputs_embeds=inputs_embeds, - query_embeds=query_embeds, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - hidden_states = outputs[0] - logits = self.lm_head(hidden_states) - - loss = None - if labels is not None: - # Shift so that tokens < n predict n - shift_logits = logits[..., :-1, :].contiguous() - shift_labels = labels[..., 1:].contiguous() - # Flatten the tokens - loss_fct = CrossEntropyLoss() - shift_logits = shift_logits.view(-1, self.config.vocab_size) - shift_labels = shift_labels.view(-1) - # Enable model parallelism - shift_labels = shift_labels.to(shift_logits.device) - loss = loss_fct(shift_logits, shift_labels) - - if not return_dict: - output = (logits,) + outputs[1:] - return (loss,) + output if loss is not None else output - - return CausalLMOutputWithPast( - loss=loss, - logits=logits, - past_key_values=outputs.past_key_values, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - def prepare_inputs_for_generation( - self, input_ids, query_embeds=None, past_key_values=None, attention_mask=None, inputs_embeds=None, **kwargs - ): - if past_key_values: - input_ids = input_ids[:, -1:] - - position_ids = kwargs.get("position_ids", None) - if attention_mask is not None and position_ids is None: - # create position_ids on the fly for batch generation - position_ids = attention_mask.long().cumsum(-1) - 1 - position_ids.masked_fill_(attention_mask == 0, 1) - if past_key_values: - position_ids = position_ids[:, -1].unsqueeze(-1) - query_embeds = None - - # if `inputs_embeds` are passed, we only want to use them in the 1st generation step - if inputs_embeds is not None and past_key_values is None: - model_inputs = {"inputs_embeds": inputs_embeds} - else: - model_inputs = {"input_ids": input_ids} - - model_inputs.update( - { - "position_ids": position_ids, - "query_embeds": query_embeds, - "past_key_values": past_key_values, - "use_cache": kwargs.get("use_cache"), - "attention_mask": attention_mask, - } - ) - return model_inputs - - @staticmethod - def _reorder_cache(past_key_values, beam_idx): - reordered_past = () - 
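        # generate() calls this hook during beam search: beam_idx records which hypothesis now occupies
        # each batch slot, and index_select(0, beam_idx) below applies that reordering to every layer's
        # cached key/value tensors (batch-first), so each beam keeps attending to its own history.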
for layer_past in past_key_values: - reordered_past += (tuple(past_state.index_select(0, beam_idx) for past_state in layer_past),) - return reordered_past - diff --git a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/callbacks/tensorboard.py b/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/callbacks/tensorboard.py deleted file mode 100644 index a24d0799a74c3eb03313ad2ddbd2cb916803d6de..0000000000000000000000000000000000000000 --- a/spaces/XS-1/BW_IMAGE_VIDEO_COLORIZER/fastai/callbacks/tensorboard.py +++ /dev/null @@ -1,427 +0,0 @@ -"Provides convenient callbacks for Learners that write model images, metrics/losses, stats and histograms to Tensorboard" -from ..basic_train import Learner -from ..basic_data import DatasetType, DataBunch -from ..vision import Image -from ..vision.gan import GANLearner -from ..callbacks import LearnerCallback -from ..core import * -from ..torch_core import * -from threading import Thread, Event -from time import sleep -from queue import Queue -import statistics -import torchvision.utils as vutils -from abc import ABC -#This is an optional dependency in fastai. Must install separately. -try: from tensorboardX import SummaryWriter -except: print("To use this tracker, please run 'pip install tensorboardx'. Also you must have Tensorboard running to see results") - -__all__=['LearnerTensorboardWriter', 'GANTensorboardWriter', 'ImageGenTensorboardWriter'] - -#---Example usage (applies to any of the callbacks)--- -# proj_id = 'Colorize' -# tboard_path = Path('data/tensorboard/' + proj_id) -# learn.callback_fns.append(partial(GANTensorboardWriter, base_dir=tboard_path, name='GanLearner')) - -class LearnerTensorboardWriter(LearnerCallback): - "Broadly useful callback for Learners that writes to Tensorboard. Writes model histograms, losses/metrics, and gradient stats." - def __init__(self, learn:Learner, base_dir:Path, name:str, loss_iters:int=25, hist_iters:int=500, stats_iters:int=100): - super().__init__(learn=learn) - self.base_dir,self.name,self.loss_iters,self.hist_iters,self.stats_iters = base_dir,name,loss_iters,hist_iters,stats_iters - log_dir = base_dir/name - self.tbwriter = SummaryWriter(str(log_dir)) - self.hist_writer = HistogramTBWriter() - self.stats_writer = ModelStatsTBWriter() - #self.graph_writer = GraphTBWriter() - self.data = None - self.metrics_root = '/metrics/' - self._update_batches_if_needed() - - def _get_new_batch(self, ds_type:DatasetType)->Collection[Tensor]: - "Retrieves new batch of DatasetType, and detaches it." - return self.learn.data.one_batch(ds_type=ds_type, detach=True, denorm=False, cpu=False) - - def _update_batches_if_needed(self)->None: - "one_batch function is extremely slow with large datasets. This is caching the result as an optimization." - if self.learn.data.valid_dl is None: return # Running learning rate finder, so return - update_batches = self.data is not self.learn.data - if not update_batches: return - self.data = self.learn.data - self.trn_batch = self._get_new_batch(ds_type=DatasetType.Train) - self.val_batch = self._get_new_batch(ds_type=DatasetType.Valid) - - def _write_model_stats(self, iteration:int)->None: - "Writes gradient statistics to Tensorboard." - self.stats_writer.write(model=self.learn.model, iteration=iteration, tbwriter=self.tbwriter) - - def _write_training_loss(self, iteration:int, last_loss:Tensor)->None: - "Writes training loss to Tensorboard." 
- scalar_value = to_np(last_loss) - tag = self.metrics_root + 'train_loss' - self.tbwriter.add_scalar(tag=tag, scalar_value=scalar_value, global_step=iteration) - - def _write_weight_histograms(self, iteration:int)->None: - "Writes model weight histograms to Tensorboard." - self.hist_writer.write(model=self.learn.model, iteration=iteration, tbwriter=self.tbwriter) - - def _write_scalar(self, name:str, scalar_value, iteration:int)->None: - "Writes single scalar value to Tensorboard." - tag = self.metrics_root + name - self.tbwriter.add_scalar(tag=tag, scalar_value=scalar_value, global_step=iteration) - - #TODO: Relying on a specific hardcoded start_idx here isn't great. Is there a better solution? - def _write_metrics(self, iteration:int, last_metrics:MetricsList, start_idx:int=2)->None: - "Writes training metrics to Tensorboard." - recorder = self.learn.recorder - for i, name in enumerate(recorder.names[start_idx:]): - if last_metrics is None or len(last_metrics) < i+1: return - scalar_value = last_metrics[i] - self._write_scalar(name=name, scalar_value=scalar_value, iteration=iteration) - - def on_train_begin(self, **kwargs: Any) -> None: - #self.graph_writer.write(model=self.learn.model, tbwriter=self.tbwriter, - #input_to_model=next(iter(self.learn.data.dl(DatasetType.Single)))[0]) - return - - def on_batch_end(self, last_loss:Tensor, iteration:int, **kwargs)->None: - "Callback function that writes batch end appropriate data to Tensorboard." - if iteration == 0: return - self._update_batches_if_needed() - if iteration % self.loss_iters == 0: self._write_training_loss(iteration=iteration, last_loss=last_loss) - if iteration % self.hist_iters == 0: self._write_weight_histograms(iteration=iteration) - - # Doing stuff here that requires gradient info, because they get zeroed out afterwards in training loop - def on_backward_end(self, iteration:int, **kwargs)->None: - "Callback function that writes backward end appropriate data to Tensorboard." - if iteration == 0: return - self._update_batches_if_needed() - if iteration % self.stats_iters == 0: self._write_model_stats(iteration=iteration) - - def on_epoch_end(self, last_metrics:MetricsList, iteration:int, **kwargs)->None: - "Callback function that writes epoch end appropriate data to Tensorboard." - self._write_metrics(iteration=iteration, last_metrics=last_metrics) - -# TODO: We're overriding almost everything here. Seems like a good idea to question that ("is a" vs "has a") -class GANTensorboardWriter(LearnerTensorboardWriter): - "Callback for GANLearners that writes to Tensorboard. Extends LearnerTensorboardWriter and adds output image writes." - def __init__(self, learn:GANLearner, base_dir:Path, name:str, loss_iters:int=25, hist_iters:int=500, - stats_iters:int=100, visual_iters:int=100): - super().__init__(learn=learn, base_dir=base_dir, name=name, loss_iters=loss_iters, hist_iters=hist_iters, stats_iters=stats_iters) - self.visual_iters = visual_iters - self.img_gen_vis = ImageTBWriter() - self.gen_stats_updated = True - self.crit_stats_updated = True - - def _write_weight_histograms(self, iteration:int)->None: - "Writes model weight histograms to Tensorboard." 
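        # A GANLearner wraps two separate sub-models, so unlike the base class (which logs a single
        # 'model' histogram set) this override writes one set per module under the 'generator' and
        # 'critic' tags.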
- generator, critic = self.learn.gan_trainer.generator, self.learn.gan_trainer.critic - self.hist_writer.write(model=generator, iteration=iteration, tbwriter=self.tbwriter, name='generator') - self.hist_writer.write(model=critic, iteration=iteration, tbwriter=self.tbwriter, name='critic') - - def _write_gen_model_stats(self, iteration:int)->None: - "Writes gradient statistics for generator to Tensorboard." - generator = self.learn.gan_trainer.generator - self.stats_writer.write(model=generator, iteration=iteration, tbwriter=self.tbwriter, name='gen_model_stats') - self.gen_stats_updated = True - - def _write_critic_model_stats(self, iteration:int)->None: - "Writes gradient statistics for critic to Tensorboard." - critic = self.learn.gan_trainer.critic - self.stats_writer.write(model=critic, iteration=iteration, tbwriter=self.tbwriter, name='crit_model_stats') - self.crit_stats_updated = True - - def _write_model_stats(self, iteration:int)->None: - "Writes gradient statistics to Tensorboard." - # We don't want to write stats when model is not iterated on and hence has zeroed out gradients - gen_mode = self.learn.gan_trainer.gen_mode - if gen_mode and not self.gen_stats_updated: self._write_gen_model_stats(iteration=iteration) - if not gen_mode and not self.crit_stats_updated: self._write_critic_model_stats(iteration=iteration) - - def _write_training_loss(self, iteration:int, last_loss:Tensor)->None: - "Writes training loss to Tensorboard." - recorder = self.learn.gan_trainer.recorder - if len(recorder.losses) == 0: return - scalar_value = to_np((recorder.losses[-1:])[0]) - tag = self.metrics_root + 'train_loss' - self.tbwriter.add_scalar(tag=tag, scalar_value=scalar_value, global_step=iteration) - - def _write_images(self, iteration:int)->None: - "Writes model generated, original and real images to Tensorboard." - trainer = self.learn.gan_trainer - #TODO: Switching gen_mode temporarily seems a bit hacky here. Certainly not a good side-effect. Is there a better way? - gen_mode = trainer.gen_mode - try: - trainer.switch(gen_mode=True) - self.img_gen_vis.write(learn=self.learn, trn_batch=self.trn_batch, val_batch=self.val_batch, - iteration=iteration, tbwriter=self.tbwriter) - finally: trainer.switch(gen_mode=gen_mode) - - def on_batch_end(self, iteration:int, **kwargs)->None: - "Callback function that writes batch end appropriate data to Tensorboard." - super().on_batch_end(iteration=iteration, **kwargs) - if iteration == 0: return - if iteration % self.visual_iters == 0: self._write_images(iteration=iteration) - - def on_backward_end(self, iteration:int, **kwargs)->None: - "Callback function that writes backward end appropriate data to Tensorboard." - if iteration == 0: return - self._update_batches_if_needed() - #TODO: This could perhaps be implemented as queues of requests instead but that seemed like overkill. - # But I'm not the biggest fan of maintaining these boolean flags either... Review pls. - if iteration % self.stats_iters == 0: self.gen_stats_updated, self.crit_stats_updated = False, False - if not (self.gen_stats_updated and self.crit_stats_updated): self._write_model_stats(iteration=iteration) - -class ImageGenTensorboardWriter(LearnerTensorboardWriter): - "Callback for non-GAN image generating Learners that writes to Tensorboard. Extends LearnerTensorboardWriter and adds output image writes." 
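    # visual_iters sets how often (in training iterations) a grid of original, generated and real (target)
    # images is pushed to Tensorboard, on top of the loss_iters/hist_iters/stats_iters intervals inherited
    # from LearnerTensorboardWriter.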
- def __init__(self, learn:Learner, base_dir:Path, name:str, loss_iters:int=25, hist_iters:int=500, stats_iters:int=100, - visual_iters:int=100): - super().__init__(learn=learn, base_dir=base_dir, name=name, loss_iters=loss_iters, hist_iters=hist_iters, - stats_iters=stats_iters) - self.visual_iters = visual_iters - self.img_gen_vis = ImageTBWriter() - - def _write_images(self, iteration:int)->None: - "Writes model generated, original and real images to Tensorboard" - self.img_gen_vis.write(learn=self.learn, trn_batch=self.trn_batch, val_batch=self.val_batch, iteration=iteration, - tbwriter=self.tbwriter) - - def on_batch_end(self, iteration:int, **kwargs)->None: - "Callback function that writes batch end appropriate data to Tensorboard." - super().on_batch_end(iteration=iteration, **kwargs) - if iteration == 0: return - if iteration % self.visual_iters == 0: - self._write_images(iteration=iteration) - -class TBWriteRequest(ABC): - "A request object for Tensorboard writes. Useful for queuing up and executing asynchronous writes." - def __init__(self, tbwriter: SummaryWriter, iteration:int): - super().__init__() - self.tbwriter = tbwriter - self.iteration = iteration - - @abstractmethod - def write(self)->None: pass - -# SummaryWriter writes tend to block quite a bit. This gets around that and greatly boosts performance. -# Not all tensorboard writes are using this- just the ones that take a long time. Note that the -# SummaryWriter does actually use a threadsafe consumer/producer design ultimately to write to Tensorboard, -# so writes done outside of this async loop should be fine. -class AsyncTBWriter(): - "Callback for GANLearners that writes to Tensorboard. Extends LearnerTensorboardWriter and adds output image writes." - def __init__(self): - super().__init__() - self.stop_request = Event() - self.queue = Queue() - self.thread = Thread(target=self._queue_processor, daemon=True) - self.thread.start() - - def request_write(self, request: TBWriteRequest)->None: - "Queues up an asynchronous write request to Tensorboard." - if self.stop_request.isSet(): return - self.queue.put(request) - - def _queue_processor(self)->None: - "Processes queued up write requests asynchronously to Tensorboard." - while not self.stop_request.isSet(): - while not self.queue.empty(): - if self.stop_request.isSet(): return - request = self.queue.get() - request.write() - sleep(0.2) - - #Provided this to stop thread explicitly or by context management (with statement) but thread should end on its own - # upon program exit, due to being a daemon. So using this is probably unecessary. - def close(self)->None: - "Stops asynchronous request queue processing thread." - self.stop_request.set() - self.thread.join() - - # Nothing to do, thread already started. Could start thread here to enforce use of context manager - # (but that sounds like a pain and a bit unweildy and unecessary for actual usage) - def __enter__(self): pass - - def __exit__(self, exc_type, exc_value, traceback): self.close() - -asyncTBWriter = AsyncTBWriter() - -class ModelImageSet(): - "Convenience object that holds the original, real(target) and generated versions of a single image fed to a model." - @staticmethod - def get_list_from_model(learn:Learner, ds_type:DatasetType, batch:Tuple)->[]: - "Factory method to convert a batch of model images to a list of ModelImageSet." 
- image_sets = [] - x,y = batch[0],batch[1] - preds=[] - preds = learn.pred_batch(ds_type=ds_type, batch=(x,y), reconstruct=True) - for orig_px, real_px, gen in zip(x,y,preds): - orig, real = Image(px=orig_px), Image(px=real_px) - image_set = ModelImageSet(orig=orig, real=real, gen=gen) - image_sets.append(image_set) - return image_sets - - def __init__(self, orig:Image, real:Image, gen:Image): self.orig, self.real, self.gen = orig, real, gen - -class HistogramTBRequest(TBWriteRequest): - "Request object for model histogram writes to Tensorboard." - def __init__(self, model:nn.Module, iteration:int, tbwriter:SummaryWriter, name:str): - super().__init__(tbwriter=tbwriter, iteration=iteration) - self.params = [(name, values.clone().detach().cpu()) for (name, values) in model.named_parameters()] - self.name = name - - def _write_histogram(self, param_name:str, values)->None: - "Writes single model histogram to Tensorboard." - tag = self.name + '/weights/' + param_name - self.tbwriter.add_histogram(tag=tag, values=values, global_step=self.iteration) - - def write(self)->None: - "Writes model histograms to Tensorboard." - for param_name, values in self.params: self._write_histogram(param_name=param_name, values=values) - -#If this isn't done async then this is sloooooow -class HistogramTBWriter(): - "Writes model histograms to Tensorboard." - def __init__(self): super().__init__() - - def write(self, model:nn.Module, iteration:int, tbwriter:SummaryWriter, name:str='model')->None: - "Writes model histograms to Tensorboard." - request = HistogramTBRequest(model=model, iteration=iteration, tbwriter=tbwriter, name=name) - asyncTBWriter.request_write(request) - -class ModelStatsTBRequest(TBWriteRequest): - "Request object for model gradient statistics writes to Tensorboard." - def __init__(self, model:nn.Module, iteration:int, tbwriter:SummaryWriter, name:str): - super().__init__(tbwriter=tbwriter, iteration=iteration) - self.gradients = [x.grad.clone().detach().cpu() for x in model.parameters() if x.grad is not None] - self.name = name - - def _add_gradient_scalar(self, name:str, scalar_value)->None: - "Writes a single scalar value for a gradient statistic to Tensorboard." - tag = self.name + '/gradients/' + name - self.tbwriter.add_scalar(tag=tag, scalar_value=scalar_value, global_step=self.iteration) - - def _write_avg_norm(self, norms:[])->None: - "Writes the average norm of the gradients to Tensorboard." - avg_norm = sum(norms)/len(self.gradients) - self._add_gradient_scalar('avg_norm', scalar_value=avg_norm) - - def _write_median_norm(self, norms:[])->None: - "Writes the median norm of the gradients to Tensorboard." - median_norm = statistics.median(norms) - self._add_gradient_scalar('median_norm', scalar_value=median_norm) - - def _write_max_norm(self, norms:[])->None: - "Writes the maximum norm of the gradients to Tensorboard." - max_norm = max(norms) - self._add_gradient_scalar('max_norm', scalar_value=max_norm) - - def _write_min_norm(self, norms:[])->None: - "Writes the minimum norm of the gradients to Tensorboard." - min_norm = min(norms) - self._add_gradient_scalar('min_norm', scalar_value=min_norm) - - def _write_num_zeros(self)->None: - "Writes the number of zeroes in the gradients to Tensorboard." - gradient_nps = [to_np(x.data) for x in self.gradients] - num_zeros = sum((np.asarray(x) == 0.0).sum() for x in gradient_nps) - self._add_gradient_scalar('num_zeros', scalar_value=num_zeros) - - def _write_avg_gradient(self)->None: - "Writes the average of the gradients to Tensorboard." 
- avg_gradient = sum(x.data.mean() for x in self.gradients)/len(self.gradients) - self._add_gradient_scalar('avg_gradient', scalar_value=avg_gradient) - - def _write_median_gradient(self)->None: - "Writes the median of the gradients to Tensorboard." - median_gradient = statistics.median(x.data.median() for x in self.gradients) - self._add_gradient_scalar('median_gradient', scalar_value=median_gradient) - - def _write_max_gradient(self)->None: - "Writes the maximum of the gradients to Tensorboard." - max_gradient = max(x.data.max() for x in self.gradients) - self._add_gradient_scalar('max_gradient', scalar_value=max_gradient) - - def _write_min_gradient(self)->None: - "Writes the minimum of the gradients to Tensorboard." - min_gradient = min(x.data.min() for x in self.gradients) - self._add_gradient_scalar('min_gradient', scalar_value=min_gradient) - - def write(self)->None: - "Writes model gradient statistics to Tensorboard." - if len(self.gradients) == 0: return - norms = [x.data.norm() for x in self.gradients] - self._write_avg_norm(norms=norms) - self._write_median_norm(norms=norms) - self._write_max_norm(norms=norms) - self._write_min_norm(norms=norms) - self._write_num_zeros() - self._write_avg_gradient() - self._write_median_gradient() - self._write_max_gradient() - self._write_min_gradient() - -class ModelStatsTBWriter(): - "Writes model gradient statistics to Tensorboard." - def write(self, model:nn.Module, iteration:int, tbwriter:SummaryWriter, name:str='model_stats')->None: - "Writes model gradient statistics to Tensorboard." - request = ModelStatsTBRequest(model=model, iteration=iteration, tbwriter=tbwriter, name=name) - asyncTBWriter.request_write(request) - -class ImageTBRequest(TBWriteRequest): - "Request object for model image output writes to Tensorboard." - def __init__(self, learn:Learner, batch:Tuple, iteration:int, tbwriter:SummaryWriter, ds_type:DatasetType): - super().__init__(tbwriter=tbwriter, iteration=iteration) - self.image_sets = ModelImageSet.get_list_from_model(learn=learn, batch=batch, ds_type=ds_type) - self.ds_type = ds_type - - def _write_images(self, name:str, images:[Tensor])->None: - "Writes list of images as tensors to Tensorboard." - tag = self.ds_type.name + ' ' + name - self.tbwriter.add_image(tag=tag, img_tensor=vutils.make_grid(images, normalize=True), global_step=self.iteration) - - def _get_image_tensors(self)->([Tensor], [Tensor], [Tensor]): - "Gets list of image tensors from lists of Image objects, as a tuple of original, generated and real(target) images." - orig_images, gen_images, real_images = [], [], [] - for image_set in self.image_sets: - orig_images.append(image_set.orig.px) - gen_images.append(image_set.gen.px) - real_images.append(image_set.real.px) - return orig_images, gen_images, real_images - - def write(self)->None: - "Writes original, generated and real(target) images to Tensorboard." - orig_images, gen_images, real_images = self._get_image_tensors() - self._write_images(name='orig images', images=orig_images) - self._write_images(name='gen images', images=gen_images) - self._write_images(name='real images', images=real_images) - -#If this isn't done async then this is noticeably slower -class ImageTBWriter(): - "Writes model image output to Tensorboard." - def __init__(self): super().__init__() - - def write(self, learn:Learner, trn_batch:Tuple, val_batch:Tuple, iteration:int, tbwriter:SummaryWriter)->None: - "Writes training and validation batch images to Tensorboard." 
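        # Both calls below just build an ImageTBRequest (one per dataset split) and enqueue it on
        # asyncTBWriter, so the comparatively slow SummaryWriter.add_image work runs on the daemon thread
        # instead of blocking the training loop.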
- self._write_for_dstype(learn=learn, batch=val_batch, iteration=iteration, tbwriter=tbwriter, ds_type=DatasetType.Valid) - self._write_for_dstype(learn=learn, batch=trn_batch, iteration=iteration, tbwriter=tbwriter, ds_type=DatasetType.Train) - - def _write_for_dstype(self, learn:Learner, batch:Tuple, iteration:int, tbwriter:SummaryWriter, ds_type:DatasetType)->None: - "Writes batch images of specified DatasetType to Tensorboard." - request = ImageTBRequest(learn=learn, batch=batch, iteration=iteration, tbwriter=tbwriter, ds_type=ds_type) - asyncTBWriter.request_write(request) - -class GraphTBRequest(TBWriteRequest): - "Request object for model histogram writes to Tensorboard." - def __init__(self, model:nn.Module, tbwriter:SummaryWriter, input_to_model:torch.Tensor): - super().__init__(tbwriter=tbwriter, iteration=0) - self.model,self.input_to_model = model,input_to_model - - def write(self)->None: - "Writes single model graph to Tensorboard." - self.tbwriter.add_graph(model=self.model, input_to_model=self.input_to_model) - -class GraphTBWriter(): - "Writes model network graph to Tensorboard." - def write(self, model:nn.Module, tbwriter:SummaryWriter, input_to_model:torch.Tensor)->None: - "Writes model graph to Tensorboard." - request = GraphTBRequest(model=model, tbwriter=tbwriter, input_to_model=input_to_model) - asyncTBWriter.request_write(request) diff --git a/spaces/Xenova/doodle-dash/assets/worker-2da0101e.js b/spaces/Xenova/doodle-dash/assets/worker-2da0101e.js deleted file mode 100644 index 8d45185afc591e2fbda5485c1677950663fe49d4..0000000000000000000000000000000000000000 --- a/spaces/Xenova/doodle-dash/assets/worker-2da0101e.js +++ /dev/null @@ -1,1790 +0,0 @@ -var fn=Object.defineProperty;var gn=(nt,y,n)=>y in nt?fn(nt,y,{enumerable:!0,configurable:!0,writable:!0,value:n}):nt[y]=n;var Se=(nt,y,n)=>(gn(nt,typeof y!="symbol"?y+"":y,n),n);(function(){var nt;"use strict";function _mergeNamespaces(y,n){return n.forEach(function(a){a&&typeof a!="string"&&!Array.isArray(a)&&Object.keys(a).forEach(function(u){if(u!=="default"&&!(u in y)){var c=Object.getOwnPropertyDescriptor(a,u);Object.defineProperty(y,u,c.get?c:{enumerable:!0,get:function(){return a[u]}})}})}),Object.freeze(y)}function mobileTabletCheck(){let y=!1;return function(n){(/(android|bb\d+|meego).+mobile|avantgo|bada\/|blackberry|blazer|compal|elaine|fennec|hiptop|iemobile|ip(hone|od)|iris|kindle|lge |maemo|midp|mmp|mobile.+firefox|netfront|opera m(ob|in)i|palm( os)?|phone|p(ixi|re)\/|plucker|pocket|psp|series(4|6)0|symbian|treo|up\.(browser|link)|vodafone|wap|windows ce|xda|xiino|android|ipad|playbook|silk/i.test(n)||/1207|6310|6590|3gso|4thp|50[1-6]i|770s|802s|a wa|abac|ac(er|oo|s-)|ai(ko|rn)|al(av|ca|co)|amoi|an(ex|ny|yw)|aptu|ar(ch|go)|as(te|us)|attw|au(di|-m|r |s )|avan|be(ck|ll|nq)|bi(lb|rd)|bl(ac|az)|br(e|v)w|bumb|bw-(n|u)|c55\/|capi|ccwa|cdm-|cell|chtm|cldc|cmd-|co(mp|nd)|craw|da(it|ll|ng)|dbte|dc-s|devi|dica|dmob|do(c|p)o|ds(12|-d)|el(49|ai)|em(l2|ul)|er(ic|k0)|esl8|ez([4-7]0|os|wa|ze)|fetc|fly(-|_)|g1 u|g560|gene|gf-5|g-mo|go(\.w|od)|gr(ad|un)|haie|hcit|hd-(m|p|t)|hei-|hi(pt|ta)|hp( i|ip)|hs-c|ht(c(-| |_|a|g|p|s|t)|tp)|hu(aw|tc)|i-(20|go|ma)|i230|iac( |-|\/)|ibro|idea|ig01|ikom|im1k|inno|ipaq|iris|ja(t|v)a|jbro|jemu|jigs|kddi|keji|kgt( |\/)|klon|kpt |kwc-|kyo(c|k)|le(no|xi)|lg( g|\/(k|l|u)|50|54|-[a-w])|libw|lynx|m1-w|m3ga|m50\/|ma(te|ui|xo)|mc(01|21|ca)|m-cr|me(rc|ri)|mi(o8|oa|ts)|mmef|mo(01|02|bi|de|do|t(-| |o|v)|zz)|mt(50|p1|v 
)|mwbp|mywa|n10[0-2]|n20[2-3]|n30(0|2)|n50(0|2|5)|n7(0(0|1)|10)|ne((c|m)-|on|tf|wf|wg|wt)|nok(6|i)|nzph|o2im|op(ti|wv)|oran|owg1|p800|pan(a|d|t)|pdxg|pg(13|-([1-8]|c))|phil|pire|pl(ay|uc)|pn-2|po(ck|rt|se)|prox|psio|pt-g|qa-a|qc(07|12|21|32|60|-[2-7]|i-)|qtek|r380|r600|raks|rim9|ro(ve|zo)|s55\/|sa(ge|ma|mm|ms|ny|va)|sc(01|h-|oo|p-)|sdk\/|se(c(-|0|1)|47|mc|nd|ri)|sgh-|shar|sie(-|m)|sk-0|sl(45|id)|sm(al|ar|b3|it|t5)|so(ft|ny)|sp(01|h-|v-|v )|sy(01|mb)|t2(18|50)|t6(00|10|18)|ta(gt|lk)|tcl-|tdg-|tel(i|m)|tim-|t-mo|to(pl|sh)|ts(70|m-|m3|m5)|tx-9|up(\.b|g1|si)|utst|v400|v750|veri|vi(rg|te)|vk(40|5[0-3]|-v)|vm40|voda|vulc|vx(52|53|60|61|70|80|81|83|85|98)|w3c(-| )|webc|whit|wi(g |nc|nw)|wmlb|wonu|x700|yas-|your|zeto|zte-/i.test(n.substr(0,4)))&&(y=!0)}(navigator.userAgent||navigator.vendor||("opera"in window&&typeof window.opera=="string"?window.opera:"")),y}const IS_MOBILE=mobileTabletCheck();var constants={DEFAULT_MODEL:"quickdraw-mobilevit-small",DEFAULT_QUANTIZED:!1,BANNED_LABELS:["animal migration","arm","barn","bat","brain","coffee cup","circle","hexagon","stitches","sweather","van"],PREDICTION_REFRESH_TIME:10,BRUSH_SIZE:IS_MOBILE?12:16,TARGET_FPS:60,GAME_DURATION:60+.5,COUNTDOWN_TIMER:3,START_REJECT_THRESHOLD:.2,REJECT_TIME_DELAY:3*1e3,REJECT_TIME_PER_LABEL:3*1e3,SKIP_PENALTY:3*1e3,LABELS:{0:"aircraft carrier",1:"airplane",2:"alarm clock",3:"ambulance",4:"angel",5:"animal migration",6:"ant",7:"anvil",8:"apple",9:"arm",10:"asparagus",11:"axe",12:"backpack",13:"banana",14:"bandage",15:"barn",16:"baseball bat",17:"baseball",18:"basket",19:"basketball",20:"bat",21:"bathtub",22:"beach",23:"bear",24:"beard",25:"bed",26:"bee",27:"belt",28:"bench",29:"bicycle",30:"binoculars",31:"bird",32:"birthday cake",33:"blackberry",34:"blueberry",35:"book",36:"boomerang",37:"bottlecap",38:"bowtie",39:"bracelet",40:"brain",41:"bread",42:"bridge",43:"broccoli",44:"broom",45:"bucket",46:"bulldozer",47:"bus",48:"bush",49:"butterfly",50:"cactus",51:"cake",52:"calculator",53:"calendar",54:"camel",55:"camera",56:"camouflage",57:"campfire",58:"candle",59:"cannon",60:"canoe",61:"car",62:"carrot",63:"castle",64:"cat",65:"ceiling fan",66:"cell phone",67:"cello",68:"chair",69:"chandelier",70:"church",71:"circle",72:"clarinet",73:"clock",74:"cloud",75:"coffee cup",76:"compass",77:"computer",78:"cookie",79:"cooler",80:"couch",81:"cow",82:"crab",83:"crayon",84:"crocodile",85:"crown",86:"cruise ship",87:"cup",88:"diamond",89:"dishwasher",90:"diving board",91:"dog",92:"dolphin",93:"donut",94:"door",95:"dragon",96:"dresser",97:"drill",98:"drums",99:"duck",100:"dumbbell",101:"ear",102:"elbow",103:"elephant",104:"envelope",105:"eraser",106:"eye",107:"eyeglasses",108:"face",109:"fan",110:"feather",111:"fence",112:"finger",113:"fire hydrant",114:"fireplace",115:"firetruck",116:"fish",117:"flamingo",118:"flashlight",119:"flip flops",120:"floor lamp",121:"flower",122:"flying saucer",123:"foot",124:"fork",125:"frog",126:"frying pan",127:"garden hose",128:"garden",129:"giraffe",130:"goatee",131:"golf club",132:"grapes",133:"grass",134:"guitar",135:"hamburger",136:"hammer",137:"hand",138:"harp",139:"hat",140:"headphones",141:"hedgehog",142:"helicopter",143:"helmet",144:"hexagon",145:"hockey puck",146:"hockey stick",147:"horse",148:"hospital",149:"hot air balloon",150:"hot dog",151:"hot tub",152:"hourglass",153:"house plant",154:"house",155:"hurricane",156:"ice cream",157:"jacket",158:"jail",159:"kangaroo",160:"key",161:"keyboard",162:"knee",163:"knife",164:"ladder",165:"lantern",166:"laptop",167:"leaf",168:"leg",169:"light 
bulb",170:"lighter",171:"lighthouse",172:"lightning",173:"line",174:"lion",175:"lipstick",176:"lobster",177:"lollipop",178:"mailbox",179:"map",180:"marker",181:"matches",182:"megaphone",183:"mermaid",184:"microphone",185:"microwave",186:"monkey",187:"moon",188:"mosquito",189:"motorbike",190:"mountain",191:"mouse",192:"moustache",193:"mouth",194:"mug",195:"mushroom",196:"nail",197:"necklace",198:"nose",199:"ocean",200:"octagon",201:"octopus",202:"onion",203:"oven",204:"owl",205:"paint can",206:"paintbrush",207:"palm tree",208:"panda",209:"pants",210:"paper clip",211:"parachute",212:"parrot",213:"passport",214:"peanut",215:"pear",216:"peas",217:"pencil",218:"penguin",219:"piano",220:"pickup truck",221:"picture frame",222:"pig",223:"pillow",224:"pineapple",225:"pizza",226:"pliers",227:"police car",228:"pond",229:"pool",230:"popsicle",231:"postcard",232:"potato",233:"power outlet",234:"purse",235:"rabbit",236:"raccoon",237:"radio",238:"rain",239:"rainbow",240:"rake",241:"remote control",242:"rhinoceros",243:"rifle",244:"river",245:"roller coaster",246:"rollerskates",247:"sailboat",248:"sandwich",249:"saw",250:"saxophone",251:"school bus",252:"scissors",253:"scorpion",254:"screwdriver",255:"sea turtle",256:"see saw",257:"shark",258:"sheep",259:"shoe",260:"shorts",261:"shovel",262:"sink",263:"skateboard",264:"skull",265:"skyscraper",266:"sleeping bag",267:"smiley face",268:"snail",269:"snake",270:"snorkel",271:"snowflake",272:"snowman",273:"soccer ball",274:"sock",275:"speedboat",276:"spider",277:"spoon",278:"spreadsheet",279:"square",280:"squiggle",281:"squirrel",282:"stairs",283:"star",284:"steak",285:"stereo",286:"stethoscope",287:"stitches",288:"stop sign",289:"stove",290:"strawberry",291:"streetlight",292:"string bean",293:"submarine",294:"suitcase",295:"sun",296:"swan",297:"sweater",298:"swing set",299:"sword",300:"syringe",301:"t-shirt",302:"table",303:"teapot",304:"teddy-bear",305:"telephone",306:"television",307:"tennis racquet",308:"tent",309:"The Eiffel Tower",310:"The Great Wall of China",311:"The Mona Lisa",312:"tiger",313:"toaster",314:"toe",315:"toilet",316:"tooth",317:"toothbrush",318:"toothpaste",319:"tornado",320:"tractor",321:"traffic light",322:"train",323:"tree",324:"triangle",325:"trombone",326:"truck",327:"trumpet",328:"umbrella",329:"underwear",330:"van",331:"vase",332:"violin",333:"washing machine",334:"watermelon",335:"waterslide",336:"whale",337:"wheel",338:"windmill",339:"wine bottle",340:"wine glass",341:"wristwatch",342:"yoga",343:"zebra",344:"zigzag"}};function dispatchCallback(y,n){y!==null&&y(n)}function reverseDictionary(y){return Object.fromEntries(Object.entries(y).map(([n,a])=>[a,n]))}function escapeRegExp(y){return y.replace(/[.*+?^${}()|[\]\\]/g,"\\$&")}const Callable=class{constructor(){let y=function(...n){return y._call(...n)};return Object.setPrototypeOf(y,new.target.prototype)}_call(...y){throw Error("Must implement _call method in subclass")}};function isString(y){return typeof y=="string"||y instanceof String}function isTypedArray(y){var n,a,u;return((u=(a=(n=y==null?void 0:y.prototype)==null?void 0:n.__proto__)==null?void 0:a.constructor)==null?void 0:u.name)==="TypedArray"}function isIntegralNumber(y){return Number.isInteger(y)||typeof y=="bigint"}function exists(y){return y!=null}function calculateDimensions(y){const n=[];let a=y;for(;Array.isArray(a);)n.push(a.length),a=a[0];return n}function pop(y,n,a=void 0){const u=y[n];if(u!==void 0)return delete y[n],u;if(a===void 0)throw Error(`Key ${n} does not exist in object.`);return a}function 
mergeArrays(...y){return Array.prototype.concat.apply([],y)}var fs={},ONNX_NODE=Object.freeze({__proto__:null,default:fs});function getDefaultExportFromCjs(y){return y&&y.__esModule&&Object.prototype.hasOwnProperty.call(y,"default")?y.default:y}function getAugmentedNamespace(y){if(y.__esModule)return y;var n=y.default;if(typeof n=="function"){var a=function u(){if(this instanceof u){var c=[null];c.push.apply(c,arguments);var p=Function.bind.apply(n,c);return new p}return n.apply(this,arguments)};a.prototype=n.prototype}else a={};return Object.defineProperty(a,"__esModule",{value:!0}),Object.keys(y).forEach(function(u){var c=Object.getOwnPropertyDescriptor(y,u);Object.defineProperty(a,u,c.get?c:{enumerable:!0,get:function(){return y[u]}})}),a}var ortWeb_min$1={exports:{}};const backends={},backendsSortedByPriority=[],registerBackend=(y,n,a)=>{if(n&&typeof n.init=="function"&&typeof n.createSessionHandler=="function"){const u=backends[y];if(u===void 0)backends[y]={backend:n,priority:a};else{if(u.priority>a)return;if(u.priority===a&&u.backend!==n)throw new Error(`cannot register backend "${y}" using priority ${a}`)}if(a>=0){const c=backendsSortedByPriority.indexOf(y);c!==-1&&backendsSortedByPriority.splice(c,1);for(let p=0;p<backendsSortedByPriority.length;p++)if(backends[backendsSortedByPriority[p]].priority<=a){backendsSortedByPriority.splice(p,0,y);return}backendsSortedByPriority.push(y)}return}throw new TypeError("not a valid backend")},resolveBackend=async y=>{const n=y.length===0?backendsSortedByPriority:y,a=[];for(const u of n){const c=backends[u];if(c){if(c.initialized)return c.backend;if(c.aborted)continue;const p=!!c.initPromise;try{return p||(c.initPromise=c.backend.init()),await c.initPromise,c.initialized=!0,c.backend}catch(s){p||a.push({name:u,err:s}),c.aborted=!0}finally{delete c.initPromise}}}throw new Error(`no available backend found. 
ERR: ${a.map(u=>`[${u.name}] ${u.err}`).join(", ")}`)};class EnvImpl{constructor(){this.wasm={},this.webgl={},this.logLevelInternal="warning"}set logLevel(n){if(n!==void 0){if(typeof n!="string"||["verbose","info","warning","error","fatal"].indexOf(n)===-1)throw new Error(`Unsupported logging level: ${n}`);this.logLevelInternal=n}}get logLevel(){return this.logLevelInternal}}const env$1=new EnvImpl,isBigInt64ArrayAvailable=typeof BigInt64Array<"u"&&typeof BigInt64Array.from=="function",isBigUint64ArrayAvailable=typeof BigUint64Array<"u"&&typeof BigUint64Array.from=="function",NUMERIC_TENSOR_TYPE_TO_TYPEDARRAY_MAP=new Map([["float32",Float32Array],["uint8",Uint8Array],["int8",Int8Array],["uint16",Uint16Array],["int16",Int16Array],["int32",Int32Array],["bool",Uint8Array],["float64",Float64Array],["uint32",Uint32Array]]),NUMERIC_TENSOR_TYPEDARRAY_TO_TYPE_MAP=new Map([[Float32Array,"float32"],[Uint8Array,"uint8"],[Int8Array,"int8"],[Uint16Array,"uint16"],[Int16Array,"int16"],[Int32Array,"int32"],[Float64Array,"float64"],[Uint32Array,"uint32"]]);isBigInt64ArrayAvailable&&(NUMERIC_TENSOR_TYPE_TO_TYPEDARRAY_MAP.set("int64",BigInt64Array),NUMERIC_TENSOR_TYPEDARRAY_TO_TYPE_MAP.set(BigInt64Array,"int64")),isBigUint64ArrayAvailable&&(NUMERIC_TENSOR_TYPE_TO_TYPEDARRAY_MAP.set("uint64",BigUint64Array),NUMERIC_TENSOR_TYPEDARRAY_TO_TYPE_MAP.set(BigUint64Array,"uint64"));const calculateSize=y=>{let n=1;for(let a=0;a<y.length;a++){const u=y[a];if(typeof u!="number"||!Number.isSafeInteger(u))throw new TypeError(`dims[${a}] must be an integer, got: ${u}`);if(u<0)throw new RangeError(`dims[${a}] must be a non-negative integer, got: ${u}`);n*=u}return n};let Tensor$2=class ut{constructor(n,a,u){let c,p,s;if(typeof n=="string")if(c=n,s=u,n==="string"){if(!Array.isArray(a))throw new TypeError("A string tensor's data must be a string array.");p=a}else{const f=NUMERIC_TENSOR_TYPE_TO_TYPEDARRAY_MAP.get(n);if(f===void 0)throw new TypeError(`Unsupported tensor type: ${n}.`);if(Array.isArray(a))p=f.from(a);else if(a instanceof f)p=a;else throw new TypeError(`A ${c} tensor's data must be type of ${f}`)}else if(s=a,Array.isArray(n)){if(n.length===0)throw new TypeError("Tensor type cannot be inferred from an empty array.");const f=typeof n[0];if(f==="string")c="string",p=n;else if(f==="boolean")c="bool",p=Uint8Array.from(n);else throw new TypeError(`Invalid element type of data array: ${f}.`)}else{const f=NUMERIC_TENSOR_TYPEDARRAY_TO_TYPE_MAP.get(n.constructor);if(f===void 0)throw new TypeError(`Unsupported type for tensor data: ${n.constructor}.`);c=f,p=n}if(s===void 0)s=[p.length];else if(!Array.isArray(s))throw new TypeError("A tensor's dims must be a number array");const h=calculateSize(s);if(h!==p.length)throw new Error(`Tensor's size(${h}) does not match data length(${p.length}).`);this.dims=s,this.type=c,this.data=p,this.size=h}static bufferToTensor(n,a){if(n===void 0)throw new Error("Image buffer must be defined");if(a.height===void 0||a.width===void 0)throw new Error("Image height and width must be defined");const{height:u,width:c}=a,p=a.norm;let s,h;p===void 0||p.mean===void 0?s=255:s=p.mean,p===void 0||p.bias===void 0?h=0:h=p.bias;const f=a.bitmapFormat!==void 0?a.bitmapFormat:"RGBA",l=a.tensorFormat!==void 0&&a.tensorFormat!==void 0?a.tensorFormat:"RGB",o=u*c,t=l==="RGBA"?new Float32Array(o*4):new Float32Array(o*3);let e=4,r=0,i=1,d=2,g=3,m=0,b=o,_=o*2,v=-1;f==="RGB"&&(e=3,r=0,i=1,d=2,g=-1),l==="RGBA"?v=o*3:l==="RBG"?(m=0,_=o,b=o*2):l==="BGR"&&(_=0,b=o,m=o*2);for(let 
S=0;S<o;S++,r+=e,d+=e,i+=e,g+=e)t[m++]=(n[r]+h)/s,t[b++]=(n[i]+h)/s,t[_++]=(n[d]+h)/s,v!==-1&&g!==-1&&(t[v++]=(n[g]+h)/s);return l==="RGBA"?new ut("float32",t,[1,4,u,c]):new ut("float32",t,[1,3,u,c])}static async fromImage(n,a){const u=typeof HTMLImageElement<"u"&&n instanceof HTMLImageElement,c=typeof ImageData<"u"&&n instanceof ImageData,p=typeof ImageBitmap<"u"&&n instanceof ImageBitmap,s=typeof String<"u"&&(n instanceof String||typeof n=="string");let h,f={};if(u){const l=document.createElement("canvas"),o=l.getContext("2d");if(o!=null){let t=n.naturalHeight,e=n.naturalWidth;if(a!==void 0&&a.resizedHeight!==void 0&&a.resizedWidth!==void 0&&(t=a.resizedHeight,e=a.resizedWidth),a!==void 0){if(f=a,a.tensorFormat!==void 0)throw new Error("Image input config format must be RGBA for HTMLImageElement");if(f.tensorFormat="RGBA",a.height!==void 0&&a.height!==t)throw new Error("Image input config height doesn't match HTMLImageElement height");if(f.height=t,a.width!==void 0&&a.width!==e)throw new Error("Image input config width doesn't match HTMLImageElement width");f.width=e}else f.tensorFormat="RGBA",f.height=t,f.width=e;l.width=e,l.height=t,o.drawImage(n,0,0,e,t),h=o.getImageData(0,0,e,t).data}else throw new Error("Can not access image data")}else if(c){const l="RGBA";let o,t;if(a!==void 0&&a.resizedWidth!==void 0&&a.resizedHeight!==void 0?(o=a.resizedHeight,t=a.resizedWidth):(o=n.height,t=n.width),a!==void 0){if(f=a,a.bitmapFormat!==void 0&&a.bitmapFormat!==l)throw new Error("Image input config format must be RGBA for ImageData");f.bitmapFormat="RGBA"}else f.bitmapFormat="RGBA";if(f.height=o,f.width=t,a!==void 0){const e=document.createElement("canvas");e.width=t,e.height=o;const r=e.getContext("2d");if(r!=null)r.putImageData(n,0,0),h=r.getImageData(0,0,t,o).data;else throw new Error("Can not access image data")}else h=n.data}else if(p){if(a===void 0)throw new Error("Please provide image config with format for Imagebitmap");if(a.bitmapFormat!==void 0)throw new Error("Image input config format must be defined for ImageBitmap");const l=document.createElement("canvas").getContext("2d");if(l!=null){const o=n.height,t=n.width;if(l.drawImage(n,0,0,t,o),h=l.getImageData(0,0,t,o).data,a!==void 0){if(a.height!==void 0&&a.height!==o)throw new Error("Image input config height doesn't match ImageBitmap height");if(f.height=o,a.width!==void 0&&a.width!==t)throw new Error("Image input config width doesn't match ImageBitmap width");f.width=t}else f.height=o,f.width=t;return ut.bufferToTensor(h,f)}else throw new Error("Can not access image data")}else{if(s)return new Promise((l,o)=>{const t=document.createElement("canvas"),e=t.getContext("2d");if(!n||!e)return o();const r=new Image;r.crossOrigin="Anonymous",r.src=n,r.onload=()=>{t.width=r.width,t.height=r.height,e.drawImage(r,0,0,t.width,t.height);const i=e.getImageData(0,0,t.width,t.height);if(a!==void 0){if(a.height!==void 0&&a.height!==t.height)throw new Error("Image input config height doesn't match ImageBitmap height");if(f.height=t.height,a.width!==void 0&&a.width!==t.width)throw new Error("Image input config width doesn't match ImageBitmap width");f.width=t.width}else f.height=t.height,f.width=t.width;l(ut.bufferToTensor(i.data,f))}});throw new Error("Input data provided is not supported - aborted tensor creation")}if(h!==void 0)return ut.bufferToTensor(h,f);throw new Error("Input data provided is not supported - aborted tensor creation")}toImageData(n){var a,u;const c=document.createElement("canvas").getContext("2d");let p;if(c!=null){const 
s=this.dims[3],h=this.dims[2],f=this.dims[1],l=n!==void 0&&n.format!==void 0?n.format:"RGB",o=n!==void 0&&((a=n.norm)===null||a===void 0?void 0:a.mean)!==void 0?n.norm.mean:255,t=n!==void 0&&((u=n.norm)===null||u===void 0?void 0:u.bias)!==void 0?n.norm.bias:0,e=h*s;if(n!==void 0){if(n.height!==void 0&&n.height!==h)throw new Error("Image output config height doesn't match tensor height");if(n.width!==void 0&&n.width!==s)throw new Error("Image output config width doesn't match tensor width");if(n.format!==void 0&&f===4&&n.format!=="RGBA"||f===3&&n.format!=="RGB"&&n.format!=="BGR")throw new Error("Tensor format doesn't match input tensor dims")}const r=4;let i=0,d=1,g=2,m=3,b=0,_=e,v=e*2,w=-1;l==="RGBA"?(b=0,_=e,v=e*2,w=e*3):l==="RGB"?(b=0,_=e,v=e*2):l==="RBG"&&(b=0,v=e,_=e*2),p=c.createImageData(s,h);for(let S=0;S<h*s;i+=r,d+=r,g+=r,m+=r,S++)p.data[i]=(this.data[b++]-t)*o,p.data[d]=(this.data[_++]-t)*o,p.data[g]=(this.data[v++]-t)*o,p.data[m]=w===-1?255:(this.data[w++]-t)*o}else throw new Error("Can not access image data");return p}reshape(n){return new ut(this.type,this.data,n)}};const Tensor$1=Tensor$2;let InferenceSession$2=class dn{constructor(n){this.handler=n}async run(n,a,u){const c={};let p={};if(typeof n!="object"||n===null||n instanceof Tensor$1||Array.isArray(n))throw new TypeError("'feeds' must be an object that use input names as keys and OnnxValue as corresponding values.");let s=!0;if(typeof a=="object"){if(a===null)throw new TypeError("Unexpected argument[1]: cannot be null.");if(a instanceof Tensor$1)throw new TypeError("'fetches' cannot be a Tensor");if(Array.isArray(a)){if(a.length===0)throw new TypeError("'fetches' cannot be an empty array.");s=!1;for(const l of a){if(typeof l!="string")throw new TypeError("'fetches' must be a string array or an object.");if(this.outputNames.indexOf(l)===-1)throw new RangeError(`'fetches' contains invalid output name: ${l}.`);c[l]=null}if(typeof u=="object"&&u!==null)p=u;else if(typeof u<"u")throw new TypeError("'options' must be an object.")}else{let l=!1;const o=Object.getOwnPropertyNames(a);for(const t of this.outputNames)if(o.indexOf(t)!==-1){const e=a[t];(e===null||e instanceof Tensor$1)&&(l=!0,s=!1,c[t]=e)}if(l){if(typeof u=="object"&&u!==null)p=u;else if(typeof u<"u")throw new TypeError("'options' must be an object.")}else p=a}}else if(typeof a<"u")throw new TypeError("Unexpected argument[1]: must be 'fetches' or 'options'.");for(const l of this.inputNames)if(typeof n[l]>"u")throw new Error(`input '${l}' is missing in 'feeds'.`);if(s)for(const l of this.outputNames)c[l]=null;const h=await this.handler.run(n,c,p),f={};for(const l in h)Object.hasOwnProperty.call(h,l)&&(f[l]=new Tensor$1(h[l].type,h[l].data,h[l].dims));return f}static async create(n,a,u,c){let p,s={};if(typeof n=="string"){if(p=n,typeof a=="object"&&a!==null)s=a;else if(typeof a<"u")throw new TypeError("'options' must be an object.")}else if(n instanceof Uint8Array){if(p=n,typeof a=="object"&&a!==null)s=a;else if(typeof a<"u")throw new TypeError("'options' must be an object.")}else if(n instanceof ArrayBuffer||typeof SharedArrayBuffer<"u"&&n instanceof SharedArrayBuffer){const t=n;let e=0,r=n.byteLength;if(typeof a=="object"&&a!==null)s=a;else if(typeof a=="number"){if(e=a,!Number.isSafeInteger(e))throw new RangeError("'byteOffset' must be an integer.");if(e<0||e>=t.byteLength)throw new RangeError(`'byteOffset' is out of range [0, ${t.byteLength}).`);if(r=n.byteLength-e,typeof u=="number"){if(r=u,!Number.isSafeInteger(r))throw new RangeError("'byteLength' must be an 
integer.");if(r<=0||e+r>t.byteLength)throw new RangeError(`'byteLength' is out of range (0, ${t.byteLength-e}].`);if(typeof c=="object"&&c!==null)s=c;else if(typeof c<"u")throw new TypeError("'options' must be an object.")}else if(typeof u<"u")throw new TypeError("'byteLength' must be a number.")}else if(typeof a<"u")throw new TypeError("'options' must be an object.");p=new Uint8Array(t,e,r)}else throw new TypeError("Unexpected argument[0]: must be 'path' or 'buffer'.");const f=(s.executionProviders||[]).map(t=>typeof t=="string"?t:t.name),o=await(await resolveBackend(f)).createSessionHandler(p,s);return new dn(o)}startProfiling(){this.handler.startProfiling()}endProfiling(){this.handler.endProfiling()}get inputNames(){return this.handler.inputNames}get outputNames(){return this.handler.outputNames}};const InferenceSession$1=InferenceSession$2;var lib=Object.freeze({__proto__:null,InferenceSession:InferenceSession$1,Tensor:Tensor$1,env:env$1,registerBackend}),require$$0=getAugmentedNamespace(lib);/*! -* ONNX Runtime Web v1.14.0 -* Copyright (c) Microsoft Corporation. All rights reserved. -* Licensed under the MIT License. -*/(function(module,exports){(function(y,n){module.exports=n(require$$0)})(self,__WEBPACK_EXTERNAL_MODULE__1670__=>(()=>{var __webpack_modules__={3474:(y,n,a)=>{var u,c=(u=(u=typeof document<"u"&&document.currentScript?document.currentScript.src:void 0)||"/index.js",function(p){function s(){return X.buffer!=ee&&Ee(X.buffer),ue}function h(){return X.buffer!=ee&&Ee(X.buffer),Ae}function f(){return X.buffer!=ee&&Ee(X.buffer),ve}function l(){return X.buffer!=ee&&Ee(X.buffer),oe}function o(){return X.buffer!=ee&&Ee(X.buffer),_e}var t,e,r;p=p||{},t||(t=p!==void 0?p:{}),t.ready=new Promise(function(T,E){e=T,r=E});var i,d,g,m,b,_,v=Object.assign({},t),w="./this.program",S=(T,E)=>{throw E},A=typeof window=="object",O=typeof importScripts=="function",x=typeof process=="object"&&typeof process.versions=="object"&&typeof process.versions.node=="string",I=t.ENVIRONMENT_IS_PTHREAD||!1,N="";function B(T){return t.locateFile?t.locateFile(T,N):N+T}if(x){let T;N=O?a(908).dirname(N)+"/":"//",_=()=>{b||(m=a(1384),b=a(908))},i=function(E,k){return _(),E=b.normalize(E),m.readFileSync(E,k?void 0:"utf8")},g=E=>((E=i(E,!0)).buffer||(E=new Uint8Array(E)),E),d=(E,k,C)=>{_(),E=b.normalize(E),m.readFile(E,function(z,G){z?C(z):k(G.buffer)})},1<process.argv.length&&(w=process.argv[1].replace(/\\/g,"/")),process.argv.slice(2),process.on("uncaughtException",function(E){if(!(E instanceof Qe))throw E}),process.on("unhandledRejection",function(E){throw E}),S=(E,k)=>{if(qe())throw process.exitCode=E,k;k instanceof Qe||j("exiting due to exception: "+k),process.exit(E)},t.inspect=function(){return"[Emscripten Module object]"};try{T=a(9925)}catch(E){throw console.error('The "worker_threads" module is not supported in this node.js build - perhaps a newer version is needed?'),E}a.g.Worker=T.Worker}else(A||O)&&(O?N=self.location.href:typeof document<"u"&&document.currentScript&&(N=document.currentScript.src),u&&(N=u),N=N.indexOf("blob:")!==0?N.substr(0,N.replace(/[?#].*/,"").lastIndexOf("/")+1):"",x||(i=T=>{var E=new XMLHttpRequest;return E.open("GET",T,!1),E.send(null),E.responseText},O&&(g=T=>{var E=new XMLHttpRequest;return E.open("GET",T,!1),E.responseType="arraybuffer",E.send(null),new Uint8Array(E.response)}),d=(T,E,k)=>{var C=new XMLHttpRequest;C.open("GET",T,!0),C.responseType="arraybuffer",C.onload=()=>{C.status==200||C.status==0&&C.response?E(C.response):k()},C.onerror=k,C.send(null)}));x&&typeof 
performance>"u"&&(a.g.performance=a(6953).performance);var L=console.log.bind(console),F=console.warn.bind(console);x&&(_(),L=T=>m.writeSync(1,T+` -`),F=T=>m.writeSync(2,T+` -`));var H,D=t.print||L,j=t.printErr||F;Object.assign(t,v),v=null,t.thisProgram&&(w=t.thisProgram),t.quit&&(S=t.quit),t.wasmBinary&&(H=t.wasmBinary);var Z=t.noExitRuntime||!1;typeof WebAssembly!="object"&&fe("no native wasm support detected");var X,J,ee,ue,Ae,ve,oe,_e,be=!1,ke=typeof TextDecoder<"u"?new TextDecoder("utf8"):void 0;function Fe(T,E,k){var C=(E>>>=0)+k;for(k=E;T[k]&&!(k>=C);)++k;if(16<k-E&&T.buffer&&ke)return ke.decode(T.buffer instanceof SharedArrayBuffer?T.slice(E,k):T.subarray(E,k));for(C="";E<k;){var z=T[E++];if(128&z){var G=63&T[E++];if((224&z)==192)C+=String.fromCharCode((31&z)<<6|G);else{var K=63&T[E++];65536>(z=(240&z)==224?(15&z)<<12|G<<6|K:(7&z)<<18|G<<12|K<<6|63&T[E++])?C+=String.fromCharCode(z):(z-=65536,C+=String.fromCharCode(55296|z>>10,56320|1023&z))}}else C+=String.fromCharCode(z)}return C}function xe(T,E){return(T>>>=0)?Fe(h(),T,E):""}function Ne(T,E,k,C){if(!(0<C))return 0;var z=k>>>=0;C=k+C-1;for(var G=0;G<T.length;++G){var K=T.charCodeAt(G);if(55296<=K&&57343>=K&&(K=65536+((1023&K)<<10)|1023&T.charCodeAt(++G)),127>=K){if(k>=C)break;E[k++>>>0]=K}else{if(2047>=K){if(k+1>=C)break;E[k++>>>0]=192|K>>6}else{if(65535>=K){if(k+2>=C)break;E[k++>>>0]=224|K>>12}else{if(k+3>=C)break;E[k++>>>0]=240|K>>18,E[k++>>>0]=128|K>>12&63}E[k++>>>0]=128|K>>6&63}E[k++>>>0]=128|63&K}}return E[k>>>0]=0,k-z}function Ce(T){for(var E=0,k=0;k<T.length;++k){var C=T.charCodeAt(k);127>=C?E++:2047>=C?E+=2:55296<=C&&57343>=C?(E+=4,++k):E+=3}return E}function Ee(T){ee=T,t.HEAP8=ue=new Int8Array(T),t.HEAP16=new Int16Array(T),t.HEAP32=ve=new Int32Array(T),t.HEAPU8=Ae=new Uint8Array(T),t.HEAPU16=new Uint16Array(T),t.HEAPU32=oe=new Uint32Array(T),t.HEAPF32=new Float32Array(T),t.HEAPF64=_e=new Float64Array(T)}I&&(ee=t.buffer);var Oe=t.INITIAL_MEMORY||16777216;if(I)X=t.wasmMemory,ee=t.buffer;else if(t.wasmMemory)X=t.wasmMemory;else if(!((X=new WebAssembly.Memory({initial:Oe/65536,maximum:65536,shared:!0})).buffer instanceof SharedArrayBuffer))throw j("requested a shared WebAssembly.Memory but the returned buffer is not a SharedArrayBuffer, indicating that while the browser has SharedArrayBuffer it does not have WebAssembly threads support - you may need to set a flag"),x&&console.log("(on node you may need: --experimental-wasm-threads --experimental-wasm-bulk-memory and also use a recent version)"),Error("bad memory");X&&(ee=X.buffer),Oe=ee.byteLength,Ee(ee);var Be,Ge=[],Ve=[],Xe=[],Ze=[];function qe(){return Z||!1}function Ue(){var T=t.preRun.shift();Ge.unshift(T)}var Ie,je=0,Ye=null;function fe(T){throw I?postMessage({cmd:"onAbort",arg:T}):t.onAbort&&t.onAbort(T),j(T="Aborted("+T+")"),be=!0,T=new WebAssembly.RuntimeError(T+". 
Build with -sASSERTIONS for more info."),r(T),T}function pt(){return Ie.startsWith("data:application/octet-stream;base64,")}function lt(){var T=Ie;try{if(T==Ie&&H)return new Uint8Array(H);if(g)return g(T);throw"both async and sync fetching of the wasm failed"}catch(E){fe(E)}}Ie="ort-wasm-threaded.wasm",pt()||(Ie=B(Ie));var Pt={};function Qe(T){this.name="ExitStatus",this.message="Program terminated with exit("+T+")",this.status=T}function ct(T){(T=re.Vb[T])||fe(),re.mc(T)}function dt(T){var E=re.Cc();if(!E)return 6;re.ac.push(E),re.Vb[T.Ub]=E,E.Ub=T.Ub;var k={cmd:"run",start_routine:T.Ic,arg:T.zc,pthread_ptr:T.Ub};return E.$b=()=>{k.time=performance.now(),E.postMessage(k,T.Nc)},E.loaded&&(E.$b(),delete E.$b),0}function Re(T){if(I)return Q(1,1,T);qe()||(re.oc(),t.onExit&&t.onExit(T),be=!0),S(T,new Qe(T))}function it(T,E){if(!E&&I)throw kt(T),"unwind";qe()||I||(Wt(),rt(Xe),qt(0),$t[1].length&&Ft(1,10),$t[2].length&&Ft(2,10),re.oc()),Re(T)}var re={Yb:[],ac:[],qc:[],Vb:{},fc:function(){I&&re.Ec()},Pc:function(){},Ec:function(){re.receiveObjectTransfer=re.Gc,re.threadInitTLS=re.pc,re.setExitStatus=re.nc,Z=!1},nc:function(){},oc:function(){for(var T of Object.values(re.Vb))re.mc(T);for(T of re.Yb)T.terminate();re.Yb=[]},mc:function(T){var E=T.Ub;delete re.Vb[E],re.Yb.push(T),re.ac.splice(re.ac.indexOf(T),1),T.Ub=0,Rt(E)},Gc:function(){},pc:function(){re.qc.forEach(T=>T())},Fc:function(T,E){T.onmessage=k=>{var C=(k=k.data).cmd;if(T.Ub&&(re.Bc=T.Ub),k.targetThread&&k.targetThread!=Dt()){var z=re.Vb[k.Qc];z?z.postMessage(k,k.transferList):j('Internal error! Worker sent a message "'+C+'" to target pthread '+k.targetThread+", but that thread no longer exists!")}else C==="processProxyingQueue"?$(k.queue):C==="spawnThread"?dt(k):C==="cleanupThread"?ct(k.thread):C==="killThread"?(k=k.thread,C=re.Vb[k],delete re.Vb[k],C.terminate(),Rt(k),re.ac.splice(re.ac.indexOf(C),1),C.Ub=0):C==="cancelThread"?re.Vb[k.thread].postMessage({cmd:"cancel"}):C==="loaded"?(T.loaded=!0,E&&E(T),T.$b&&(T.$b(),delete T.$b)):C==="print"?D("Thread "+k.threadId+": "+k.text):C==="printErr"?j("Thread "+k.threadId+": "+k.text):C==="alert"?alert("Thread "+k.threadId+": "+k.text):k.target==="setimmediate"?T.postMessage(k):C==="onAbort"?t.onAbort&&t.onAbort(k.arg):C&&j("worker sent an unknown command "+C);re.Bc=void 0},T.onerror=k=>{throw j("worker sent an error! 
"+k.filename+":"+k.lineno+": "+k.message),k},x&&(T.on("message",function(k){T.onmessage({data:k})}),T.on("error",function(k){T.onerror(k)}),T.on("detachedExit",function(){})),T.postMessage({cmd:"load",urlOrBlob:t.mainScriptUrlOrBlob||u,wasmMemory:X,wasmModule:J})},yc:function(){var T=B("ort-wasm-threaded.worker.js");re.Yb.push(new Worker(T))},Cc:function(){return re.Yb.length==0&&(re.yc(),re.Fc(re.Yb[0])),re.Yb.pop()}};function rt(T){for(;0<T.length;)T.shift()(t)}function It(T){var E=de();return T=T(),ce(E),T}function kt(T){if(I)return Q(2,0,T);try{it(T)}catch(E){E instanceof Qe||E=="unwind"||S(1,E)}}t.PThread=re,t.establishStackSpace=function(){var T=Dt(),E=f()[T+44>>2>>>0];T=f()[T+48>>2>>>0],Zt(E,E-T),ce(E)};var Je=[];function we(T){var E=Je[T];return E||(T>=Je.length&&(Je.length=T+1),Je[T]=E=Be.get(T)),E}t.invokeEntryPoint=function(T,E){T=we(T)(E),qe()?re.nc(T):Kt(T)};var ot,ft,st=[],ae=0,ie=0;function se(T){this.Zb=T,this.Sb=T-24,this.xc=function(E){l()[this.Sb+4>>2>>>0]=E},this.bc=function(){return l()[this.Sb+4>>2>>>0]},this.wc=function(E){l()[this.Sb+8>>2>>>0]=E},this.Dc=function(){return l()[this.Sb+8>>2>>>0]},this.rc=function(){f()[this.Sb>>2>>>0]=0},this.hc=function(E){E=E?1:0,s()[this.Sb+12>>0>>>0]=E},this.uc=function(){return s()[this.Sb+12>>0>>>0]!=0},this.ic=function(E){E=E?1:0,s()[this.Sb+13>>0>>>0]=E},this.kc=function(){return s()[this.Sb+13>>0>>>0]!=0},this.fc=function(E,k){this.cc(0),this.xc(E),this.wc(k),this.rc(),this.hc(!1),this.ic(!1)},this.sc=function(){Atomics.add(f(),this.Sb>>2,1)},this.Hc=function(){return Atomics.sub(f(),this.Sb>>2,1)===1},this.cc=function(E){l()[this.Sb+16>>2>>>0]=E},this.tc=function(){return l()[this.Sb+16>>2>>>0]},this.vc=function(){if(Qt(this.bc()))return l()[this.Zb>>2>>>0];var E=this.tc();return E!==0?E:this.Zb}}function gt(T){return Vt(new se(T).Sb)}function at(T,E,k,C){return I?Q(3,1,T,E,k,C):mt(T,E,k,C)}function mt(T,E,k,C){if(typeof SharedArrayBuffer>"u")return j("Current environment does not support SharedArrayBuffer, pthreads are not available!"),6;var z=[];return I&&z.length===0?at(T,E,k,C):(T={Ic:k,Ub:T,zc:C,Nc:z},I?(T.Oc="spawnThread",postMessage(T,z),0):dt(T))}function bt(T,E,k){return I?Q(4,1,T,E,k):0}function yt(T,E){if(I)return Q(5,1,T,E)}function _t(T,E){if(I)return Q(6,1,T,E)}function wt(T,E,k){if(I)return Q(7,1,T,E,k)}function vt(T,E,k){return I?Q(8,1,T,E,k):0}function xt(T,E){if(I)return Q(9,1,T,E)}function Tt(T,E,k){if(I)return Q(10,1,T,E,k)}function St(T,E,k,C){if(I)return Q(11,1,T,E,k,C)}function At(T,E,k,C){if(I)return Q(12,1,T,E,k,C)}function Ot(T,E,k,C){if(I)return Q(13,1,T,E,k,C)}function Et(T){if(I)return Q(14,1,T)}function P(T,E){if(I)return Q(15,1,T,E)}function M(T,E,k){if(I)return Q(16,1,T,E,k)}function $(T){Atomics.store(f(),T>>2,1),Dt()&&Yt(T),Atomics.compareExchange(f(),T>>2,1,0)}function R(T){return l()[T>>>2]+4294967296*f()[T+4>>>2]}function U(T,E,k,C,z,G){return I?Q(17,1,T,E,k,C,z,G):-52}function W(T,E,k,C,z,G){if(I)return Q(18,1,T,E,k,C,z,G)}function Y(T){var E=Ce(T)+1,k=Lt(E);return k&&Ne(T,s(),k,E),k}function te(T,E,k){function C(ge){return(ge=ge.toTimeString().match(/\(([A-Za-z ]+)\)$/))?ge[1]:"GMT"}if(I)return Q(19,1,T,E,k);var z=new Date().getFullYear(),G=new Date(z,0,1),K=new Date(z,6,1);z=G.getTimezoneOffset();var ne=K.getTimezoneOffset(),pe=Math.max(z,ne);f()[T>>2>>>0]=60*pe,f()[E>>2>>>0]=+(z!=ne),T=C(G),E=C(K),T=Y(T),E=Y(E),ne<z?(l()[k>>2>>>0]=T,l()[k+4>>2>>>0]=E):(l()[k>>2>>>0]=E,l()[k+4>>2>>>0]=T)}function Q(T,E){var k=arguments.length-2,C=arguments;return It(()=>{for(var 
z=jt(8*k),G=z>>3,K=0;K<k;K++){var ne=C[2+K];o()[G+K>>>0]=ne}return Xt(T,k,z,E)})}t.executeNotifiedProxyingQueue=$,ft=x?()=>{var T=process.hrtime();return 1e3*T[0]+T[1]/1e6}:I?()=>performance.now()-t.__performance_now_clock_drift:()=>performance.now();var le,Te=[],Le={};function $e(){if(!le){var T,E={USER:"web_user",LOGNAME:"web_user",PATH:"/",PWD:"/",HOME:"/home/web_user",LANG:(typeof navigator=="object"&&navigator.languages&&navigator.languages[0]||"C").replace("-","_")+".UTF-8",_:w||"./this.program"};for(T in Le)Le[T]===void 0?delete E[T]:E[T]=Le[T];var k=[];for(T in E)k.push(T+"="+E[T]);le=k}return le}function V(T,E){if(I)return Q(20,1,T,E);var k=0;return $e().forEach(function(C,z){var G=E+k;for(z=l()[T+4*z>>2>>>0]=G,G=0;G<C.length;++G)s()[z++>>0>>>0]=C.charCodeAt(G);s()[z>>0>>>0]=0,k+=C.length+1}),0}function me(T,E){if(I)return Q(21,1,T,E);var k=$e();l()[T>>2>>>0]=k.length;var C=0;return k.forEach(function(z){C+=z.length+1}),l()[E>>2>>>0]=C,0}function Pe(T){return I?Q(22,1,T):52}function We(T,E,k,C){return I?Q(23,1,T,E,k,C):52}function et(T,E,k,C,z){return I?Q(24,1,T,E,k,C,z):70}var $t=[null,[],[]];function Ft(T,E){var k=$t[T];E===0||E===10?((T===1?D:j)(Fe(k,0)),k.length=0):k.push(E)}function zt(T,E,k,C){if(I)return Q(25,1,T,E,k,C);for(var z=0,G=0;G<k;G++){var K=l()[E>>2>>>0],ne=l()[E+4>>2>>>0];E+=8;for(var pe=0;pe<ne;pe++)Ft(T,h()[K+pe>>>0]);z+=ne}return l()[C>>2>>>0]=z,0}var ze=0;function Mt(T){return T%4==0&&(T%100!=0||T%400==0)}var Bt=[31,29,31,30,31,30,31,31,30,31,30,31],Ut=[31,28,31,30,31,30,31,31,30,31,30,31];function Gt(T,E,k,C){function z(q,ye,Me){for(q=typeof q=="number"?q.toString():q||"";q.length<ye;)q=Me[0]+q;return q}function G(q,ye){return z(q,ye,"0")}function K(q,ye){function Me(ht){return 0>ht?-1:0<ht?1:0}var tt;return(tt=Me(q.getFullYear()-ye.getFullYear()))===0&&(tt=Me(q.getMonth()-ye.getMonth()))===0&&(tt=Me(q.getDate()-ye.getDate())),tt}function ne(q){switch(q.getDay()){case 0:return new Date(q.getFullYear()-1,11,29);case 1:return q;case 2:return new Date(q.getFullYear(),0,3);case 3:return new Date(q.getFullYear(),0,2);case 4:return new Date(q.getFullYear(),0,1);case 5:return new Date(q.getFullYear()-1,11,31);case 6:return new Date(q.getFullYear()-1,11,30)}}function pe(q){var ye=q.Wb;for(q=new Date(new Date(q.Xb+1900,0,1).getTime());0<ye;){var Me=q.getMonth(),tt=(Mt(q.getFullYear())?Bt:Ut)[Me];if(!(ye>tt-q.getDate())){q.setDate(q.getDate()+ye);break}ye-=tt-q.getDate()+1,q.setDate(1),11>Me?q.setMonth(Me+1):(q.setMonth(0),q.setFullYear(q.getFullYear()+1))}return Me=new Date(q.getFullYear()+1,0,4),ye=ne(new Date(q.getFullYear(),0,4)),Me=ne(Me),0>=K(ye,q)?0>=K(Me,q)?q.getFullYear()+1:q.getFullYear():q.getFullYear()-1}var ge=f()[C+40>>2>>>0];for(var De in C={Lc:f()[C>>2>>>0],Kc:f()[C+4>>2>>>0],dc:f()[C+8>>2>>>0],jc:f()[C+12>>2>>>0],ec:f()[C+16>>2>>>0],Xb:f()[C+20>>2>>>0],Tb:f()[C+24>>2>>>0],Wb:f()[C+28>>2>>>0],Rc:f()[C+32>>2>>>0],Jc:f()[C+36>>2>>>0],Mc:ge?xe(ge):""},k=xe(k),ge={"%c":"%a %b %d %H:%M:%S %Y","%D":"%m/%d/%y","%F":"%Y-%m-%d","%h":"%b","%r":"%I:%M:%S %p","%R":"%H:%M","%T":"%H:%M:%S","%x":"%m/%d/%y","%X":"%H:%M:%S","%Ec":"%c","%EC":"%C","%Ex":"%m/%d/%y","%EX":"%H:%M:%S","%Ey":"%y","%EY":"%Y","%Od":"%d","%Oe":"%e","%OH":"%H","%OI":"%I","%Om":"%m","%OM":"%M","%OS":"%S","%Ou":"%u","%OU":"%U","%OV":"%V","%Ow":"%w","%OW":"%W","%Oy":"%y"})k=k.replace(new RegExp(De,"g"),ge[De]);var Ke="Sunday Monday Tuesday Wednesday Thursday Friday Saturday".split(" "),He="January February March April May June July August September October November December".split(" ");for(De in 
ge={"%a":function(q){return Ke[q.Tb].substring(0,3)},"%A":function(q){return Ke[q.Tb]},"%b":function(q){return He[q.ec].substring(0,3)},"%B":function(q){return He[q.ec]},"%C":function(q){return G((q.Xb+1900)/100|0,2)},"%d":function(q){return G(q.jc,2)},"%e":function(q){return z(q.jc,2," ")},"%g":function(q){return pe(q).toString().substring(2)},"%G":function(q){return pe(q)},"%H":function(q){return G(q.dc,2)},"%I":function(q){return(q=q.dc)==0?q=12:12<q&&(q-=12),G(q,2)},"%j":function(q){for(var ye=0,Me=0;Me<=q.ec-1;ye+=(Mt(q.Xb+1900)?Bt:Ut)[Me++]);return G(q.jc+ye,3)},"%m":function(q){return G(q.ec+1,2)},"%M":function(q){return G(q.Kc,2)},"%n":function(){return` -`},"%p":function(q){return 0<=q.dc&&12>q.dc?"AM":"PM"},"%S":function(q){return G(q.Lc,2)},"%t":function(){return" "},"%u":function(q){return q.Tb||7},"%U":function(q){return G(Math.floor((q.Wb+7-q.Tb)/7),2)},"%V":function(q){var ye=Math.floor((q.Wb+7-(q.Tb+6)%7)/7);if(2>=(q.Tb+371-q.Wb-2)%7&&ye++,ye)ye==53&&((Me=(q.Tb+371-q.Wb)%7)==4||Me==3&&Mt(q.Xb)||(ye=1));else{ye=52;var Me=(q.Tb+7-q.Wb-1)%7;(Me==4||Me==5&&Mt(q.Xb%400-1))&&ye++}return G(ye,2)},"%w":function(q){return q.Tb},"%W":function(q){return G(Math.floor((q.Wb+7-(q.Tb+6)%7)/7),2)},"%y":function(q){return(q.Xb+1900).toString().substring(2)},"%Y":function(q){return q.Xb+1900},"%z":function(q){var ye=0<=(q=q.Jc);return q=Math.abs(q)/60,(ye?"+":"-")+("0000"+(q/60*100+q%60)).slice(-4)},"%Z":function(q){return q.Mc},"%%":function(){return"%"}},k=k.replace(/%%/g,"\0\0"),ge)k.includes(De)&&(k=k.replace(new RegExp(De,"g"),ge[De](C)));return De=function(q){var ye=Array(Ce(q)+1);return Ne(q,ye,0,ye.length),ye}(k=k.replace(/\0\0/g,"%")),De.length>E?0:(function(q,ye){s().set(q,ye>>>0)}(De,T),De.length-1)}re.fc();var hn=[null,Re,kt,at,bt,yt,_t,wt,vt,xt,Tt,St,At,Ot,Et,P,M,U,W,te,V,me,Pe,We,et,zt],pn={b:function(T){return Lt(T+24)+24},n:function(T){return(T=new se(T)).uc()||(T.hc(!0),ae--),T.ic(!1),st.push(T),T.sc(),T.vc()},ma:function(T){throw j("Unexpected exception thrown, this is not properly supported - aborting"),be=!0,T},x:function(){he(0);var T=st.pop();if(T.Hc()&&!T.kc()){var E=T.Dc();E&&we(E)(T.Zb),gt(T.Zb)}ie=0},e:function(){var T=ie;if(!T)return ze=0;var E=new se(T);E.cc(T);var k=E.bc();if(!k)return ze=0,T;for(var C=Array.prototype.slice.call(arguments),z=0;z<C.length;z++){var G=C[z];if(G===0||G===k)break;if(Nt(G,k,E.Sb+16))return ze=G,T}return ze=k,T},l:function(){var T=ie;if(!T)return ze=0;var E=new se(T);E.cc(T);var k=E.bc();if(!k)return ze=0,T;for(var C=Array.prototype.slice.call(arguments),z=0;z<C.length;z++){var G=C[z];if(G===0||G===k)break;if(Nt(G,k,E.Sb+16))return ze=G,T}return ze=k,T},h:function(){var T=ie;if(!T)return ze=0;var E=new se(T);E.cc(T);var k=E.bc();if(!k)return ze=0,T;for(var C=Array.prototype.slice.call(arguments),z=0;z<C.length;z++){var G=C[z];if(G===0||G===k)break;if(Nt(G,k,E.Sb+16))return ze=G,T}return ze=k,T},t:gt,M:function(){var T=st.pop();T||fe("no exception to throw");var E=T.Zb;throw T.kc()||(st.push(T),T.ic(!0),T.hc(!1),ae++),ie=E,E},c:function(T,E,k){throw new se(T).fc(E,k),ie=T,ae++,T},pa:function(){return ae},Fa:function(T){Ht(T,!O,1,!A),re.pc()},T:function(T){I?postMessage({cmd:"cleanupThread",thread:T}):ct(T)},xa:mt,j:function(T){throw ie||(ie=T),T},H:bt,Ma:yt,ua:_t,wa:wt,oa:vt,Ka:xt,Ca:Tt,Ja:St,V:At,va:Ot,sa:Et,La:P,ta:M,Ta:function(){},X:function(){fe("To use dlopen, you need enable dynamic linking, see https://github.com/emscripten-core/emscripten/wiki/Linking")},Ua:function(){fe("To use dlopen, you need enable dynamic linking, see 
https://github.com/emscripten-core/emscripten/wiki/Linking")},W:function(){return Date.now()},ya:function(){return 2097152},Oa:function(){return!0},za:function(T,E,k,C){if(T==E)setTimeout(()=>$(C));else if(I)postMessage({targetThread:T,cmd:"processProxyingQueue",queue:C});else{if(!(T=re.Vb[T]))return;T.postMessage({cmd:"processProxyingQueue",queue:C})}return 1},Ea:function(){return-1},Pa:function(T,E){T=new Date(1e3*R(T)),f()[E>>2>>>0]=T.getUTCSeconds(),f()[E+4>>2>>>0]=T.getUTCMinutes(),f()[E+8>>2>>>0]=T.getUTCHours(),f()[E+12>>2>>>0]=T.getUTCDate(),f()[E+16>>2>>>0]=T.getUTCMonth(),f()[E+20>>2>>>0]=T.getUTCFullYear()-1900,f()[E+24>>2>>>0]=T.getUTCDay(),T=(T.getTime()-Date.UTC(T.getUTCFullYear(),0,1,0,0,0,0))/864e5|0,f()[E+28>>2>>>0]=T},Qa:function(T,E){T=new Date(1e3*R(T)),f()[E>>2>>>0]=T.getSeconds(),f()[E+4>>2>>>0]=T.getMinutes(),f()[E+8>>2>>>0]=T.getHours(),f()[E+12>>2>>>0]=T.getDate(),f()[E+16>>2>>>0]=T.getMonth(),f()[E+20>>2>>>0]=T.getFullYear()-1900,f()[E+24>>2>>>0]=T.getDay();var k=new Date(T.getFullYear(),0,1),C=(T.getTime()-k.getTime())/864e5|0;f()[E+28>>2>>>0]=C,f()[E+36>>2>>>0]=-60*T.getTimezoneOffset(),C=new Date(T.getFullYear(),6,1).getTimezoneOffset(),T=0|(C!=(k=k.getTimezoneOffset())&&T.getTimezoneOffset()==Math.min(k,C)),f()[E+32>>2>>>0]=T},Ra:function(T){var E=new Date(f()[T+20>>2>>>0]+1900,f()[T+16>>2>>>0],f()[T+12>>2>>>0],f()[T+8>>2>>>0],f()[T+4>>2>>>0],f()[T>>2>>>0],0),k=f()[T+32>>2>>>0],C=E.getTimezoneOffset(),z=new Date(E.getFullYear(),0,1),G=new Date(E.getFullYear(),6,1).getTimezoneOffset(),K=z.getTimezoneOffset(),ne=Math.min(K,G);return 0>k?f()[T+32>>2>>>0]=+(G!=K&&ne==C):0<k!=(ne==C)&&(G=Math.max(K,G),E.setTime(E.getTime()+6e4*((0<k?ne:G)-C))),f()[T+24>>2>>>0]=E.getDay(),k=(E.getTime()-z.getTime())/864e5|0,f()[T+28>>2>>>0]=k,f()[T>>2>>>0]=E.getSeconds(),f()[T+4>>2>>>0]=E.getMinutes(),f()[T+8>>2>>>0]=E.getHours(),f()[T+12>>2>>>0]=E.getDate(),f()[T+16>>2>>>0]=E.getMonth(),E.getTime()/1e3|0},Aa:U,Ba:W,Sa:function T(E,k,C){T.Ac||(T.Ac=!0,te(E,k,C))},y:function(){fe("")},U:function(){if(!x&&!O){var T="Blocking on the main thread is very dangerous, see https://emscripten.org/docs/porting/pthreads.html#blocking-on-the-main-browser-thread";ot||(ot={}),ot[T]||(ot[T]=1,x&&(T="warning: "+T),j(T))}},ra:function(){return 4294901760},B:ft,Ia:function(T,E,k){h().copyWithin(T>>>0,E>>>0,E+k>>>0)},F:function(){return x?a(3993).cpus().length:navigator.hardwareConcurrency},Da:function(T,E,k){Te.length=E,k>>=3;for(var C=0;C<E;C++)Te[C]=o()[k+C>>>0];return(0>T?Pt[-T-1]:hn[T]).apply(null,Te)},qa:function(T){var E=h().length;if((T>>>=0)<=E||4294901760<T)return!1;for(var k=1;4>=k;k*=2){var C=E*(1+.2/k);C=Math.min(C,T+100663296);var z=Math;C=Math.max(T,C),z=z.min.call(z,4294901760,C+(65536-C%65536)%65536);e:{try{X.grow(z-ee.byteLength+65535>>>16),Ee(X.buffer);var G=1;break e}catch{}G=void 0}if(G)return!0}return!1},Na:function(){throw"unwind"},Ga:V,Ha:me,J:it,I:Pe,S:We,ga:et,R:zt,d:function(){return ze},na:function T(E,k){T.lc||(T.lc=function(){if(typeof crypto=="object"&&typeof crypto.getRandomValues=="function"){var z=new Uint8Array(1);return()=>(crypto.getRandomValues(z),z[0])}if(x)try{var G=a(Object(function(){var K=new Error("Cannot find module 'crypto'");throw K.code="MODULE_NOT_FOUND",K}()));return()=>G.randomBytes(1)[0]}catch{}return()=>fe("randomDevice")}());for(var C=0;C<k;C++)s()[E+C>>0>>>0]=T.lc();return 0},ia:function(T,E,k){var C=de();try{return we(T)(E,k)}catch(z){if(ce(C),z!==z+0)throw z;he(1,0)}},ja:function(T,E,k){var C=de();try{return 
we(T)(E,k)}catch(z){if(ce(C),z!==z+0)throw z;he(1,0)}},K:function(T){var E=de();try{return we(T)()}catch(k){if(ce(E),k!==k+0)throw k;he(1,0)}},f:function(T,E){var k=de();try{return we(T)(E)}catch(C){if(ce(k),C!==C+0)throw C;he(1,0)}},P:function(T,E,k){var C=de();try{return we(T)(E,k)}catch(z){if(ce(C),z!==z+0)throw z;he(1,0)}},Q:function(T,E,k){var C=de();try{return we(T)(E,k)}catch(z){if(ce(C),z!==z+0)throw z;he(1,0)}},k:function(T,E,k){var C=de();try{return we(T)(E,k)}catch(z){if(ce(C),z!==z+0)throw z;he(1,0)}},p:function(T,E,k,C){var z=de();try{return we(T)(E,k,C)}catch(G){if(ce(z),G!==G+0)throw G;he(1,0)}},q:function(T,E,k,C,z){var G=de();try{return we(T)(E,k,C,z)}catch(K){if(ce(G),K!==K+0)throw K;he(1,0)}},N:function(T,E,k,C,z,G){var K=de();try{return we(T)(E,k,C,z,G)}catch(ne){if(ce(K),ne!==ne+0)throw ne;he(1,0)}},s:function(T,E,k,C,z,G){var K=de();try{return we(T)(E,k,C,z,G)}catch(ne){if(ce(K),ne!==ne+0)throw ne;he(1,0)}},w:function(T,E,k,C,z,G,K){var ne=de();try{return we(T)(E,k,C,z,G,K)}catch(pe){if(ce(ne),pe!==pe+0)throw pe;he(1,0)}},L:function(T,E,k,C,z,G,K,ne){var pe=de();try{return we(T)(E,k,C,z,G,K,ne)}catch(ge){if(ce(pe),ge!==ge+0)throw ge;he(1,0)}},E:function(T,E,k,C,z,G,K,ne,pe,ge,De,Ke){var He=de();try{return we(T)(E,k,C,z,G,K,ne,pe,ge,De,Ke)}catch(q){if(ce(He),q!==q+0)throw q;he(1,0)}},aa:function(T,E,k,C,z,G,K,ne){var pe=de();try{return un(T,E,k,C,z,G,K,ne)}catch(ge){if(ce(pe),ge!==ge+0)throw ge;he(1,0)}},_:function(T,E,k,C,z,G,K){var ne=de();try{return en(T,E,k,C,z,G,K)}catch(pe){if(ce(ne),pe!==pe+0)throw pe;he(1,0)}},Z:function(T,E,k,C,z){var G=de();try{return ln(T,E,k,C,z)}catch(K){if(ce(G),K!==K+0)throw K;he(1,0)}},ca:function(T,E,k,C){var z=de();try{return sn(T,E,k,C)}catch(G){if(ce(z),G!==G+0)throw G;he(1,0)}},$:function(T){var E=de();try{return Jt(T)}catch(k){if(ce(E),k!==k+0)throw k;he(1,0)}},ba:function(T,E){var k=de();try{return an(T,E)}catch(C){if(ce(k),C!==C+0)throw C;he(1,0)}},Y:function(T,E,k){var C=de();try{return tn(T,E,k)}catch(z){if(ce(C),z!==z+0)throw z;he(1,0)}},g:function(T){var E=de();try{we(T)()}catch(k){if(ce(E),k!==k+0)throw k;he(1,0)}},r:function(T,E){var k=de();try{we(T)(E)}catch(C){if(ce(k),C!==C+0)throw C;he(1,0)}},i:function(T,E,k){var C=de();try{we(T)(E,k)}catch(z){if(ce(C),z!==z+0)throw z;he(1,0)}},ha:function(T,E,k,C){var z=de();try{we(T)(E,k,C)}catch(G){if(ce(z),G!==G+0)throw G;he(1,0)}},m:function(T,E,k,C){var z=de();try{we(T)(E,k,C)}catch(G){if(ce(z),G!==G+0)throw G;he(1,0)}},v:function(T,E,k,C,z){var G=de();try{we(T)(E,k,C,z)}catch(K){if(ce(G),K!==K+0)throw K;he(1,0)}},u:function(T,E,k,C,z,G){var K=de();try{we(T)(E,k,C,z,G)}catch(ne){if(ce(K),ne!==ne+0)throw ne;he(1,0)}},O:function(T,E,k,C,z,G,K){var ne=de();try{we(T)(E,k,C,z,G,K)}catch(pe){if(ce(ne),pe!==pe+0)throw pe;he(1,0)}},A:function(T,E,k,C,z,G,K,ne){var pe=de();try{we(T)(E,k,C,z,G,K,ne)}catch(ge){if(ce(pe),ge!==ge+0)throw ge;he(1,0)}},ka:function(T,E,k,C,z,G,K,ne,pe){var ge=de();try{we(T)(E,k,C,z,G,K,ne,pe)}catch(De){if(ce(ge),De!==De+0)throw De;he(1,0)}},C:function(T,E,k,C,z,G,K,ne,pe,ge,De){var Ke=de();try{we(T)(E,k,C,z,G,K,ne,pe,ge,De)}catch(He){if(ce(Ke),He!==He+0)throw He;he(1,0)}},D:function(T,E,k,C,z,G,K,ne,pe,ge,De,Ke,He,q,ye,Me){var tt=de();try{we(T)(E,k,C,z,G,K,ne,pe,ge,De,Ke,He,q,ye,Me)}catch(ht){if(ce(tt),ht!==ht+0)throw ht;he(1,0)}},fa:function(T,E,k,C,z,G,K,ne){var pe=de();try{nn(T,E,k,C,z,G,K,ne)}catch(ge){if(ce(pe),ge!==ge+0)throw ge;he(1,0)}},da:function(T,E,k,C,z,G,K,ne,pe,ge,De,Ke){var 
He=de();try{on(T,E,k,C,z,G,K,ne,pe,ge,De,Ke)}catch(q){if(ce(He),q!==q+0)throw q;he(1,0)}},ea:function(T,E,k,C,z,G){var K=de();try{rn(T,E,k,C,z,G)}catch(ne){if(ce(K),ne!==ne+0)throw ne;he(1,0)}},o:function(T){return T},a:X||t.wasmMemory,G:function(T){ze=T},la:Gt,z:function(T,E,k,C){return Gt(T,E,k,C)}};(function(){function T(z,G){t.asm=z.exports,re.qc.push(t.asm.sb),Be=t.asm.ub,Ve.unshift(t.asm.Va),J=G,I||(je--,t.monitorRunDependencies&&t.monitorRunDependencies(je),je==0&&Ye&&(z=Ye,Ye=null,z()))}function E(z){T(z.instance,z.module)}function k(z){return function(){if(!H&&(A||O)){if(typeof fetch=="function"&&!Ie.startsWith("file://"))return fetch(Ie,{credentials:"same-origin"}).then(function(G){if(!G.ok)throw"failed to load wasm binary file at '"+Ie+"'";return G.arrayBuffer()}).catch(function(){return lt()});if(d)return new Promise(function(G,K){d(Ie,function(ne){G(new Uint8Array(ne))},K)})}return Promise.resolve().then(function(){return lt()})}().then(function(G){return WebAssembly.instantiate(G,C)}).then(function(G){return G}).then(z,function(G){j("failed to asynchronously prepare wasm: "+G),fe(G)})}var C={a:pn};if(I||(je++,t.monitorRunDependencies&&t.monitorRunDependencies(je)),t.instantiateWasm)try{return t.instantiateWasm(C,T)}catch(z){return j("Module.instantiateWasm callback failed with error: "+z),!1}(H||typeof WebAssembly.instantiateStreaming!="function"||pt()||Ie.startsWith("file://")||x||typeof fetch!="function"?k(E):fetch(Ie,{credentials:"same-origin"}).then(function(z){return WebAssembly.instantiateStreaming(z,C).then(E,function(G){return j("wasm streaming compile failed: "+G),j("falling back to ArrayBuffer instantiation"),k(E)})})).catch(r)})(),t.___wasm_call_ctors=function(){return(t.___wasm_call_ctors=t.asm.Va).apply(null,arguments)},t._OrtInit=function(){return(t._OrtInit=t.asm.Wa).apply(null,arguments)},t._OrtCreateSessionOptions=function(){return(t._OrtCreateSessionOptions=t.asm.Xa).apply(null,arguments)},t._OrtAppendExecutionProvider=function(){return(t._OrtAppendExecutionProvider=t.asm.Ya).apply(null,arguments)},t._OrtAddSessionConfigEntry=function(){return(t._OrtAddSessionConfigEntry=t.asm.Za).apply(null,arguments)},t._OrtReleaseSessionOptions=function(){return(t._OrtReleaseSessionOptions=t.asm._a).apply(null,arguments)},t._OrtCreateSession=function(){return(t._OrtCreateSession=t.asm.$a).apply(null,arguments)},t._OrtReleaseSession=function(){return(t._OrtReleaseSession=t.asm.ab).apply(null,arguments)},t._OrtGetInputCount=function(){return(t._OrtGetInputCount=t.asm.bb).apply(null,arguments)},t._OrtGetOutputCount=function(){return(t._OrtGetOutputCount=t.asm.cb).apply(null,arguments)},t._OrtGetInputName=function(){return(t._OrtGetInputName=t.asm.db).apply(null,arguments)},t._OrtGetOutputName=function(){return(t._OrtGetOutputName=t.asm.eb).apply(null,arguments)},t._OrtFree=function(){return(t._OrtFree=t.asm.fb).apply(null,arguments)},t._OrtCreateTensor=function(){return(t._OrtCreateTensor=t.asm.gb).apply(null,arguments)},t._OrtGetTensorData=function(){return(t._OrtGetTensorData=t.asm.hb).apply(null,arguments)},t._OrtReleaseTensor=function(){return(t._OrtReleaseTensor=t.asm.ib).apply(null,arguments)},t._OrtCreateRunOptions=function(){return(t._OrtCreateRunOptions=t.asm.jb).apply(null,arguments)},t._OrtAddRunConfigEntry=function(){return(t._OrtAddRunConfigEntry=t.asm.kb).apply(null,arguments)},t._OrtReleaseRunOptions=function(){return(t._OrtReleaseRunOptions=t.asm.lb).apply(null,arguments)},t._OrtRun=function(){return(t._OrtRun=t.asm.mb).apply(null,arguments)},t._OrtEndProfilin
g=function(){return(t._OrtEndProfiling=t.asm.nb).apply(null,arguments)};var Dt=t._pthread_self=function(){return(Dt=t._pthread_self=t.asm.ob).apply(null,arguments)},Lt=t._malloc=function(){return(Lt=t._malloc=t.asm.pb).apply(null,arguments)},Vt=t._free=function(){return(Vt=t._free=t.asm.qb).apply(null,arguments)},qt=t._fflush=function(){return(qt=t._fflush=t.asm.rb).apply(null,arguments)};t.__emscripten_tls_init=function(){return(t.__emscripten_tls_init=t.asm.sb).apply(null,arguments)};var Wt=t.___funcs_on_exit=function(){return(Wt=t.___funcs_on_exit=t.asm.tb).apply(null,arguments)},Ht=t.__emscripten_thread_init=function(){return(Ht=t.__emscripten_thread_init=t.asm.vb).apply(null,arguments)};t.__emscripten_thread_crashed=function(){return(t.__emscripten_thread_crashed=t.asm.wb).apply(null,arguments)};var Ct,Xt=t._emscripten_run_in_main_runtime_thread_js=function(){return(Xt=t._emscripten_run_in_main_runtime_thread_js=t.asm.xb).apply(null,arguments)},Yt=t.__emscripten_proxy_execute_task_queue=function(){return(Yt=t.__emscripten_proxy_execute_task_queue=t.asm.yb).apply(null,arguments)},Rt=t.__emscripten_thread_free_data=function(){return(Rt=t.__emscripten_thread_free_data=t.asm.zb).apply(null,arguments)},Kt=t.__emscripten_thread_exit=function(){return(Kt=t.__emscripten_thread_exit=t.asm.Ab).apply(null,arguments)},he=t._setThrew=function(){return(he=t._setThrew=t.asm.Bb).apply(null,arguments)},Zt=t._emscripten_stack_set_limits=function(){return(Zt=t._emscripten_stack_set_limits=t.asm.Cb).apply(null,arguments)},de=t.stackSave=function(){return(de=t.stackSave=t.asm.Db).apply(null,arguments)},ce=t.stackRestore=function(){return(ce=t.stackRestore=t.asm.Eb).apply(null,arguments)},jt=t.stackAlloc=function(){return(jt=t.stackAlloc=t.asm.Fb).apply(null,arguments)},Nt=t.___cxa_can_catch=function(){return(Nt=t.___cxa_can_catch=t.asm.Gb).apply(null,arguments)},Qt=t.___cxa_is_pointer_type=function(){return(Qt=t.___cxa_is_pointer_type=t.asm.Hb).apply(null,arguments)},Jt=t.dynCall_j=function(){return(Jt=t.dynCall_j=t.asm.Ib).apply(null,arguments)},en=t.dynCall_iiiiij=function(){return(en=t.dynCall_iiiiij=t.asm.Jb).apply(null,arguments)},tn=t.dynCall_jii=function(){return(tn=t.dynCall_jii=t.asm.Kb).apply(null,arguments)},nn=t.dynCall_viiiiij=function(){return(nn=t.dynCall_viiiiij=t.asm.Lb).apply(null,arguments)},rn=t.dynCall_vjji=function(){return(rn=t.dynCall_vjji=t.asm.Mb).apply(null,arguments)},on=t.dynCall_viiijjjii=function(){return(on=t.dynCall_viiijjjii=t.asm.Nb).apply(null,arguments)},sn=t.dynCall_iij=function(){return(sn=t.dynCall_iij=t.asm.Ob).apply(null,arguments)},an=t.dynCall_ji=function(){return(an=t.dynCall_ji=t.asm.Pb).apply(null,arguments)},un=t.dynCall_iiiiiij=function(){return(un=t.dynCall_iiiiiij=t.asm.Qb).apply(null,arguments)},ln=t.dynCall_iiij=function(){return(ln=t.dynCall_iiij=t.asm.Rb).apply(null,arguments)};function cn(){function T(){if(!Ct&&(Ct=!0,t.calledRun=!0,!be)&&(I||rt(Ve),e(t),t.onRuntimeInitialized&&t.onRuntimeInitialized(),!I)){if(t.postRun)for(typeof t.postRun=="function"&&(t.postRun=[t.postRun]);t.postRun.length;){var E=t.postRun.shift();Ze.unshift(E)}rt(Ze)}}if(!(0<je))if(I)e(t),I||rt(Ve),postMessage({cmd:"loaded"});else{if(t.preRun)for(typeof t.preRun=="function"&&(t.preRun=[t.preRun]);t.preRun.length;)Ue();rt(Ge),0<je||(t.setStatus?(t.setStatus("Running..."),setTimeout(function(){setTimeout(function(){t.setStatus("")},1),T()},1)):T())}}if(t.UTF8ToString=xe,t.stringToUTF8=function(T,E,k){return 
Ne(T,h(),E,k)},t.lengthBytesUTF8=Ce,t.keepRuntimeAlive=qe,t.wasmMemory=X,t.stackSave=de,t.stackRestore=ce,t.stackAlloc=jt,t.ExitStatus=Qe,t.PThread=re,Ye=function T(){Ct||cn(),Ct||(Ye=T)},t.preInit)for(typeof t.preInit=="function"&&(t.preInit=[t.preInit]);0<t.preInit.length;)t.preInit.pop()();return cn(),p.ready});y.exports=c},932:(y,n,a)=>{var u,c=(u=(u=typeof document<"u"&&document.currentScript?document.currentScript.src:void 0)||"/index.js",function(p){var s,h,f;p=p||{},s||(s=p!==void 0?p:{}),s.ready=new Promise(function(P,M){h=P,f=M});var l,o,t,e,r,i,d=Object.assign({},s),g="./this.program",m=(P,M)=>{throw M},b=typeof window=="object",_=typeof importScripts=="function",v=typeof process=="object"&&typeof process.versions=="object"&&typeof process.versions.node=="string",w="";v?(w=_?a(908).dirname(w)+"/":"//",i=()=>{r||(e=a(1384),r=a(908))},l=function(P,M){return i(),P=r.normalize(P),e.readFileSync(P,M?void 0:"utf8")},t=P=>((P=l(P,!0)).buffer||(P=new Uint8Array(P)),P),o=(P,M,$)=>{i(),P=r.normalize(P),e.readFile(P,function(R,U){R?$(R):M(U.buffer)})},1<process.argv.length&&(g=process.argv[1].replace(/\\/g,"/")),process.argv.slice(2),process.on("uncaughtException",function(P){if(!(P instanceof Ve))throw P}),process.on("unhandledRejection",function(P){throw P}),m=(P,M)=>{if(x||0<ke)throw process.exitCode=P,M;M instanceof Ve||O("exiting due to exception: "+M),process.exit(P)},s.inspect=function(){return"[Emscripten Module object]"}):(b||_)&&(_?w=self.location.href:typeof document<"u"&&document.currentScript&&(w=document.currentScript.src),u&&(w=u),w=w.indexOf("blob:")!==0?w.substr(0,w.replace(/[?#].*/,"").lastIndexOf("/")+1):"",l=P=>{var M=new XMLHttpRequest;return M.open("GET",P,!1),M.send(null),M.responseText},_&&(t=P=>{var M=new XMLHttpRequest;return M.open("GET",P,!1),M.responseType="arraybuffer",M.send(null),new Uint8Array(M.response)}),o=(P,M,$)=>{var R=new XMLHttpRequest;R.open("GET",P,!0),R.responseType="arraybuffer",R.onload=()=>{R.status==200||R.status==0&&R.response?M(R.response):$()},R.onerror=$,R.send(null)});var S,A=s.print||console.log.bind(console),O=s.printErr||console.warn.bind(console);Object.assign(s,d),d=null,s.thisProgram&&(g=s.thisProgram),s.quit&&(m=s.quit),s.wasmBinary&&(S=s.wasmBinary);var x=s.noExitRuntime||!1;typeof WebAssembly!="object"&&Ee("no native wasm support detected");var I,N,B,L,F,H,D=!1,j=typeof TextDecoder<"u"?new TextDecoder("utf8"):void 0;function Z(P,M,$){var R=(M>>>=0)+$;for($=M;P[$]&&!($>=R);)++$;if(16<$-M&&P.buffer&&j)return j.decode(P.subarray(M,$));for(R="";M<$;){var U=P[M++];if(128&U){var W=63&P[M++];if((224&U)==192)R+=String.fromCharCode((31&U)<<6|W);else{var Y=63&P[M++];65536>(U=(240&U)==224?(15&U)<<12|W<<6|Y:(7&U)<<18|W<<12|Y<<6|63&P[M++])?R+=String.fromCharCode(U):(U-=65536,R+=String.fromCharCode(55296|U>>10,56320|1023&U))}}else R+=String.fromCharCode(U)}return R}function X(P,M){return(P>>>=0)?Z(L,P,M):""}function J(P,M,$,R){if(!(0<R))return 0;var U=$>>>=0;R=$+R-1;for(var W=0;W<P.length;++W){var Y=P.charCodeAt(W);if(55296<=Y&&57343>=Y&&(Y=65536+((1023&Y)<<10)|1023&P.charCodeAt(++W)),127>=Y){if($>=R)break;M[$++>>>0]=Y}else{if(2047>=Y){if($+1>=R)break;M[$++>>>0]=192|Y>>6}else{if(65535>=Y){if($+2>=R)break;M[$++>>>0]=224|Y>>12}else{if($+3>=R)break;M[$++>>>0]=240|Y>>18,M[$++>>>0]=128|Y>>12&63}M[$++>>>0]=128|Y>>6&63}M[$++>>>0]=128|63&Y}}return M[$>>>0]=0,$-U}function ee(P){for(var M=0,$=0;$<P.length;++$){var R=P.charCodeAt($);127>=R?M++:2047>=R?M+=2:55296<=R&&57343>=R?(M+=4,++$):M+=3}return M}function ue(){var P=I.buffer;N=P,s.HEAP8=B=new 
Int8Array(P),s.HEAP16=new Int16Array(P),s.HEAP32=F=new Int32Array(P),s.HEAPU8=L=new Uint8Array(P),s.HEAPU16=new Uint16Array(P),s.HEAPU32=H=new Uint32Array(P),s.HEAPF32=new Float32Array(P),s.HEAPF64=new Float64Array(P)}var Ae,ve=[],oe=[],_e=[],be=[],ke=0;function Fe(){var P=s.preRun.shift();ve.unshift(P)}var xe,Ne=0,Ce=null;function Ee(P){throw s.onAbort&&s.onAbort(P),O(P="Aborted("+P+")"),D=!0,P=new WebAssembly.RuntimeError(P+". Build with -sASSERTIONS for more info."),f(P),P}function Oe(){return xe.startsWith("data:application/octet-stream;base64,")}if(xe="ort-wasm.wasm",!Oe()){var Be=xe;xe=s.locateFile?s.locateFile(Be,w):w+Be}function Ge(){var P=xe;try{if(P==xe&&S)return new Uint8Array(S);if(t)return t(P);throw"both async and sync fetching of the wasm failed"}catch(M){Ee(M)}}function Ve(P){this.name="ExitStatus",this.message="Program terminated with exit("+P+")",this.status=P}function Xe(P){for(;0<P.length;)P.shift()(s)}var Ze=[],qe=0,Ue=0;function Ie(P){this.Db=P,this.zb=P-24,this.Ub=function(M){H[this.zb+4>>2>>>0]=M},this.Eb=function(){return H[this.zb+4>>2>>>0]},this.Sb=function(M){H[this.zb+8>>2>>>0]=M},this.Wb=function(){return H[this.zb+8>>2>>>0]},this.Tb=function(){F[this.zb>>2>>>0]=0},this.Ib=function(M){B[this.zb+12>>0>>>0]=M?1:0},this.Pb=function(){return B[this.zb+12>>0>>>0]!=0},this.Jb=function(M){B[this.zb+13>>0>>>0]=M?1:0},this.Lb=function(){return B[this.zb+13>>0>>>0]!=0},this.Rb=function(M,$){this.Fb(0),this.Ub(M),this.Sb($),this.Tb(),this.Ib(!1),this.Jb(!1)},this.Nb=function(){F[this.zb>>2>>>0]+=1},this.Xb=function(){var M=F[this.zb>>2>>>0];return F[this.zb>>2>>>0]=M-1,M===1},this.Fb=function(M){H[this.zb+16>>2>>>0]=M},this.Ob=function(){return H[this.zb+16>>2>>>0]},this.Qb=function(){if(mt(this.Eb()))return H[this.Db>>2>>>0];var M=this.Ob();return M!==0?M:this.Db}}function je(P){return ot(new Ie(P).zb)}var Ye=[];function fe(P){var M=Ye[P];return M||(P>=Ye.length&&(Ye.length=P+1),Ye[P]=M=Ae.get(P)),M}function pt(P){var M=ee(P)+1,$=we(M);return $&&J(P,B,$,M),$}var lt={};function Pt(){if(!Qe){var P,M={USER:"web_user",LOGNAME:"web_user",PATH:"/",PWD:"/",HOME:"/home/web_user",LANG:(typeof navigator=="object"&&navigator.languages&&navigator.languages[0]||"C").replace("-","_")+".UTF-8",_:g||"./this.program"};for(P in lt)lt[P]===void 0?delete M[P]:M[P]=lt[P];var $=[];for(P in M)$.push(P+"="+M[P]);Qe=$}return Qe}var Qe,ct=[null,[],[]];function dt(P,M){var $=ct[P];M===0||M===10?((P===1?A:O)(Z($,0)),$.length=0):$.push(M)}var Re=0;function it(P){return P%4==0&&(P%100!=0||P%400==0)}var re=[31,29,31,30,31,30,31,31,30,31,30,31],rt=[31,28,31,30,31,30,31,31,30,31,30,31];function It(P,M,$,R){function U(V,me,Pe){for(V=typeof V=="number"?V.toString():V||"";V.length<me;)V=Pe[0]+V;return V}function W(V,me){return U(V,me,"0")}function Y(V,me){function Pe(et){return 0>et?-1:0<et?1:0}var We;return(We=Pe(V.getFullYear()-me.getFullYear()))===0&&(We=Pe(V.getMonth()-me.getMonth()))===0&&(We=Pe(V.getDate()-me.getDate())),We}function te(V){switch(V.getDay()){case 0:return new Date(V.getFullYear()-1,11,29);case 1:return V;case 2:return new Date(V.getFullYear(),0,3);case 3:return new Date(V.getFullYear(),0,2);case 4:return new Date(V.getFullYear(),0,1);case 5:return new Date(V.getFullYear()-1,11,31);case 6:return new Date(V.getFullYear()-1,11,30)}}function Q(V){var me=V.Bb;for(V=new Date(new Date(V.Cb+1900,0,1).getTime());0<me;){var 
Pe=V.getMonth(),We=(it(V.getFullYear())?re:rt)[Pe];if(!(me>We-V.getDate())){V.setDate(V.getDate()+me);break}me-=We-V.getDate()+1,V.setDate(1),11>Pe?V.setMonth(Pe+1):(V.setMonth(0),V.setFullYear(V.getFullYear()+1))}return Pe=new Date(V.getFullYear()+1,0,4),me=te(new Date(V.getFullYear(),0,4)),Pe=te(Pe),0>=Y(me,V)?0>=Y(Pe,V)?V.getFullYear()+1:V.getFullYear():V.getFullYear()-1}var le=F[R+40>>2>>>0];for(var Te in R={$b:F[R>>2>>>0],Zb:F[R+4>>2>>>0],Gb:F[R+8>>2>>>0],Kb:F[R+12>>2>>>0],Hb:F[R+16>>2>>>0],Cb:F[R+20>>2>>>0],Ab:F[R+24>>2>>>0],Bb:F[R+28>>2>>>0],bc:F[R+32>>2>>>0],Yb:F[R+36>>2>>>0],ac:le?X(le):""},$=X($),le={"%c":"%a %b %d %H:%M:%S %Y","%D":"%m/%d/%y","%F":"%Y-%m-%d","%h":"%b","%r":"%I:%M:%S %p","%R":"%H:%M","%T":"%H:%M:%S","%x":"%m/%d/%y","%X":"%H:%M:%S","%Ec":"%c","%EC":"%C","%Ex":"%m/%d/%y","%EX":"%H:%M:%S","%Ey":"%y","%EY":"%Y","%Od":"%d","%Oe":"%e","%OH":"%H","%OI":"%I","%Om":"%m","%OM":"%M","%OS":"%S","%Ou":"%u","%OU":"%U","%OV":"%V","%Ow":"%w","%OW":"%W","%Oy":"%y"})$=$.replace(new RegExp(Te,"g"),le[Te]);var Le="Sunday Monday Tuesday Wednesday Thursday Friday Saturday".split(" "),$e="January February March April May June July August September October November December".split(" ");for(Te in le={"%a":function(V){return Le[V.Ab].substring(0,3)},"%A":function(V){return Le[V.Ab]},"%b":function(V){return $e[V.Hb].substring(0,3)},"%B":function(V){return $e[V.Hb]},"%C":function(V){return W((V.Cb+1900)/100|0,2)},"%d":function(V){return W(V.Kb,2)},"%e":function(V){return U(V.Kb,2," ")},"%g":function(V){return Q(V).toString().substring(2)},"%G":function(V){return Q(V)},"%H":function(V){return W(V.Gb,2)},"%I":function(V){return(V=V.Gb)==0?V=12:12<V&&(V-=12),W(V,2)},"%j":function(V){for(var me=0,Pe=0;Pe<=V.Hb-1;me+=(it(V.Cb+1900)?re:rt)[Pe++]);return W(V.Kb+me,3)},"%m":function(V){return W(V.Hb+1,2)},"%M":function(V){return W(V.Zb,2)},"%n":function(){return` -`},"%p":function(V){return 0<=V.Gb&&12>V.Gb?"AM":"PM"},"%S":function(V){return W(V.$b,2)},"%t":function(){return" "},"%u":function(V){return V.Ab||7},"%U":function(V){return W(Math.floor((V.Bb+7-V.Ab)/7),2)},"%V":function(V){var me=Math.floor((V.Bb+7-(V.Ab+6)%7)/7);if(2>=(V.Ab+371-V.Bb-2)%7&&me++,me)me==53&&((Pe=(V.Ab+371-V.Bb)%7)==4||Pe==3&&it(V.Cb)||(me=1));else{me=52;var Pe=(V.Ab+7-V.Bb-1)%7;(Pe==4||Pe==5&&it(V.Cb%400-1))&&me++}return W(me,2)},"%w":function(V){return V.Ab},"%W":function(V){return W(Math.floor((V.Bb+7-(V.Ab+6)%7)/7),2)},"%y":function(V){return(V.Cb+1900).toString().substring(2)},"%Y":function(V){return V.Cb+1900},"%z":function(V){var me=0<=(V=V.Yb);return V=Math.abs(V)/60,(me?"+":"-")+("0000"+(V/60*100+V%60)).slice(-4)},"%Z":function(V){return V.ac},"%%":function(){return"%"}},$=$.replace(/%%/g,"\0\0"),le)$.includes(Te)&&($=$.replace(new RegExp(Te,"g"),le[Te](R)));return Te=function(V){var me=Array(ee(V)+1);return J(V,me,0,me.length),me}($=$.replace(/\0\0/g,"%")),Te.length>M?0:(B.set(Te,P>>>0),Te.length-1)}var kt={a:function(P){return we(P+24)+24},m:function(P){return(P=new Ie(P)).Pb()||(P.Ib(!0),qe--),P.Jb(!1),Ze.push(P),P.Nb(),P.Qb()},ia:function(P){throw O("Unexpected exception thrown, this is not properly supported - aborting"),D=!0,P},w:function(){ae(0);var P=Ze.pop();if(P.Xb()&&!P.Lb()){var M=P.Wb();M&&fe(M)(P.Db),je(P.Db)}Ue=0},d:function(){var P=Ue;if(!P)return Re=0;var M=new Ie(P);M.Fb(P);var $=M.Eb();if(!$)return Re=0,P;for(var R=Array.prototype.slice.call(arguments),U=0;U<R.length;U++){var W=R[U];if(W===0||W===$)break;if(at(W,$,M.zb+16))return Re=W,P}return Re=$,P},k:function(){var P=Ue;if(!P)return Re=0;var 
M=new Ie(P);M.Fb(P);var $=M.Eb();if(!$)return Re=0,P;for(var R=Array.prototype.slice.call(arguments),U=0;U<R.length;U++){var W=R[U];if(W===0||W===$)break;if(at(W,$,M.zb+16))return Re=W,P}return Re=$,P},g:function(){var P=Ue;if(!P)return Re=0;var M=new Ie(P);M.Fb(P);var $=M.Eb();if(!$)return Re=0,P;for(var R=Array.prototype.slice.call(arguments),U=0;U<R.length;U++){var W=R[U];if(W===0||W===$)break;if(at(W,$,M.zb+16))return Re=W,P}return Re=$,P},s:je,L:function(){var P=Ze.pop();P||Ee("no exception to throw");var M=P.Db;throw P.Lb()||(Ze.push(P),P.Jb(!0),P.Ib(!1),qe++),Ue=M,M},b:function(P,M,$){throw new Ie(P).Rb(M,$),Ue=P,qe++,P},la:function(){return qe},i:function(P){throw Ue||(Ue=P),P},H:function(){return 0},Ba:function(){},pa:function(){},ra:function(){},ka:function(){return 0},za:function(){},ua:function(){},ya:function(){},R:function(){},qa:function(){},na:function(){},Aa:function(){},oa:function(){},Ha:function(){},Ja:function(){Ee("To use dlopen, you need enable dynamic linking, see https://github.com/emscripten-core/emscripten/wiki/Linking")},Ia:function(){Ee("To use dlopen, you need enable dynamic linking, see https://github.com/emscripten-core/emscripten/wiki/Linking")},S:function(){return Date.now()},Ca:function(){return!0},Da:function(P,M){P=new Date(1e3*(H[P>>>2]+4294967296*F[P+4>>>2])),F[M>>2>>>0]=P.getUTCSeconds(),F[M+4>>2>>>0]=P.getUTCMinutes(),F[M+8>>2>>>0]=P.getUTCHours(),F[M+12>>2>>>0]=P.getUTCDate(),F[M+16>>2>>>0]=P.getUTCMonth(),F[M+20>>2>>>0]=P.getUTCFullYear()-1900,F[M+24>>2>>>0]=P.getUTCDay(),F[M+28>>2>>>0]=(P.getTime()-Date.UTC(P.getUTCFullYear(),0,1,0,0,0,0))/864e5|0},Ea:function(P,M){P=new Date(1e3*(H[P>>>2]+4294967296*F[P+4>>>2])),F[M>>2>>>0]=P.getSeconds(),F[M+4>>2>>>0]=P.getMinutes(),F[M+8>>2>>>0]=P.getHours(),F[M+12>>2>>>0]=P.getDate(),F[M+16>>2>>>0]=P.getMonth(),F[M+20>>2>>>0]=P.getFullYear()-1900,F[M+24>>2>>>0]=P.getDay();var $=new Date(P.getFullYear(),0,1);F[M+28>>2>>>0]=(P.getTime()-$.getTime())/864e5|0,F[M+36>>2>>>0]=-60*P.getTimezoneOffset();var R=new Date(P.getFullYear(),6,1).getTimezoneOffset();$=$.getTimezoneOffset(),F[M+32>>2>>>0]=0|(R!=$&&P.getTimezoneOffset()==Math.min($,R))},Fa:function(P){var M=new Date(F[P+20>>2>>>0]+1900,F[P+16>>2>>>0],F[P+12>>2>>>0],F[P+8>>2>>>0],F[P+4>>2>>>0],F[P>>2>>>0],0),$=F[P+32>>2>>>0],R=M.getTimezoneOffset(),U=new Date(M.getFullYear(),0,1),W=new Date(M.getFullYear(),6,1).getTimezoneOffset(),Y=U.getTimezoneOffset(),te=Math.min(Y,W);return 0>$?F[P+32>>2>>>0]=+(W!=Y&&te==R):0<$!=(te==R)&&(W=Math.max(Y,W),M.setTime(M.getTime()+6e4*((0<$?te:W)-R))),F[P+24>>2>>>0]=M.getDay(),F[P+28>>2>>>0]=(M.getTime()-U.getTime())/864e5|0,F[P>>2>>>0]=M.getSeconds(),F[P+4>>2>>>0]=M.getMinutes(),F[P+8>>2>>>0]=M.getHours(),F[P+12>>2>>>0]=M.getDate(),F[P+16>>2>>>0]=M.getMonth(),M.getTime()/1e3|0},sa:function(){return-52},ta:function(){},Ga:function P(M,$,R){P.Vb||(P.Vb=!0,function(U,W,Y){function te($e){return($e=$e.toTimeString().match(/\(([A-Za-z ]+)\)$/))?$e[1]:"GMT"}var Q=new Date().getFullYear(),le=new Date(Q,0,1),Te=new Date(Q,6,1);Q=le.getTimezoneOffset();var Le=Te.getTimezoneOffset();F[U>>2>>>0]=60*Math.max(Q,Le),F[W>>2>>>0]=+(Q!=Le),U=te(le),W=te(Te),U=pt(U),W=pt(W),Le<Q?(H[Y>>2>>>0]=U,H[Y+4>>2>>>0]=W):(H[Y>>2>>>0]=W,H[Y+4>>2>>>0]=U)}(M,$,R))},B:function(){Ee("")},ma:function(){return 4294901760},I:v?()=>{var P=process.hrtime();return 1e3*P[0]+P[1]/1e6}:()=>performance.now(),xa:function(P,M,$){L.copyWithin(P>>>0,M>>>0,M+$>>>0)},G:function(P){var M=L.length;if(4294901760<(P>>>=0))return!1;for(var $=1;4>=$;$*=2){var 
R=M*(1+.2/$);R=Math.min(R,P+100663296);var U=Math;R=Math.max(P,R),U=U.min.call(U,4294901760,R+(65536-R%65536)%65536);e:{try{I.grow(U-N.byteLength+65535>>>16),ue();var W=1;break e}catch{}W=void 0}if(W)return!0}return!1},va:function(P,M){var $=0;return Pt().forEach(function(R,U){var W=M+$;for(U=H[P+4*U>>2>>>0]=W,W=0;W<R.length;++W)B[U++>>0>>>0]=R.charCodeAt(W);B[U>>0>>>0]=0,$+=R.length+1}),0},wa:function(P,M){var $=Pt();H[P>>2>>>0]=$.length;var R=0;return $.forEach(function(U){R+=U.length+1}),H[M>>2>>>0]=R,0},ba:function(P){x||0<ke||(st(),Xe(_e),ft(0),ct[1].length&&dt(1,10),ct[2].length&&dt(2,10)),x||0<ke||(s.onExit&&s.onExit(P),D=!0),m(P,new Ve(P))},E:function(){return 52},Q:function(){return 52},ca:function(){return 70},P:function(P,M,$,R){for(var U=0,W=0;W<$;W++){var Y=H[M>>2>>>0],te=H[M+4>>2>>>0];M+=8;for(var Q=0;Q<te;Q++)dt(P,L[Y+Q>>>0]);U+=te}return H[R>>2>>>0]=U,0},c:function(){return Re},ja:function P(M,$){P.Mb||(P.Mb=function(){if(typeof crypto=="object"&&typeof crypto.getRandomValues=="function"){var U=new Uint8Array(1);return()=>(crypto.getRandomValues(U),U[0])}if(v)try{var W=a(Object(function(){var Y=new Error("Cannot find module 'crypto'");throw Y.code="MODULE_NOT_FOUND",Y}()));return()=>W.randomBytes(1)[0]}catch{}return()=>Ee("randomDevice")}());for(var R=0;R<$;R++)B[M+R>>0>>>0]=P.Mb();return 0},ea:function(P,M,$){var R=ie();try{return fe(P)(M,$)}catch(U){if(se(R),U!==U+0)throw U;ae(1,0)}},fa:function(P,M,$){var R=ie();try{return fe(P)(M,$)}catch(U){if(se(R),U!==U+0)throw U;ae(1,0)}},J:function(P){var M=ie();try{return fe(P)()}catch($){if(se(M),$!==$+0)throw $;ae(1,0)}},e:function(P,M){var $=ie();try{return fe(P)(M)}catch(R){if(se($),R!==R+0)throw R;ae(1,0)}},N:function(P,M,$){var R=ie();try{return fe(P)(M,$)}catch(U){if(se(R),U!==U+0)throw U;ae(1,0)}},O:function(P,M,$){var R=ie();try{return fe(P)(M,$)}catch(U){if(se(R),U!==U+0)throw U;ae(1,0)}},j:function(P,M,$){var R=ie();try{return fe(P)(M,$)}catch(U){if(se(R),U!==U+0)throw U;ae(1,0)}},o:function(P,M,$,R){var U=ie();try{return fe(P)(M,$,R)}catch(W){if(se(U),W!==W+0)throw W;ae(1,0)}},p:function(P,M,$,R,U){var W=ie();try{return fe(P)(M,$,R,U)}catch(Y){if(se(W),Y!==Y+0)throw Y;ae(1,0)}},M:function(P,M,$,R,U,W){var Y=ie();try{return fe(P)(M,$,R,U,W)}catch(te){if(se(Y),te!==te+0)throw te;ae(1,0)}},r:function(P,M,$,R,U,W){var Y=ie();try{return fe(P)(M,$,R,U,W)}catch(te){if(se(Y),te!==te+0)throw te;ae(1,0)}},v:function(P,M,$,R,U,W,Y){var te=ie();try{return fe(P)(M,$,R,U,W,Y)}catch(Q){if(se(te),Q!==Q+0)throw Q;ae(1,0)}},K:function(P,M,$,R,U,W,Y,te){var Q=ie();try{return fe(P)(M,$,R,U,W,Y,te)}catch(le){if(se(Q),le!==le+0)throw le;ae(1,0)}},D:function(P,M,$,R,U,W,Y,te,Q,le,Te,Le){var $e=ie();try{return fe(P)(M,$,R,U,W,Y,te,Q,le,Te,Le)}catch(V){if(se($e),V!==V+0)throw V;ae(1,0)}},X:function(P,M,$,R,U,W,Y,te){var Q=ie();try{return At(P,M,$,R,U,W,Y,te)}catch(le){if(se(Q),le!==le+0)throw le;ae(1,0)}},V:function(P,M,$,R,U,W,Y){var te=ie();try{return yt(P,M,$,R,U,W,Y)}catch(Q){if(se(te),Q!==Q+0)throw Q;ae(1,0)}},U:function(P,M,$,R,U){var W=ie();try{return Ot(P,M,$,R,U)}catch(Y){if(se(W),Y!==Y+0)throw Y;ae(1,0)}},Z:function(P,M,$,R){var U=ie();try{return Tt(P,M,$,R)}catch(W){if(se(U),W!==W+0)throw W;ae(1,0)}},W:function(P){var M=ie();try{return bt(P)}catch($){if(se(M),$!==$+0)throw $;ae(1,0)}},Y:function(P,M){var $=ie();try{return St(P,M)}catch(R){if(se($),R!==R+0)throw R;ae(1,0)}},T:function(P,M,$){var R=ie();try{return _t(P,M,$)}catch(U){if(se(R),U!==U+0)throw U;ae(1,0)}},f:function(P){var M=ie();try{fe(P)()}catch($){if(se(M),$!==$+0)throw 
$;ae(1,0)}},q:function(P,M){var $=ie();try{fe(P)(M)}catch(R){if(se($),R!==R+0)throw R;ae(1,0)}},h:function(P,M,$){var R=ie();try{fe(P)(M,$)}catch(U){if(se(R),U!==U+0)throw U;ae(1,0)}},da:function(P,M,$,R){var U=ie();try{fe(P)(M,$,R)}catch(W){if(se(U),W!==W+0)throw W;ae(1,0)}},l:function(P,M,$,R){var U=ie();try{fe(P)(M,$,R)}catch(W){if(se(U),W!==W+0)throw W;ae(1,0)}},t:function(P,M,$,R,U){var W=ie();try{fe(P)(M,$,R,U)}catch(Y){if(se(W),Y!==Y+0)throw Y;ae(1,0)}},u:function(P,M,$,R,U,W){var Y=ie();try{fe(P)(M,$,R,U,W)}catch(te){if(se(Y),te!==te+0)throw te;ae(1,0)}},x:function(P,M,$,R,U,W,Y){var te=ie();try{fe(P)(M,$,R,U,W,Y)}catch(Q){if(se(te),Q!==Q+0)throw Q;ae(1,0)}},z:function(P,M,$,R,U,W,Y,te){var Q=ie();try{fe(P)(M,$,R,U,W,Y,te)}catch(le){if(se(Q),le!==le+0)throw le;ae(1,0)}},ga:function(P,M,$,R,U,W,Y,te,Q){var le=ie();try{fe(P)(M,$,R,U,W,Y,te,Q)}catch(Te){if(se(le),Te!==Te+0)throw Te;ae(1,0)}},A:function(P,M,$,R,U,W,Y,te,Q,le,Te){var Le=ie();try{fe(P)(M,$,R,U,W,Y,te,Q,le,Te)}catch($e){if(se(Le),$e!==$e+0)throw $e;ae(1,0)}},C:function(P,M,$,R,U,W,Y,te,Q,le,Te,Le,$e,V,me,Pe){var We=ie();try{fe(P)(M,$,R,U,W,Y,te,Q,le,Te,Le,$e,V,me,Pe)}catch(et){if(se(We),et!==et+0)throw et;ae(1,0)}},aa:function(P,M,$,R,U,W,Y,te){var Q=ie();try{wt(P,M,$,R,U,W,Y,te)}catch(le){if(se(Q),le!==le+0)throw le;ae(1,0)}},_:function(P,M,$,R,U,W,Y,te,Q,le,Te,Le){var $e=ie();try{xt(P,M,$,R,U,W,Y,te,Q,le,Te,Le)}catch(V){if(se($e),V!==V+0)throw V;ae(1,0)}},$:function(P,M,$,R,U,W){var Y=ie();try{vt(P,M,$,R,U,W)}catch(te){if(se(Y),te!==te+0)throw te;ae(1,0)}},n:function(P){return P},F:function(P){Re=P},ha:It,y:function(P,M,$,R){return It(P,M,$,R)}};(function(){function P(U){s.asm=U.exports,I=s.asm.Ka,ue(),Ae=s.asm.ib,oe.unshift(s.asm.La),Ne--,s.monitorRunDependencies&&s.monitorRunDependencies(Ne),Ne==0&&Ce&&(U=Ce,Ce=null,U())}function M(U){P(U.instance)}function $(U){return function(){if(!S&&(b||_)){if(typeof fetch=="function"&&!xe.startsWith("file://"))return fetch(xe,{credentials:"same-origin"}).then(function(W){if(!W.ok)throw"failed to load wasm binary file at '"+xe+"'";return W.arrayBuffer()}).catch(function(){return Ge()});if(o)return new Promise(function(W,Y){o(xe,function(te){W(new Uint8Array(te))},Y)})}return Promise.resolve().then(function(){return Ge()})}().then(function(W){return WebAssembly.instantiate(W,R)}).then(function(W){return W}).then(U,function(W){O("failed to asynchronously prepare wasm: "+W),Ee(W)})}var R={a:kt};if(Ne++,s.monitorRunDependencies&&s.monitorRunDependencies(Ne),s.instantiateWasm)try{return s.instantiateWasm(R,P)}catch(U){return O("Module.instantiateWasm callback failed with error: "+U),!1}(S||typeof WebAssembly.instantiateStreaming!="function"||Oe()||xe.startsWith("file://")||v||typeof fetch!="function"?$(M):fetch(xe,{credentials:"same-origin"}).then(function(U){return WebAssembly.instantiateStreaming(U,R).then(M,function(W){return O("wasm streaming compile failed: "+W),O("falling back to ArrayBuffer 
instantiation"),$(M)})})).catch(f)})(),s.___wasm_call_ctors=function(){return(s.___wasm_call_ctors=s.asm.La).apply(null,arguments)},s._OrtInit=function(){return(s._OrtInit=s.asm.Ma).apply(null,arguments)},s._OrtCreateSessionOptions=function(){return(s._OrtCreateSessionOptions=s.asm.Na).apply(null,arguments)},s._OrtAppendExecutionProvider=function(){return(s._OrtAppendExecutionProvider=s.asm.Oa).apply(null,arguments)},s._OrtAddSessionConfigEntry=function(){return(s._OrtAddSessionConfigEntry=s.asm.Pa).apply(null,arguments)},s._OrtReleaseSessionOptions=function(){return(s._OrtReleaseSessionOptions=s.asm.Qa).apply(null,arguments)},s._OrtCreateSession=function(){return(s._OrtCreateSession=s.asm.Ra).apply(null,arguments)},s._OrtReleaseSession=function(){return(s._OrtReleaseSession=s.asm.Sa).apply(null,arguments)},s._OrtGetInputCount=function(){return(s._OrtGetInputCount=s.asm.Ta).apply(null,arguments)},s._OrtGetOutputCount=function(){return(s._OrtGetOutputCount=s.asm.Ua).apply(null,arguments)},s._OrtGetInputName=function(){return(s._OrtGetInputName=s.asm.Va).apply(null,arguments)},s._OrtGetOutputName=function(){return(s._OrtGetOutputName=s.asm.Wa).apply(null,arguments)},s._OrtFree=function(){return(s._OrtFree=s.asm.Xa).apply(null,arguments)},s._OrtCreateTensor=function(){return(s._OrtCreateTensor=s.asm.Ya).apply(null,arguments)},s._OrtGetTensorData=function(){return(s._OrtGetTensorData=s.asm.Za).apply(null,arguments)},s._OrtReleaseTensor=function(){return(s._OrtReleaseTensor=s.asm._a).apply(null,arguments)},s._OrtCreateRunOptions=function(){return(s._OrtCreateRunOptions=s.asm.$a).apply(null,arguments)},s._OrtAddRunConfigEntry=function(){return(s._OrtAddRunConfigEntry=s.asm.ab).apply(null,arguments)},s._OrtReleaseRunOptions=function(){return(s._OrtReleaseRunOptions=s.asm.bb).apply(null,arguments)},s._OrtRun=function(){return(s._OrtRun=s.asm.cb).apply(null,arguments)},s._OrtEndProfiling=function(){return(s._OrtEndProfiling=s.asm.db).apply(null,arguments)};var 
Je,we=s._malloc=function(){return(we=s._malloc=s.asm.eb).apply(null,arguments)},ot=s._free=function(){return(ot=s._free=s.asm.fb).apply(null,arguments)},ft=s._fflush=function(){return(ft=s._fflush=s.asm.gb).apply(null,arguments)},st=s.___funcs_on_exit=function(){return(st=s.___funcs_on_exit=s.asm.hb).apply(null,arguments)},ae=s._setThrew=function(){return(ae=s._setThrew=s.asm.jb).apply(null,arguments)},ie=s.stackSave=function(){return(ie=s.stackSave=s.asm.kb).apply(null,arguments)},se=s.stackRestore=function(){return(se=s.stackRestore=s.asm.lb).apply(null,arguments)},gt=s.stackAlloc=function(){return(gt=s.stackAlloc=s.asm.mb).apply(null,arguments)},at=s.___cxa_can_catch=function(){return(at=s.___cxa_can_catch=s.asm.nb).apply(null,arguments)},mt=s.___cxa_is_pointer_type=function(){return(mt=s.___cxa_is_pointer_type=s.asm.ob).apply(null,arguments)},bt=s.dynCall_j=function(){return(bt=s.dynCall_j=s.asm.pb).apply(null,arguments)},yt=s.dynCall_iiiiij=function(){return(yt=s.dynCall_iiiiij=s.asm.qb).apply(null,arguments)},_t=s.dynCall_jii=function(){return(_t=s.dynCall_jii=s.asm.rb).apply(null,arguments)},wt=s.dynCall_viiiiij=function(){return(wt=s.dynCall_viiiiij=s.asm.sb).apply(null,arguments)},vt=s.dynCall_vjji=function(){return(vt=s.dynCall_vjji=s.asm.tb).apply(null,arguments)},xt=s.dynCall_viiijjjii=function(){return(xt=s.dynCall_viiijjjii=s.asm.ub).apply(null,arguments)},Tt=s.dynCall_iij=function(){return(Tt=s.dynCall_iij=s.asm.vb).apply(null,arguments)},St=s.dynCall_ji=function(){return(St=s.dynCall_ji=s.asm.wb).apply(null,arguments)},At=s.dynCall_iiiiiij=function(){return(At=s.dynCall_iiiiiij=s.asm.xb).apply(null,arguments)},Ot=s.dynCall_iiij=function(){return(Ot=s.dynCall_iiij=s.asm.yb).apply(null,arguments)};function Et(){function P(){if(!Je&&(Je=!0,s.calledRun=!0,!D)){if(Xe(oe),h(s),s.onRuntimeInitialized&&s.onRuntimeInitialized(),s.postRun)for(typeof s.postRun=="function"&&(s.postRun=[s.postRun]);s.postRun.length;){var M=s.postRun.shift();be.unshift(M)}Xe(be)}}if(!(0<Ne)){if(s.preRun)for(typeof s.preRun=="function"&&(s.preRun=[s.preRun]);s.preRun.length;)Fe();Xe(ve),0<Ne||(s.setStatus?(s.setStatus("Running..."),setTimeout(function(){setTimeout(function(){s.setStatus("")},1),P()},1)):P())}}if(s.UTF8ToString=X,s.stringToUTF8=function(P,M,$){return J(P,L,M,$)},s.lengthBytesUTF8=ee,s.stackSave=ie,s.stackRestore=se,s.stackAlloc=gt,Ce=function P(){Je||Et(),Je||(Ce=P)},s.preInit)for(typeof s.preInit=="function"&&(s.preInit=[s.preInit]);0<s.preInit.length;)s.preInit.pop()();return Et(),p.ready});y.exports=c},4537:y=>{y.exports=function(n,a){for(var u=new Array(arguments.length-1),c=0,p=2,s=!0;p<arguments.length;)u[c++]=arguments[p++];return new Promise(function(h,f){u[c]=function(l){if(s)if(s=!1,l)f(l);else{for(var o=new Array(arguments.length-1),t=0;t<o.length;)o[t++]=arguments[t];h.apply(null,o)}};try{n.apply(a||null,u)}catch(l){s&&(s=!1,f(l))}})}},7419:(y,n)=>{var a=n;a.length=function(h){var f=h.length;if(!f)return 0;for(var l=0;--f%4>1&&h.charAt(f)==="=";)++l;return Math.ceil(3*h.length)/4-l};for(var u=new Array(64),c=new Array(123),p=0;p<64;)c[u[p]=p<26?p+65:p<52?p+71:p<62?p-4:p-59|43]=p++;a.encode=function(h,f,l){for(var o,t=null,e=[],r=0,i=0;f<l;){var d=h[f++];switch(i){case 0:e[r++]=u[d>>2],o=(3&d)<<4,i=1;break;case 1:e[r++]=u[o|d>>4],o=(15&d)<<2,i=2;break;case 2:e[r++]=u[o|d>>6],e[r++]=u[63&d],i=0}r>8191&&((t||(t=[])).push(String.fromCharCode.apply(String,e)),r=0)}return 
i&&(e[r++]=u[o],e[r++]=61,i===1&&(e[r++]=61)),t?(r&&t.push(String.fromCharCode.apply(String,e.slice(0,r))),t.join("")):String.fromCharCode.apply(String,e.slice(0,r))};var s="invalid encoding";a.decode=function(h,f,l){for(var o,t=l,e=0,r=0;r<h.length;){var i=h.charCodeAt(r++);if(i===61&&e>1)break;if((i=c[i])===void 0)throw Error(s);switch(e){case 0:o=i,e=1;break;case 1:f[l++]=o<<2|(48&i)>>4,o=i,e=2;break;case 2:f[l++]=(15&o)<<4|(60&i)>>2,o=i,e=3;break;case 3:f[l++]=(3&o)<<6|i,e=0}}if(e===1)throw Error(s);return l-t},a.test=function(h){return/^(?:[A-Za-z0-9+/]{4})*(?:[A-Za-z0-9+/]{2}==|[A-Za-z0-9+/]{3}=)?$/.test(h)}},9211:y=>{function n(){this._listeners={}}y.exports=n,n.prototype.on=function(a,u,c){return(this._listeners[a]||(this._listeners[a]=[])).push({fn:u,ctx:c||this}),this},n.prototype.off=function(a,u){if(a===void 0)this._listeners={};else if(u===void 0)this._listeners[a]=[];else for(var c=this._listeners[a],p=0;p<c.length;)c[p].fn===u?c.splice(p,1):++p;return this},n.prototype.emit=function(a){var u=this._listeners[a];if(u){for(var c=[],p=1;p<arguments.length;)c.push(arguments[p++]);for(p=0;p<u.length;)u[p].fn.apply(u[p++].ctx,c)}return this}},945:y=>{function n(s){return typeof Float32Array<"u"?function(){var h=new Float32Array([-0]),f=new Uint8Array(h.buffer),l=f[3]===128;function o(i,d,g){h[0]=i,d[g]=f[0],d[g+1]=f[1],d[g+2]=f[2],d[g+3]=f[3]}function t(i,d,g){h[0]=i,d[g]=f[3],d[g+1]=f[2],d[g+2]=f[1],d[g+3]=f[0]}function e(i,d){return f[0]=i[d],f[1]=i[d+1],f[2]=i[d+2],f[3]=i[d+3],h[0]}function r(i,d){return f[3]=i[d],f[2]=i[d+1],f[1]=i[d+2],f[0]=i[d+3],h[0]}s.writeFloatLE=l?o:t,s.writeFloatBE=l?t:o,s.readFloatLE=l?e:r,s.readFloatBE=l?r:e}():function(){function h(l,o,t,e){var r=o<0?1:0;if(r&&(o=-o),o===0)l(1/o>0?0:2147483648,t,e);else if(isNaN(o))l(2143289344,t,e);else if(o>34028234663852886e22)l((r<<31|2139095040)>>>0,t,e);else if(o<11754943508222875e-54)l((r<<31|Math.round(o/1401298464324817e-60))>>>0,t,e);else{var i=Math.floor(Math.log(o)/Math.LN2);l((r<<31|i+127<<23|8388607&Math.round(o*Math.pow(2,-i)*8388608))>>>0,t,e)}}function f(l,o,t){var e=l(o,t),r=2*(e>>31)+1,i=e>>>23&255,d=8388607&e;return i===255?d?NaN:r*(1/0):i===0?1401298464324817e-60*r*d:r*Math.pow(2,i-150)*(d+8388608)}s.writeFloatLE=h.bind(null,a),s.writeFloatBE=h.bind(null,u),s.readFloatLE=f.bind(null,c),s.readFloatBE=f.bind(null,p)}(),typeof Float64Array<"u"?function(){var h=new Float64Array([-0]),f=new Uint8Array(h.buffer),l=f[7]===128;function o(i,d,g){h[0]=i,d[g]=f[0],d[g+1]=f[1],d[g+2]=f[2],d[g+3]=f[3],d[g+4]=f[4],d[g+5]=f[5],d[g+6]=f[6],d[g+7]=f[7]}function t(i,d,g){h[0]=i,d[g]=f[7],d[g+1]=f[6],d[g+2]=f[5],d[g+3]=f[4],d[g+4]=f[3],d[g+5]=f[2],d[g+6]=f[1],d[g+7]=f[0]}function e(i,d){return f[0]=i[d],f[1]=i[d+1],f[2]=i[d+2],f[3]=i[d+3],f[4]=i[d+4],f[5]=i[d+5],f[6]=i[d+6],f[7]=i[d+7],h[0]}function r(i,d){return f[7]=i[d],f[6]=i[d+1],f[5]=i[d+2],f[4]=i[d+3],f[3]=i[d+4],f[2]=i[d+5],f[1]=i[d+6],f[0]=i[d+7],h[0]}s.writeDoubleLE=l?o:t,s.writeDoubleBE=l?t:o,s.readDoubleLE=l?e:r,s.readDoubleBE=l?r:e}():function(){function h(l,o,t,e,r,i){var d=e<0?1:0;if(d&&(e=-e),e===0)l(0,r,i+o),l(1/e>0?0:2147483648,r,i+t);else if(isNaN(e))l(0,r,i+o),l(2146959360,r,i+t);else if(e>17976931348623157e292)l(0,r,i+o),l((d<<31|2146435072)>>>0,r,i+t);else{var g;if(e<22250738585072014e-324)l((g=e/5e-324)>>>0,r,i+o),l((d<<31|g/4294967296)>>>0,r,i+t);else{var m=Math.floor(Math.log(e)/Math.LN2);m===1024&&(m=1023),l(4503599627370496*(g=e*Math.pow(2,-m))>>>0,r,i+o),l((d<<31|m+1023<<20|1048576*g&1048575)>>>0,r,i+t)}}}function f(l,o,t,e,r){var 
i=l(e,r+o),d=l(e,r+t),g=2*(d>>31)+1,m=d>>>20&2047,b=4294967296*(1048575&d)+i;return m===2047?b?NaN:g*(1/0):m===0?5e-324*g*b:g*Math.pow(2,m-1075)*(b+4503599627370496)}s.writeDoubleLE=h.bind(null,a,0,4),s.writeDoubleBE=h.bind(null,u,4,0),s.readDoubleLE=f.bind(null,c,0,4),s.readDoubleBE=f.bind(null,p,4,0)}(),s}function a(s,h,f){h[f]=255&s,h[f+1]=s>>>8&255,h[f+2]=s>>>16&255,h[f+3]=s>>>24}function u(s,h,f){h[f]=s>>>24,h[f+1]=s>>>16&255,h[f+2]=s>>>8&255,h[f+3]=255&s}function c(s,h){return(s[h]|s[h+1]<<8|s[h+2]<<16|s[h+3]<<24)>>>0}function p(s,h){return(s[h]<<24|s[h+1]<<16|s[h+2]<<8|s[h+3])>>>0}y.exports=n(n)},7199:module=>{function inquire(moduleName){try{var mod=eval("quire".replace(/^/,"re"))(moduleName);if(mod&&(mod.length||Object.keys(mod).length))return mod}catch(y){}return null}module.exports=inquire},6662:y=>{y.exports=function(n,a,u){var c=u||8192,p=c>>>1,s=null,h=c;return function(f){if(f<1||f>p)return n(f);h+f>c&&(s=n(c),h=0);var l=a.call(s,h,h+=f);return 7&h&&(h=1+(7|h)),l}}},4997:(y,n)=>{var a=n;a.length=function(u){for(var c=0,p=0,s=0;s<u.length;++s)(p=u.charCodeAt(s))<128?c+=1:p<2048?c+=2:(64512&p)==55296&&(64512&u.charCodeAt(s+1))==56320?(++s,c+=4):c+=3;return c},a.read=function(u,c,p){if(p-c<1)return"";for(var s,h=null,f=[],l=0;c<p;)(s=u[c++])<128?f[l++]=s:s>191&&s<224?f[l++]=(31&s)<<6|63&u[c++]:s>239&&s<365?(s=((7&s)<<18|(63&u[c++])<<12|(63&u[c++])<<6|63&u[c++])-65536,f[l++]=55296+(s>>10),f[l++]=56320+(1023&s)):f[l++]=(15&s)<<12|(63&u[c++])<<6|63&u[c++],l>8191&&((h||(h=[])).push(String.fromCharCode.apply(String,f)),l=0);return h?(l&&h.push(String.fromCharCode.apply(String,f.slice(0,l))),h.join("")):String.fromCharCode.apply(String,f.slice(0,l))},a.write=function(u,c,p){for(var s,h,f=p,l=0;l<u.length;++l)(s=u.charCodeAt(l))<128?c[p++]=s:s<2048?(c[p++]=s>>6|192,c[p++]=63&s|128):(64512&s)==55296&&(64512&(h=u.charCodeAt(l+1)))==56320?(s=65536+((1023&s)<<10)+(1023&h),++l,c[p++]=s>>18|240,c[p++]=s>>12&63|128,c[p++]=s>>6&63|128,c[p++]=63&s|128):(c[p++]=s>>12|224,c[p++]=s>>6&63|128,c[p++]=63&s|128);return p-f}},3442:(y,n)=>{n.__esModule=!0;var a=function(){function u(c){if(!c)throw new TypeError("Invalid argument; `value` has no value.");this.value=u.EMPTY,c&&u.isGuid(c)&&(this.value=c)}return u.isGuid=function(c){var p=c.toString();return c&&(c instanceof u||u.validator.test(p))},u.create=function(){return new u([u.gen(2),u.gen(1),u.gen(1),u.gen(1),u.gen(3)].join("-"))},u.createEmpty=function(){return new u("emptyguid")},u.parse=function(c){return new u(c)},u.raw=function(){return[u.gen(2),u.gen(1),u.gen(1),u.gen(1),u.gen(3)].join("-")},u.gen=function(c){for(var p="",s=0;s<c;s++)p+=(65536*(1+Math.random())|0).toString(16).substring(1);return p},u.prototype.equals=function(c){return u.isGuid(c)&&this.value===c.toString()},u.prototype.isEmpty=function(){return this.value===u.EMPTY},u.prototype.toString=function(){return this.value},u.prototype.toJSON=function(){return{value:this.value}},u.validator=new RegExp("^[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12}$","i"),u.EMPTY="00000000-0000-0000-0000-000000000000",u}();n.Guid=a},3720:y=>{y.exports=a;var n=null;try{n=new WebAssembly.Instance(new WebAssembly.Module(new 
Uint8Array([0,97,115,109,1,0,0,0,1,13,2,96,0,1,127,96,4,127,127,127,127,1,127,3,7,6,0,1,1,1,1,1,6,6,1,127,1,65,0,11,7,50,6,3,109,117,108,0,1,5,100,105,118,95,115,0,2,5,100,105,118,95,117,0,3,5,114,101,109,95,115,0,4,5,114,101,109,95,117,0,5,8,103,101,116,95,104,105,103,104,0,0,10,191,1,6,4,0,35,0,11,36,1,1,126,32,0,173,32,1,173,66,32,134,132,32,2,173,32,3,173,66,32,134,132,126,34,4,66,32,135,167,36,0,32,4,167,11,36,1,1,126,32,0,173,32,1,173,66,32,134,132,32,2,173,32,3,173,66,32,134,132,127,34,4,66,32,135,167,36,0,32,4,167,11,36,1,1,126,32,0,173,32,1,173,66,32,134,132,32,2,173,32,3,173,66,32,134,132,128,34,4,66,32,135,167,36,0,32,4,167,11,36,1,1,126,32,0,173,32,1,173,66,32,134,132,32,2,173,32,3,173,66,32,134,132,129,34,4,66,32,135,167,36,0,32,4,167,11,36,1,1,126,32,0,173,32,1,173,66,32,134,132,32,2,173,32,3,173,66,32,134,132,130,34,4,66,32,135,167,36,0,32,4,167,11])),{}).exports}catch{}function a(x,I,N){this.low=0|x,this.high=0|I,this.unsigned=!!N}function u(x){return(x&&x.__isLong__)===!0}a.prototype.__isLong__,Object.defineProperty(a.prototype,"__isLong__",{value:!0}),a.isLong=u;var c={},p={};function s(x,I){var N,B,L;return I?(L=0<=(x>>>=0)&&x<256)&&(B=p[x])?B:(N=f(x,(0|x)<0?-1:0,!0),L&&(p[x]=N),N):(L=-128<=(x|=0)&&x<128)&&(B=c[x])?B:(N=f(x,x<0?-1:0,!1),L&&(c[x]=N),N)}function h(x,I){if(isNaN(x))return I?m:g;if(I){if(x<0)return m;if(x>=r)return S}else{if(x<=-i)return A;if(x+1>=i)return w}return x<0?h(-x,I).neg():f(x%e|0,x/e|0,I)}function f(x,I,N){return new a(x,I,N)}a.fromInt=s,a.fromNumber=h,a.fromBits=f;var l=Math.pow;function o(x,I,N){if(x.length===0)throw Error("empty string");if(x==="NaN"||x==="Infinity"||x==="+Infinity"||x==="-Infinity")return g;if(typeof I=="number"?(N=I,I=!1):I=!!I,(N=N||10)<2||36<N)throw RangeError("radix");var B;if((B=x.indexOf("-"))>0)throw Error("interior hyphen");if(B===0)return o(x.substring(1),I,N).neg();for(var L=h(l(N,8)),F=g,H=0;H<x.length;H+=8){var D=Math.min(8,x.length-H),j=parseInt(x.substring(H,H+D),N);if(D<8){var Z=h(l(N,D));F=F.mul(Z).add(h(j))}else F=(F=F.mul(L)).add(h(j))}return F.unsigned=I,F}function t(x,I){return typeof x=="number"?h(x,I):typeof x=="string"?o(x,I):f(x.low,x.high,typeof I=="boolean"?I:x.unsigned)}a.fromString=o,a.fromValue=t;var e=4294967296,r=e*e,i=r/2,d=s(1<<24),g=s(0);a.ZERO=g;var m=s(0,!0);a.UZERO=m;var b=s(1);a.ONE=b;var _=s(1,!0);a.UONE=_;var v=s(-1);a.NEG_ONE=v;var w=f(-1,2147483647,!1);a.MAX_VALUE=w;var S=f(-1,-1,!0);a.MAX_UNSIGNED_VALUE=S;var A=f(0,-2147483648,!1);a.MIN_VALUE=A;var O=a.prototype;O.toInt=function(){return this.unsigned?this.low>>>0:this.low},O.toNumber=function(){return this.unsigned?(this.high>>>0)*e+(this.low>>>0):this.high*e+(this.low>>>0)},O.toString=function(x){if((x=x||10)<2||36<x)throw RangeError("radix");if(this.isZero())return"0";if(this.isNegative()){if(this.eq(A)){var I=h(x),N=this.div(I),B=N.mul(I).sub(this);return N.toString(x)+B.toInt().toString(x)}return"-"+this.neg().toString(x)}for(var L=h(l(x,6),this.unsigned),F=this,H="";;){var D=F.div(L),j=(F.sub(D.mul(L)).toInt()>>>0).toString(x);if((F=D).isZero())return j+H;for(;j.length<6;)j="0"+j;H=""+j+H}},O.getHighBits=function(){return this.high},O.getHighBitsUnsigned=function(){return this.high>>>0},O.getLowBits=function(){return this.low},O.getLowBitsUnsigned=function(){return this.low>>>0},O.getNumBitsAbs=function(){if(this.isNegative())return this.eq(A)?64:this.neg().getNumBitsAbs();for(var x=this.high!=0?this.high:this.low,I=31;I>0&&!(x&1<<I);I--);return this.high!=0?I+33:I+1},O.isZero=function(){return 
this.high===0&&this.low===0},O.eqz=O.isZero,O.isNegative=function(){return!this.unsigned&&this.high<0},O.isPositive=function(){return this.unsigned||this.high>=0},O.isOdd=function(){return(1&this.low)==1},O.isEven=function(){return(1&this.low)==0},O.equals=function(x){return u(x)||(x=t(x)),(this.unsigned===x.unsigned||this.high>>>31!=1||x.high>>>31!=1)&&this.high===x.high&&this.low===x.low},O.eq=O.equals,O.notEquals=function(x){return!this.eq(x)},O.neq=O.notEquals,O.ne=O.notEquals,O.lessThan=function(x){return this.comp(x)<0},O.lt=O.lessThan,O.lessThanOrEqual=function(x){return this.comp(x)<=0},O.lte=O.lessThanOrEqual,O.le=O.lessThanOrEqual,O.greaterThan=function(x){return this.comp(x)>0},O.gt=O.greaterThan,O.greaterThanOrEqual=function(x){return this.comp(x)>=0},O.gte=O.greaterThanOrEqual,O.ge=O.greaterThanOrEqual,O.compare=function(x){if(u(x)||(x=t(x)),this.eq(x))return 0;var I=this.isNegative(),N=x.isNegative();return I&&!N?-1:!I&&N?1:this.unsigned?x.high>>>0>this.high>>>0||x.high===this.high&&x.low>>>0>this.low>>>0?-1:1:this.sub(x).isNegative()?-1:1},O.comp=O.compare,O.negate=function(){return!this.unsigned&&this.eq(A)?A:this.not().add(b)},O.neg=O.negate,O.add=function(x){u(x)||(x=t(x));var I=this.high>>>16,N=65535&this.high,B=this.low>>>16,L=65535&this.low,F=x.high>>>16,H=65535&x.high,D=x.low>>>16,j=0,Z=0,X=0,J=0;return X+=(J+=L+(65535&x.low))>>>16,Z+=(X+=B+D)>>>16,j+=(Z+=N+H)>>>16,j+=I+F,f((X&=65535)<<16|(J&=65535),(j&=65535)<<16|(Z&=65535),this.unsigned)},O.subtract=function(x){return u(x)||(x=t(x)),this.add(x.neg())},O.sub=O.subtract,O.multiply=function(x){if(this.isZero())return g;if(u(x)||(x=t(x)),n)return f(n.mul(this.low,this.high,x.low,x.high),n.get_high(),this.unsigned);if(x.isZero())return g;if(this.eq(A))return x.isOdd()?A:g;if(x.eq(A))return this.isOdd()?A:g;if(this.isNegative())return x.isNegative()?this.neg().mul(x.neg()):this.neg().mul(x).neg();if(x.isNegative())return this.mul(x.neg()).neg();if(this.lt(d)&&x.lt(d))return h(this.toNumber()*x.toNumber(),this.unsigned);var I=this.high>>>16,N=65535&this.high,B=this.low>>>16,L=65535&this.low,F=x.high>>>16,H=65535&x.high,D=x.low>>>16,j=65535&x.low,Z=0,X=0,J=0,ee=0;return J+=(ee+=L*j)>>>16,X+=(J+=B*j)>>>16,J&=65535,X+=(J+=L*D)>>>16,Z+=(X+=N*j)>>>16,X&=65535,Z+=(X+=B*D)>>>16,X&=65535,Z+=(X+=L*H)>>>16,Z+=I*j+N*D+B*H+L*F,f((J&=65535)<<16|(ee&=65535),(Z&=65535)<<16|(X&=65535),this.unsigned)},O.mul=O.multiply,O.divide=function(x){if(u(x)||(x=t(x)),x.isZero())throw Error("division by zero");var I,N,B;if(n)return this.unsigned||this.high!==-2147483648||x.low!==-1||x.high!==-1?f((this.unsigned?n.div_u:n.div_s)(this.low,this.high,x.low,x.high),n.get_high(),this.unsigned):this;if(this.isZero())return this.unsigned?m:g;if(this.unsigned){if(x.unsigned||(x=x.toUnsigned()),x.gt(this))return m;if(x.gt(this.shru(1)))return _;B=m}else{if(this.eq(A))return x.eq(b)||x.eq(v)?A:x.eq(A)?b:(I=this.shr(1).div(x).shl(1)).eq(g)?x.isNegative()?b:v:(N=this.sub(x.mul(I)),B=I.add(N.div(x)));if(x.eq(A))return this.unsigned?m:g;if(this.isNegative())return x.isNegative()?this.neg().div(x.neg()):this.neg().div(x).neg();if(x.isNegative())return this.div(x.neg()).neg();B=g}for(N=this;N.gte(x);){I=Math.max(1,Math.floor(N.toNumber()/x.toNumber()));for(var L=Math.ceil(Math.log(I)/Math.LN2),F=L<=48?1:l(2,L-48),H=h(I),D=H.mul(x);D.isNegative()||D.gt(N);)D=(H=h(I-=F,this.unsigned)).mul(x);H.isZero()&&(H=b),B=B.add(H),N=N.sub(D)}return B},O.div=O.divide,O.modulo=function(x){return 
u(x)||(x=t(x)),n?f((this.unsigned?n.rem_u:n.rem_s)(this.low,this.high,x.low,x.high),n.get_high(),this.unsigned):this.sub(this.div(x).mul(x))},O.mod=O.modulo,O.rem=O.modulo,O.not=function(){return f(~this.low,~this.high,this.unsigned)},O.and=function(x){return u(x)||(x=t(x)),f(this.low&x.low,this.high&x.high,this.unsigned)},O.or=function(x){return u(x)||(x=t(x)),f(this.low|x.low,this.high|x.high,this.unsigned)},O.xor=function(x){return u(x)||(x=t(x)),f(this.low^x.low,this.high^x.high,this.unsigned)},O.shiftLeft=function(x){return u(x)&&(x=x.toInt()),(x&=63)==0?this:x<32?f(this.low<<x,this.high<<x|this.low>>>32-x,this.unsigned):f(0,this.low<<x-32,this.unsigned)},O.shl=O.shiftLeft,O.shiftRight=function(x){return u(x)&&(x=x.toInt()),(x&=63)==0?this:x<32?f(this.low>>>x|this.high<<32-x,this.high>>x,this.unsigned):f(this.high>>x-32,this.high>=0?0:-1,this.unsigned)},O.shr=O.shiftRight,O.shiftRightUnsigned=function(x){if(u(x)&&(x=x.toInt()),(x&=63)==0)return this;var I=this.high;return x<32?f(this.low>>>x|I<<32-x,I>>>x,this.unsigned):f(x===32?I:I>>>x-32,0,this.unsigned)},O.shru=O.shiftRightUnsigned,O.shr_u=O.shiftRightUnsigned,O.toSigned=function(){return this.unsigned?f(this.low,this.high,!1):this},O.toUnsigned=function(){return this.unsigned?this:f(this.low,this.high,!0)},O.toBytes=function(x){return x?this.toBytesLE():this.toBytesBE()},O.toBytesLE=function(){var x=this.high,I=this.low;return[255&I,I>>>8&255,I>>>16&255,I>>>24,255&x,x>>>8&255,x>>>16&255,x>>>24]},O.toBytesBE=function(){var x=this.high,I=this.low;return[x>>>24,x>>>16&255,x>>>8&255,255&x,I>>>24,I>>>16&255,I>>>8&255,255&I]},a.fromBytes=function(x,I,N){return N?a.fromBytesLE(x,I):a.fromBytesBE(x,I)},a.fromBytesLE=function(x,I){return new a(x[0]|x[1]<<8|x[2]<<16|x[3]<<24,x[4]|x[5]<<8|x[6]<<16|x[7]<<24,I)},a.fromBytesBE=function(x,I){return new a(x[4]<<24|x[5]<<16|x[6]<<8|x[7],x[0]<<24|x[1]<<16|x[2]<<8|x[3],I)}},1446:(y,n,a)=>{var u,c,p,s=a(2100),h=s.Reader,f=s.Writer,l=s.util,o=s.roots.default||(s.roots.default={});o.onnx=((p={}).Version=(u={},(c=Object.create(u))[u[0]="_START_VERSION"]=0,c[u[1]="IR_VERSION_2017_10_10"]=1,c[u[2]="IR_VERSION_2017_10_30"]=2,c[u[3]="IR_VERSION_2017_11_3"]=3,c[u[4]="IR_VERSION_2019_1_22"]=4,c[u[5]="IR_VERSION"]=5,c),p.AttributeProto=function(){function t(e){if(this.floats=[],this.ints=[],this.strings=[],this.tensors=[],this.graphs=[],e)for(var r=Object.keys(e),i=0;i<r.length;++i)e[r[i]]!=null&&(this[r[i]]=e[r[i]])}return t.prototype.name="",t.prototype.refAttrName="",t.prototype.docString="",t.prototype.type=0,t.prototype.f=0,t.prototype.i=l.Long?l.Long.fromBits(0,0,!1):0,t.prototype.s=l.newBuffer([]),t.prototype.t=null,t.prototype.g=null,t.prototype.floats=l.emptyArray,t.prototype.ints=l.emptyArray,t.prototype.strings=l.emptyArray,t.prototype.tensors=l.emptyArray,t.prototype.graphs=l.emptyArray,t.create=function(e){return new t(e)},t.encode=function(e,r){if(r||(r=f.create()),e.name!=null&&e.hasOwnProperty("name")&&r.uint32(10).string(e.name),e.f!=null&&e.hasOwnProperty("f")&&r.uint32(21).float(e.f),e.i!=null&&e.hasOwnProperty("i")&&r.uint32(24).int64(e.i),e.s!=null&&e.hasOwnProperty("s")&&r.uint32(34).bytes(e.s),e.t!=null&&e.hasOwnProperty("t")&&o.onnx.TensorProto.encode(e.t,r.uint32(42).fork()).ldelim(),e.g!=null&&e.hasOwnProperty("g")&&o.onnx.GraphProto.encode(e.g,r.uint32(50).fork()).ldelim(),e.floats!=null&&e.floats.length){r.uint32(58).fork();for(var 
i=0;i<e.floats.length;++i)r.float(e.floats[i]);r.ldelim()}if(e.ints!=null&&e.ints.length){for(r.uint32(66).fork(),i=0;i<e.ints.length;++i)r.int64(e.ints[i]);r.ldelim()}if(e.strings!=null&&e.strings.length)for(i=0;i<e.strings.length;++i)r.uint32(74).bytes(e.strings[i]);if(e.tensors!=null&&e.tensors.length)for(i=0;i<e.tensors.length;++i)o.onnx.TensorProto.encode(e.tensors[i],r.uint32(82).fork()).ldelim();if(e.graphs!=null&&e.graphs.length)for(i=0;i<e.graphs.length;++i)o.onnx.GraphProto.encode(e.graphs[i],r.uint32(90).fork()).ldelim();return e.docString!=null&&e.hasOwnProperty("docString")&&r.uint32(106).string(e.docString),e.type!=null&&e.hasOwnProperty("type")&&r.uint32(160).int32(e.type),e.refAttrName!=null&&e.hasOwnProperty("refAttrName")&&r.uint32(170).string(e.refAttrName),r},t.encodeDelimited=function(e,r){return this.encode(e,r).ldelim()},t.decode=function(e,r){e instanceof h||(e=h.create(e));for(var i=r===void 0?e.len:e.pos+r,d=new o.onnx.AttributeProto;e.pos<i;){var g=e.uint32();switch(g>>>3){case 1:d.name=e.string();break;case 21:d.refAttrName=e.string();break;case 13:d.docString=e.string();break;case 20:d.type=e.int32();break;case 2:d.f=e.float();break;case 3:d.i=e.int64();break;case 4:d.s=e.bytes();break;case 5:d.t=o.onnx.TensorProto.decode(e,e.uint32());break;case 6:d.g=o.onnx.GraphProto.decode(e,e.uint32());break;case 7:if(d.floats&&d.floats.length||(d.floats=[]),(7&g)==2)for(var m=e.uint32()+e.pos;e.pos<m;)d.floats.push(e.float());else d.floats.push(e.float());break;case 8:if(d.ints&&d.ints.length||(d.ints=[]),(7&g)==2)for(m=e.uint32()+e.pos;e.pos<m;)d.ints.push(e.int64());else d.ints.push(e.int64());break;case 9:d.strings&&d.strings.length||(d.strings=[]),d.strings.push(e.bytes());break;case 10:d.tensors&&d.tensors.length||(d.tensors=[]),d.tensors.push(o.onnx.TensorProto.decode(e,e.uint32()));break;case 11:d.graphs&&d.graphs.length||(d.graphs=[]),d.graphs.push(o.onnx.GraphProto.decode(e,e.uint32()));break;default:e.skipType(7&g)}}return d},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){if(typeof e!="object"||e===null)return"object expected";if(e.name!=null&&e.hasOwnProperty("name")&&!l.isString(e.name))return"name: string expected";if(e.refAttrName!=null&&e.hasOwnProperty("refAttrName")&&!l.isString(e.refAttrName))return"refAttrName: string expected";if(e.docString!=null&&e.hasOwnProperty("docString")&&!l.isString(e.docString))return"docString: string expected";if(e.type!=null&&e.hasOwnProperty("type"))switch(e.type){default:return"type: enum value expected";case 0:case 1:case 2:case 3:case 4:case 5:case 6:case 7:case 8:case 9:case 10:}if(e.f!=null&&e.hasOwnProperty("f")&&typeof e.f!="number")return"f: number expected";if(e.i!=null&&e.hasOwnProperty("i")&&!(l.isInteger(e.i)||e.i&&l.isInteger(e.i.low)&&l.isInteger(e.i.high)))return"i: integer|Long expected";if(e.s!=null&&e.hasOwnProperty("s")&&!(e.s&&typeof e.s.length=="number"||l.isString(e.s)))return"s: buffer expected";if(e.t!=null&&e.hasOwnProperty("t")&&(i=o.onnx.TensorProto.verify(e.t)))return"t."+i;if(e.g!=null&&e.hasOwnProperty("g")&&(i=o.onnx.GraphProto.verify(e.g)))return"g."+i;if(e.floats!=null&&e.hasOwnProperty("floats")){if(!Array.isArray(e.floats))return"floats: array expected";for(var r=0;r<e.floats.length;++r)if(typeof e.floats[r]!="number")return"floats: number[] expected"}if(e.ints!=null&&e.hasOwnProperty("ints")){if(!Array.isArray(e.ints))return"ints: array 
expected";for(r=0;r<e.ints.length;++r)if(!(l.isInteger(e.ints[r])||e.ints[r]&&l.isInteger(e.ints[r].low)&&l.isInteger(e.ints[r].high)))return"ints: integer|Long[] expected"}if(e.strings!=null&&e.hasOwnProperty("strings")){if(!Array.isArray(e.strings))return"strings: array expected";for(r=0;r<e.strings.length;++r)if(!(e.strings[r]&&typeof e.strings[r].length=="number"||l.isString(e.strings[r])))return"strings: buffer[] expected"}if(e.tensors!=null&&e.hasOwnProperty("tensors")){if(!Array.isArray(e.tensors))return"tensors: array expected";for(r=0;r<e.tensors.length;++r)if(i=o.onnx.TensorProto.verify(e.tensors[r]))return"tensors."+i}if(e.graphs!=null&&e.hasOwnProperty("graphs")){if(!Array.isArray(e.graphs))return"graphs: array expected";for(r=0;r<e.graphs.length;++r){var i;if(i=o.onnx.GraphProto.verify(e.graphs[r]))return"graphs."+i}}return null},t.fromObject=function(e){if(e instanceof o.onnx.AttributeProto)return e;var r=new o.onnx.AttributeProto;switch(e.name!=null&&(r.name=String(e.name)),e.refAttrName!=null&&(r.refAttrName=String(e.refAttrName)),e.docString!=null&&(r.docString=String(e.docString)),e.type){case"UNDEFINED":case 0:r.type=0;break;case"FLOAT":case 1:r.type=1;break;case"INT":case 2:r.type=2;break;case"STRING":case 3:r.type=3;break;case"TENSOR":case 4:r.type=4;break;case"GRAPH":case 5:r.type=5;break;case"FLOATS":case 6:r.type=6;break;case"INTS":case 7:r.type=7;break;case"STRINGS":case 8:r.type=8;break;case"TENSORS":case 9:r.type=9;break;case"GRAPHS":case 10:r.type=10}if(e.f!=null&&(r.f=Number(e.f)),e.i!=null&&(l.Long?(r.i=l.Long.fromValue(e.i)).unsigned=!1:typeof e.i=="string"?r.i=parseInt(e.i,10):typeof e.i=="number"?r.i=e.i:typeof e.i=="object"&&(r.i=new l.LongBits(e.i.low>>>0,e.i.high>>>0).toNumber())),e.s!=null&&(typeof e.s=="string"?l.base64.decode(e.s,r.s=l.newBuffer(l.base64.length(e.s)),0):e.s.length&&(r.s=e.s)),e.t!=null){if(typeof e.t!="object")throw TypeError(".onnx.AttributeProto.t: object expected");r.t=o.onnx.TensorProto.fromObject(e.t)}if(e.g!=null){if(typeof e.g!="object")throw TypeError(".onnx.AttributeProto.g: object expected");r.g=o.onnx.GraphProto.fromObject(e.g)}if(e.floats){if(!Array.isArray(e.floats))throw TypeError(".onnx.AttributeProto.floats: array expected");r.floats=[];for(var i=0;i<e.floats.length;++i)r.floats[i]=Number(e.floats[i])}if(e.ints){if(!Array.isArray(e.ints))throw TypeError(".onnx.AttributeProto.ints: array expected");for(r.ints=[],i=0;i<e.ints.length;++i)l.Long?(r.ints[i]=l.Long.fromValue(e.ints[i])).unsigned=!1:typeof e.ints[i]=="string"?r.ints[i]=parseInt(e.ints[i],10):typeof e.ints[i]=="number"?r.ints[i]=e.ints[i]:typeof e.ints[i]=="object"&&(r.ints[i]=new l.LongBits(e.ints[i].low>>>0,e.ints[i].high>>>0).toNumber())}if(e.strings){if(!Array.isArray(e.strings))throw TypeError(".onnx.AttributeProto.strings: array expected");for(r.strings=[],i=0;i<e.strings.length;++i)typeof e.strings[i]=="string"?l.base64.decode(e.strings[i],r.strings[i]=l.newBuffer(l.base64.length(e.strings[i])),0):e.strings[i].length&&(r.strings[i]=e.strings[i])}if(e.tensors){if(!Array.isArray(e.tensors))throw TypeError(".onnx.AttributeProto.tensors: array expected");for(r.tensors=[],i=0;i<e.tensors.length;++i){if(typeof e.tensors[i]!="object")throw TypeError(".onnx.AttributeProto.tensors: object expected");r.tensors[i]=o.onnx.TensorProto.fromObject(e.tensors[i])}}if(e.graphs){if(!Array.isArray(e.graphs))throw TypeError(".onnx.AttributeProto.graphs: array expected");for(r.graphs=[],i=0;i<e.graphs.length;++i){if(typeof e.graphs[i]!="object")throw 
TypeError(".onnx.AttributeProto.graphs: object expected");r.graphs[i]=o.onnx.GraphProto.fromObject(e.graphs[i])}}return r},t.toObject=function(e,r){r||(r={});var i={};if((r.arrays||r.defaults)&&(i.floats=[],i.ints=[],i.strings=[],i.tensors=[],i.graphs=[]),r.defaults){if(i.name="",i.f=0,l.Long){var d=new l.Long(0,0,!1);i.i=r.longs===String?d.toString():r.longs===Number?d.toNumber():d}else i.i=r.longs===String?"0":0;r.bytes===String?i.s="":(i.s=[],r.bytes!==Array&&(i.s=l.newBuffer(i.s))),i.t=null,i.g=null,i.docString="",i.type=r.enums===String?"UNDEFINED":0,i.refAttrName=""}if(e.name!=null&&e.hasOwnProperty("name")&&(i.name=e.name),e.f!=null&&e.hasOwnProperty("f")&&(i.f=r.json&&!isFinite(e.f)?String(e.f):e.f),e.i!=null&&e.hasOwnProperty("i")&&(typeof e.i=="number"?i.i=r.longs===String?String(e.i):e.i:i.i=r.longs===String?l.Long.prototype.toString.call(e.i):r.longs===Number?new l.LongBits(e.i.low>>>0,e.i.high>>>0).toNumber():e.i),e.s!=null&&e.hasOwnProperty("s")&&(i.s=r.bytes===String?l.base64.encode(e.s,0,e.s.length):r.bytes===Array?Array.prototype.slice.call(e.s):e.s),e.t!=null&&e.hasOwnProperty("t")&&(i.t=o.onnx.TensorProto.toObject(e.t,r)),e.g!=null&&e.hasOwnProperty("g")&&(i.g=o.onnx.GraphProto.toObject(e.g,r)),e.floats&&e.floats.length){i.floats=[];for(var g=0;g<e.floats.length;++g)i.floats[g]=r.json&&!isFinite(e.floats[g])?String(e.floats[g]):e.floats[g]}if(e.ints&&e.ints.length)for(i.ints=[],g=0;g<e.ints.length;++g)typeof e.ints[g]=="number"?i.ints[g]=r.longs===String?String(e.ints[g]):e.ints[g]:i.ints[g]=r.longs===String?l.Long.prototype.toString.call(e.ints[g]):r.longs===Number?new l.LongBits(e.ints[g].low>>>0,e.ints[g].high>>>0).toNumber():e.ints[g];if(e.strings&&e.strings.length)for(i.strings=[],g=0;g<e.strings.length;++g)i.strings[g]=r.bytes===String?l.base64.encode(e.strings[g],0,e.strings[g].length):r.bytes===Array?Array.prototype.slice.call(e.strings[g]):e.strings[g];if(e.tensors&&e.tensors.length)for(i.tensors=[],g=0;g<e.tensors.length;++g)i.tensors[g]=o.onnx.TensorProto.toObject(e.tensors[g],r);if(e.graphs&&e.graphs.length)for(i.graphs=[],g=0;g<e.graphs.length;++g)i.graphs[g]=o.onnx.GraphProto.toObject(e.graphs[g],r);return e.docString!=null&&e.hasOwnProperty("docString")&&(i.docString=e.docString),e.type!=null&&e.hasOwnProperty("type")&&(i.type=r.enums===String?o.onnx.AttributeProto.AttributeType[e.type]:e.type),e.refAttrName!=null&&e.hasOwnProperty("refAttrName")&&(i.refAttrName=e.refAttrName),i},t.prototype.toJSON=function(){return this.constructor.toObject(this,s.util.toJSONOptions)},t.AttributeType=function(){var e={},r=Object.create(e);return r[e[0]="UNDEFINED"]=0,r[e[1]="FLOAT"]=1,r[e[2]="INT"]=2,r[e[3]="STRING"]=3,r[e[4]="TENSOR"]=4,r[e[5]="GRAPH"]=5,r[e[6]="FLOATS"]=6,r[e[7]="INTS"]=7,r[e[8]="STRINGS"]=8,r[e[9]="TENSORS"]=9,r[e[10]="GRAPHS"]=10,r}(),t}(),p.ValueInfoProto=function(){function t(e){if(e)for(var r=Object.keys(e),i=0;i<r.length;++i)e[r[i]]!=null&&(this[r[i]]=e[r[i]])}return t.prototype.name="",t.prototype.type=null,t.prototype.docString="",t.create=function(e){return new t(e)},t.encode=function(e,r){return r||(r=f.create()),e.name!=null&&e.hasOwnProperty("name")&&r.uint32(10).string(e.name),e.type!=null&&e.hasOwnProperty("type")&&o.onnx.TypeProto.encode(e.type,r.uint32(18).fork()).ldelim(),e.docString!=null&&e.hasOwnProperty("docString")&&r.uint32(26).string(e.docString),r},t.encodeDelimited=function(e,r){return this.encode(e,r).ldelim()},t.decode=function(e,r){e instanceof h||(e=h.create(e));for(var i=r===void 0?e.len:e.pos+r,d=new 
o.onnx.ValueInfoProto;e.pos<i;){var g=e.uint32();switch(g>>>3){case 1:d.name=e.string();break;case 2:d.type=o.onnx.TypeProto.decode(e,e.uint32());break;case 3:d.docString=e.string();break;default:e.skipType(7&g)}}return d},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){if(typeof e!="object"||e===null)return"object expected";if(e.name!=null&&e.hasOwnProperty("name")&&!l.isString(e.name))return"name: string expected";if(e.type!=null&&e.hasOwnProperty("type")){var r=o.onnx.TypeProto.verify(e.type);if(r)return"type."+r}return e.docString!=null&&e.hasOwnProperty("docString")&&!l.isString(e.docString)?"docString: string expected":null},t.fromObject=function(e){if(e instanceof o.onnx.ValueInfoProto)return e;var r=new o.onnx.ValueInfoProto;if(e.name!=null&&(r.name=String(e.name)),e.type!=null){if(typeof e.type!="object")throw TypeError(".onnx.ValueInfoProto.type: object expected");r.type=o.onnx.TypeProto.fromObject(e.type)}return e.docString!=null&&(r.docString=String(e.docString)),r},t.toObject=function(e,r){r||(r={});var i={};return r.defaults&&(i.name="",i.type=null,i.docString=""),e.name!=null&&e.hasOwnProperty("name")&&(i.name=e.name),e.type!=null&&e.hasOwnProperty("type")&&(i.type=o.onnx.TypeProto.toObject(e.type,r)),e.docString!=null&&e.hasOwnProperty("docString")&&(i.docString=e.docString),i},t.prototype.toJSON=function(){return this.constructor.toObject(this,s.util.toJSONOptions)},t}(),p.NodeProto=function(){function t(e){if(this.input=[],this.output=[],this.attribute=[],e)for(var r=Object.keys(e),i=0;i<r.length;++i)e[r[i]]!=null&&(this[r[i]]=e[r[i]])}return t.prototype.input=l.emptyArray,t.prototype.output=l.emptyArray,t.prototype.name="",t.prototype.opType="",t.prototype.domain="",t.prototype.attribute=l.emptyArray,t.prototype.docString="",t.create=function(e){return new t(e)},t.encode=function(e,r){if(r||(r=f.create()),e.input!=null&&e.input.length)for(var i=0;i<e.input.length;++i)r.uint32(10).string(e.input[i]);if(e.output!=null&&e.output.length)for(i=0;i<e.output.length;++i)r.uint32(18).string(e.output[i]);if(e.name!=null&&e.hasOwnProperty("name")&&r.uint32(26).string(e.name),e.opType!=null&&e.hasOwnProperty("opType")&&r.uint32(34).string(e.opType),e.attribute!=null&&e.attribute.length)for(i=0;i<e.attribute.length;++i)o.onnx.AttributeProto.encode(e.attribute[i],r.uint32(42).fork()).ldelim();return e.docString!=null&&e.hasOwnProperty("docString")&&r.uint32(50).string(e.docString),e.domain!=null&&e.hasOwnProperty("domain")&&r.uint32(58).string(e.domain),r},t.encodeDelimited=function(e,r){return this.encode(e,r).ldelim()},t.decode=function(e,r){e instanceof h||(e=h.create(e));for(var i=r===void 0?e.len:e.pos+r,d=new o.onnx.NodeProto;e.pos<i;){var g=e.uint32();switch(g>>>3){case 1:d.input&&d.input.length||(d.input=[]),d.input.push(e.string());break;case 2:d.output&&d.output.length||(d.output=[]),d.output.push(e.string());break;case 3:d.name=e.string();break;case 4:d.opType=e.string();break;case 7:d.domain=e.string();break;case 5:d.attribute&&d.attribute.length||(d.attribute=[]),d.attribute.push(o.onnx.AttributeProto.decode(e,e.uint32()));break;case 6:d.docString=e.string();break;default:e.skipType(7&g)}}return d},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){if(typeof e!="object"||e===null)return"object expected";if(e.input!=null&&e.hasOwnProperty("input")){if(!Array.isArray(e.input))return"input: array expected";for(var 
r=0;r<e.input.length;++r)if(!l.isString(e.input[r]))return"input: string[] expected"}if(e.output!=null&&e.hasOwnProperty("output")){if(!Array.isArray(e.output))return"output: array expected";for(r=0;r<e.output.length;++r)if(!l.isString(e.output[r]))return"output: string[] expected"}if(e.name!=null&&e.hasOwnProperty("name")&&!l.isString(e.name))return"name: string expected";if(e.opType!=null&&e.hasOwnProperty("opType")&&!l.isString(e.opType))return"opType: string expected";if(e.domain!=null&&e.hasOwnProperty("domain")&&!l.isString(e.domain))return"domain: string expected";if(e.attribute!=null&&e.hasOwnProperty("attribute")){if(!Array.isArray(e.attribute))return"attribute: array expected";for(r=0;r<e.attribute.length;++r){var i=o.onnx.AttributeProto.verify(e.attribute[r]);if(i)return"attribute."+i}}return e.docString!=null&&e.hasOwnProperty("docString")&&!l.isString(e.docString)?"docString: string expected":null},t.fromObject=function(e){if(e instanceof o.onnx.NodeProto)return e;var r=new o.onnx.NodeProto;if(e.input){if(!Array.isArray(e.input))throw TypeError(".onnx.NodeProto.input: array expected");r.input=[];for(var i=0;i<e.input.length;++i)r.input[i]=String(e.input[i])}if(e.output){if(!Array.isArray(e.output))throw TypeError(".onnx.NodeProto.output: array expected");for(r.output=[],i=0;i<e.output.length;++i)r.output[i]=String(e.output[i])}if(e.name!=null&&(r.name=String(e.name)),e.opType!=null&&(r.opType=String(e.opType)),e.domain!=null&&(r.domain=String(e.domain)),e.attribute){if(!Array.isArray(e.attribute))throw TypeError(".onnx.NodeProto.attribute: array expected");for(r.attribute=[],i=0;i<e.attribute.length;++i){if(typeof e.attribute[i]!="object")throw TypeError(".onnx.NodeProto.attribute: object expected");r.attribute[i]=o.onnx.AttributeProto.fromObject(e.attribute[i])}}return e.docString!=null&&(r.docString=String(e.docString)),r},t.toObject=function(e,r){r||(r={});var i={};if((r.arrays||r.defaults)&&(i.input=[],i.output=[],i.attribute=[]),r.defaults&&(i.name="",i.opType="",i.docString="",i.domain=""),e.input&&e.input.length){i.input=[];for(var d=0;d<e.input.length;++d)i.input[d]=e.input[d]}if(e.output&&e.output.length)for(i.output=[],d=0;d<e.output.length;++d)i.output[d]=e.output[d];if(e.name!=null&&e.hasOwnProperty("name")&&(i.name=e.name),e.opType!=null&&e.hasOwnProperty("opType")&&(i.opType=e.opType),e.attribute&&e.attribute.length)for(i.attribute=[],d=0;d<e.attribute.length;++d)i.attribute[d]=o.onnx.AttributeProto.toObject(e.attribute[d],r);return e.docString!=null&&e.hasOwnProperty("docString")&&(i.docString=e.docString),e.domain!=null&&e.hasOwnProperty("domain")&&(i.domain=e.domain),i},t.prototype.toJSON=function(){return this.constructor.toObject(this,s.util.toJSONOptions)},t}(),p.ModelProto=function(){function t(e){if(this.opsetImport=[],this.metadataProps=[],e)for(var r=Object.keys(e),i=0;i<r.length;++i)e[r[i]]!=null&&(this[r[i]]=e[r[i]])}return t.prototype.irVersion=l.Long?l.Long.fromBits(0,0,!1):0,t.prototype.opsetImport=l.emptyArray,t.prototype.producerName="",t.prototype.producerVersion="",t.prototype.domain="",t.prototype.modelVersion=l.Long?l.Long.fromBits(0,0,!1):0,t.prototype.docString="",t.prototype.graph=null,t.prototype.metadataProps=l.emptyArray,t.create=function(e){return new 
t(e)},t.encode=function(e,r){if(r||(r=f.create()),e.irVersion!=null&&e.hasOwnProperty("irVersion")&&r.uint32(8).int64(e.irVersion),e.producerName!=null&&e.hasOwnProperty("producerName")&&r.uint32(18).string(e.producerName),e.producerVersion!=null&&e.hasOwnProperty("producerVersion")&&r.uint32(26).string(e.producerVersion),e.domain!=null&&e.hasOwnProperty("domain")&&r.uint32(34).string(e.domain),e.modelVersion!=null&&e.hasOwnProperty("modelVersion")&&r.uint32(40).int64(e.modelVersion),e.docString!=null&&e.hasOwnProperty("docString")&&r.uint32(50).string(e.docString),e.graph!=null&&e.hasOwnProperty("graph")&&o.onnx.GraphProto.encode(e.graph,r.uint32(58).fork()).ldelim(),e.opsetImport!=null&&e.opsetImport.length)for(var i=0;i<e.opsetImport.length;++i)o.onnx.OperatorSetIdProto.encode(e.opsetImport[i],r.uint32(66).fork()).ldelim();if(e.metadataProps!=null&&e.metadataProps.length)for(i=0;i<e.metadataProps.length;++i)o.onnx.StringStringEntryProto.encode(e.metadataProps[i],r.uint32(114).fork()).ldelim();return r},t.encodeDelimited=function(e,r){return this.encode(e,r).ldelim()},t.decode=function(e,r){e instanceof h||(e=h.create(e));for(var i=r===void 0?e.len:e.pos+r,d=new o.onnx.ModelProto;e.pos<i;){var g=e.uint32();switch(g>>>3){case 1:d.irVersion=e.int64();break;case 8:d.opsetImport&&d.opsetImport.length||(d.opsetImport=[]),d.opsetImport.push(o.onnx.OperatorSetIdProto.decode(e,e.uint32()));break;case 2:d.producerName=e.string();break;case 3:d.producerVersion=e.string();break;case 4:d.domain=e.string();break;case 5:d.modelVersion=e.int64();break;case 6:d.docString=e.string();break;case 7:d.graph=o.onnx.GraphProto.decode(e,e.uint32());break;case 14:d.metadataProps&&d.metadataProps.length||(d.metadataProps=[]),d.metadataProps.push(o.onnx.StringStringEntryProto.decode(e,e.uint32()));break;default:e.skipType(7&g)}}return d},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){if(typeof e!="object"||e===null)return"object expected";if(e.irVersion!=null&&e.hasOwnProperty("irVersion")&&!(l.isInteger(e.irVersion)||e.irVersion&&l.isInteger(e.irVersion.low)&&l.isInteger(e.irVersion.high)))return"irVersion: integer|Long expected";if(e.opsetImport!=null&&e.hasOwnProperty("opsetImport")){if(!Array.isArray(e.opsetImport))return"opsetImport: array expected";for(var r=0;r<e.opsetImport.length;++r)if(i=o.onnx.OperatorSetIdProto.verify(e.opsetImport[r]))return"opsetImport."+i}if(e.producerName!=null&&e.hasOwnProperty("producerName")&&!l.isString(e.producerName))return"producerName: string expected";if(e.producerVersion!=null&&e.hasOwnProperty("producerVersion")&&!l.isString(e.producerVersion))return"producerVersion: string expected";if(e.domain!=null&&e.hasOwnProperty("domain")&&!l.isString(e.domain))return"domain: string expected";if(e.modelVersion!=null&&e.hasOwnProperty("modelVersion")&&!(l.isInteger(e.modelVersion)||e.modelVersion&&l.isInteger(e.modelVersion.low)&&l.isInteger(e.modelVersion.high)))return"modelVersion: integer|Long expected";if(e.docString!=null&&e.hasOwnProperty("docString")&&!l.isString(e.docString))return"docString: string expected";if(e.graph!=null&&e.hasOwnProperty("graph")&&(i=o.onnx.GraphProto.verify(e.graph)))return"graph."+i;if(e.metadataProps!=null&&e.hasOwnProperty("metadataProps")){if(!Array.isArray(e.metadataProps))return"metadataProps: array expected";for(r=0;r<e.metadataProps.length;++r){var i;if(i=o.onnx.StringStringEntryProto.verify(e.metadataProps[r]))return"metadataProps."+i}}return 
null},t.fromObject=function(e){if(e instanceof o.onnx.ModelProto)return e;var r=new o.onnx.ModelProto;if(e.irVersion!=null&&(l.Long?(r.irVersion=l.Long.fromValue(e.irVersion)).unsigned=!1:typeof e.irVersion=="string"?r.irVersion=parseInt(e.irVersion,10):typeof e.irVersion=="number"?r.irVersion=e.irVersion:typeof e.irVersion=="object"&&(r.irVersion=new l.LongBits(e.irVersion.low>>>0,e.irVersion.high>>>0).toNumber())),e.opsetImport){if(!Array.isArray(e.opsetImport))throw TypeError(".onnx.ModelProto.opsetImport: array expected");r.opsetImport=[];for(var i=0;i<e.opsetImport.length;++i){if(typeof e.opsetImport[i]!="object")throw TypeError(".onnx.ModelProto.opsetImport: object expected");r.opsetImport[i]=o.onnx.OperatorSetIdProto.fromObject(e.opsetImport[i])}}if(e.producerName!=null&&(r.producerName=String(e.producerName)),e.producerVersion!=null&&(r.producerVersion=String(e.producerVersion)),e.domain!=null&&(r.domain=String(e.domain)),e.modelVersion!=null&&(l.Long?(r.modelVersion=l.Long.fromValue(e.modelVersion)).unsigned=!1:typeof e.modelVersion=="string"?r.modelVersion=parseInt(e.modelVersion,10):typeof e.modelVersion=="number"?r.modelVersion=e.modelVersion:typeof e.modelVersion=="object"&&(r.modelVersion=new l.LongBits(e.modelVersion.low>>>0,e.modelVersion.high>>>0).toNumber())),e.docString!=null&&(r.docString=String(e.docString)),e.graph!=null){if(typeof e.graph!="object")throw TypeError(".onnx.ModelProto.graph: object expected");r.graph=o.onnx.GraphProto.fromObject(e.graph)}if(e.metadataProps){if(!Array.isArray(e.metadataProps))throw TypeError(".onnx.ModelProto.metadataProps: array expected");for(r.metadataProps=[],i=0;i<e.metadataProps.length;++i){if(typeof e.metadataProps[i]!="object")throw TypeError(".onnx.ModelProto.metadataProps: object expected");r.metadataProps[i]=o.onnx.StringStringEntryProto.fromObject(e.metadataProps[i])}}return r},t.toObject=function(e,r){r||(r={});var i={};if((r.arrays||r.defaults)&&(i.opsetImport=[],i.metadataProps=[]),r.defaults){if(l.Long){var d=new l.Long(0,0,!1);i.irVersion=r.longs===String?d.toString():r.longs===Number?d.toNumber():d}else i.irVersion=r.longs===String?"0":0;i.producerName="",i.producerVersion="",i.domain="",l.Long?(d=new l.Long(0,0,!1),i.modelVersion=r.longs===String?d.toString():r.longs===Number?d.toNumber():d):i.modelVersion=r.longs===String?"0":0,i.docString="",i.graph=null}if(e.irVersion!=null&&e.hasOwnProperty("irVersion")&&(typeof e.irVersion=="number"?i.irVersion=r.longs===String?String(e.irVersion):e.irVersion:i.irVersion=r.longs===String?l.Long.prototype.toString.call(e.irVersion):r.longs===Number?new l.LongBits(e.irVersion.low>>>0,e.irVersion.high>>>0).toNumber():e.irVersion),e.producerName!=null&&e.hasOwnProperty("producerName")&&(i.producerName=e.producerName),e.producerVersion!=null&&e.hasOwnProperty("producerVersion")&&(i.producerVersion=e.producerVersion),e.domain!=null&&e.hasOwnProperty("domain")&&(i.domain=e.domain),e.modelVersion!=null&&e.hasOwnProperty("modelVersion")&&(typeof e.modelVersion=="number"?i.modelVersion=r.longs===String?String(e.modelVersion):e.modelVersion:i.modelVersion=r.longs===String?l.Long.prototype.toString.call(e.modelVersion):r.longs===Number?new l.LongBits(e.modelVersion.low>>>0,e.modelVersion.high>>>0).toNumber():e.modelVersion),e.docString!=null&&e.hasOwnProperty("docString")&&(i.docString=e.docString),e.graph!=null&&e.hasOwnProperty("graph")&&(i.graph=o.onnx.GraphProto.toObject(e.graph,r)),e.opsetImport&&e.opsetImport.length){i.opsetImport=[];for(var 
g=0;g<e.opsetImport.length;++g)i.opsetImport[g]=o.onnx.OperatorSetIdProto.toObject(e.opsetImport[g],r)}if(e.metadataProps&&e.metadataProps.length)for(i.metadataProps=[],g=0;g<e.metadataProps.length;++g)i.metadataProps[g]=o.onnx.StringStringEntryProto.toObject(e.metadataProps[g],r);return i},t.prototype.toJSON=function(){return this.constructor.toObject(this,s.util.toJSONOptions)},t}(),p.StringStringEntryProto=function(){function t(e){if(e)for(var r=Object.keys(e),i=0;i<r.length;++i)e[r[i]]!=null&&(this[r[i]]=e[r[i]])}return t.prototype.key="",t.prototype.value="",t.create=function(e){return new t(e)},t.encode=function(e,r){return r||(r=f.create()),e.key!=null&&e.hasOwnProperty("key")&&r.uint32(10).string(e.key),e.value!=null&&e.hasOwnProperty("value")&&r.uint32(18).string(e.value),r},t.encodeDelimited=function(e,r){return this.encode(e,r).ldelim()},t.decode=function(e,r){e instanceof h||(e=h.create(e));for(var i=r===void 0?e.len:e.pos+r,d=new o.onnx.StringStringEntryProto;e.pos<i;){var g=e.uint32();switch(g>>>3){case 1:d.key=e.string();break;case 2:d.value=e.string();break;default:e.skipType(7&g)}}return d},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){return typeof e!="object"||e===null?"object expected":e.key!=null&&e.hasOwnProperty("key")&&!l.isString(e.key)?"key: string expected":e.value!=null&&e.hasOwnProperty("value")&&!l.isString(e.value)?"value: string expected":null},t.fromObject=function(e){if(e instanceof o.onnx.StringStringEntryProto)return e;var r=new o.onnx.StringStringEntryProto;return e.key!=null&&(r.key=String(e.key)),e.value!=null&&(r.value=String(e.value)),r},t.toObject=function(e,r){r||(r={});var i={};return r.defaults&&(i.key="",i.value=""),e.key!=null&&e.hasOwnProperty("key")&&(i.key=e.key),e.value!=null&&e.hasOwnProperty("value")&&(i.value=e.value),i},t.prototype.toJSON=function(){return this.constructor.toObject(this,s.util.toJSONOptions)},t}(),p.TensorAnnotation=function(){function t(e){if(this.quantParameterTensorNames=[],e)for(var r=Object.keys(e),i=0;i<r.length;++i)e[r[i]]!=null&&(this[r[i]]=e[r[i]])}return t.prototype.tensorName="",t.prototype.quantParameterTensorNames=l.emptyArray,t.create=function(e){return new t(e)},t.encode=function(e,r){if(r||(r=f.create()),e.tensorName!=null&&e.hasOwnProperty("tensorName")&&r.uint32(10).string(e.tensorName),e.quantParameterTensorNames!=null&&e.quantParameterTensorNames.length)for(var i=0;i<e.quantParameterTensorNames.length;++i)o.onnx.StringStringEntryProto.encode(e.quantParameterTensorNames[i],r.uint32(18).fork()).ldelim();return r},t.encodeDelimited=function(e,r){return this.encode(e,r).ldelim()},t.decode=function(e,r){e instanceof h||(e=h.create(e));for(var i=r===void 0?e.len:e.pos+r,d=new o.onnx.TensorAnnotation;e.pos<i;){var g=e.uint32();switch(g>>>3){case 1:d.tensorName=e.string();break;case 2:d.quantParameterTensorNames&&d.quantParameterTensorNames.length||(d.quantParameterTensorNames=[]),d.quantParameterTensorNames.push(o.onnx.StringStringEntryProto.decode(e,e.uint32()));break;default:e.skipType(7&g)}}return d},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){if(typeof e!="object"||e===null)return"object expected";if(e.tensorName!=null&&e.hasOwnProperty("tensorName")&&!l.isString(e.tensorName))return"tensorName: string 
expected";if(e.quantParameterTensorNames!=null&&e.hasOwnProperty("quantParameterTensorNames")){if(!Array.isArray(e.quantParameterTensorNames))return"quantParameterTensorNames: array expected";for(var r=0;r<e.quantParameterTensorNames.length;++r){var i=o.onnx.StringStringEntryProto.verify(e.quantParameterTensorNames[r]);if(i)return"quantParameterTensorNames."+i}}return null},t.fromObject=function(e){if(e instanceof o.onnx.TensorAnnotation)return e;var r=new o.onnx.TensorAnnotation;if(e.tensorName!=null&&(r.tensorName=String(e.tensorName)),e.quantParameterTensorNames){if(!Array.isArray(e.quantParameterTensorNames))throw TypeError(".onnx.TensorAnnotation.quantParameterTensorNames: array expected");r.quantParameterTensorNames=[];for(var i=0;i<e.quantParameterTensorNames.length;++i){if(typeof e.quantParameterTensorNames[i]!="object")throw TypeError(".onnx.TensorAnnotation.quantParameterTensorNames: object expected");r.quantParameterTensorNames[i]=o.onnx.StringStringEntryProto.fromObject(e.quantParameterTensorNames[i])}}return r},t.toObject=function(e,r){r||(r={});var i={};if((r.arrays||r.defaults)&&(i.quantParameterTensorNames=[]),r.defaults&&(i.tensorName=""),e.tensorName!=null&&e.hasOwnProperty("tensorName")&&(i.tensorName=e.tensorName),e.quantParameterTensorNames&&e.quantParameterTensorNames.length){i.quantParameterTensorNames=[];for(var d=0;d<e.quantParameterTensorNames.length;++d)i.quantParameterTensorNames[d]=o.onnx.StringStringEntryProto.toObject(e.quantParameterTensorNames[d],r)}return i},t.prototype.toJSON=function(){return this.constructor.toObject(this,s.util.toJSONOptions)},t}(),p.GraphProto=function(){function t(e){if(this.node=[],this.initializer=[],this.input=[],this.output=[],this.valueInfo=[],this.quantizationAnnotation=[],e)for(var r=Object.keys(e),i=0;i<r.length;++i)e[r[i]]!=null&&(this[r[i]]=e[r[i]])}return t.prototype.node=l.emptyArray,t.prototype.name="",t.prototype.initializer=l.emptyArray,t.prototype.docString="",t.prototype.input=l.emptyArray,t.prototype.output=l.emptyArray,t.prototype.valueInfo=l.emptyArray,t.prototype.quantizationAnnotation=l.emptyArray,t.create=function(e){return new t(e)},t.encode=function(e,r){if(r||(r=f.create()),e.node!=null&&e.node.length)for(var i=0;i<e.node.length;++i)o.onnx.NodeProto.encode(e.node[i],r.uint32(10).fork()).ldelim();if(e.name!=null&&e.hasOwnProperty("name")&&r.uint32(18).string(e.name),e.initializer!=null&&e.initializer.length)for(i=0;i<e.initializer.length;++i)o.onnx.TensorProto.encode(e.initializer[i],r.uint32(42).fork()).ldelim();if(e.docString!=null&&e.hasOwnProperty("docString")&&r.uint32(82).string(e.docString),e.input!=null&&e.input.length)for(i=0;i<e.input.length;++i)o.onnx.ValueInfoProto.encode(e.input[i],r.uint32(90).fork()).ldelim();if(e.output!=null&&e.output.length)for(i=0;i<e.output.length;++i)o.onnx.ValueInfoProto.encode(e.output[i],r.uint32(98).fork()).ldelim();if(e.valueInfo!=null&&e.valueInfo.length)for(i=0;i<e.valueInfo.length;++i)o.onnx.ValueInfoProto.encode(e.valueInfo[i],r.uint32(106).fork()).ldelim();if(e.quantizationAnnotation!=null&&e.quantizationAnnotation.length)for(i=0;i<e.quantizationAnnotation.length;++i)o.onnx.TensorAnnotation.encode(e.quantizationAnnotation[i],r.uint32(114).fork()).ldelim();return r},t.encodeDelimited=function(e,r){return this.encode(e,r).ldelim()},t.decode=function(e,r){e instanceof h||(e=h.create(e));for(var i=r===void 0?e.len:e.pos+r,d=new o.onnx.GraphProto;e.pos<i;){var g=e.uint32();switch(g>>>3){case 
1:d.node&&d.node.length||(d.node=[]),d.node.push(o.onnx.NodeProto.decode(e,e.uint32()));break;case 2:d.name=e.string();break;case 5:d.initializer&&d.initializer.length||(d.initializer=[]),d.initializer.push(o.onnx.TensorProto.decode(e,e.uint32()));break;case 10:d.docString=e.string();break;case 11:d.input&&d.input.length||(d.input=[]),d.input.push(o.onnx.ValueInfoProto.decode(e,e.uint32()));break;case 12:d.output&&d.output.length||(d.output=[]),d.output.push(o.onnx.ValueInfoProto.decode(e,e.uint32()));break;case 13:d.valueInfo&&d.valueInfo.length||(d.valueInfo=[]),d.valueInfo.push(o.onnx.ValueInfoProto.decode(e,e.uint32()));break;case 14:d.quantizationAnnotation&&d.quantizationAnnotation.length||(d.quantizationAnnotation=[]),d.quantizationAnnotation.push(o.onnx.TensorAnnotation.decode(e,e.uint32()));break;default:e.skipType(7&g)}}return d},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){if(typeof e!="object"||e===null)return"object expected";if(e.node!=null&&e.hasOwnProperty("node")){if(!Array.isArray(e.node))return"node: array expected";for(var r=0;r<e.node.length;++r)if(i=o.onnx.NodeProto.verify(e.node[r]))return"node."+i}if(e.name!=null&&e.hasOwnProperty("name")&&!l.isString(e.name))return"name: string expected";if(e.initializer!=null&&e.hasOwnProperty("initializer")){if(!Array.isArray(e.initializer))return"initializer: array expected";for(r=0;r<e.initializer.length;++r)if(i=o.onnx.TensorProto.verify(e.initializer[r]))return"initializer."+i}if(e.docString!=null&&e.hasOwnProperty("docString")&&!l.isString(e.docString))return"docString: string expected";if(e.input!=null&&e.hasOwnProperty("input")){if(!Array.isArray(e.input))return"input: array expected";for(r=0;r<e.input.length;++r)if(i=o.onnx.ValueInfoProto.verify(e.input[r]))return"input."+i}if(e.output!=null&&e.hasOwnProperty("output")){if(!Array.isArray(e.output))return"output: array expected";for(r=0;r<e.output.length;++r)if(i=o.onnx.ValueInfoProto.verify(e.output[r]))return"output."+i}if(e.valueInfo!=null&&e.hasOwnProperty("valueInfo")){if(!Array.isArray(e.valueInfo))return"valueInfo: array expected";for(r=0;r<e.valueInfo.length;++r)if(i=o.onnx.ValueInfoProto.verify(e.valueInfo[r]))return"valueInfo."+i}if(e.quantizationAnnotation!=null&&e.hasOwnProperty("quantizationAnnotation")){if(!Array.isArray(e.quantizationAnnotation))return"quantizationAnnotation: array expected";for(r=0;r<e.quantizationAnnotation.length;++r){var i;if(i=o.onnx.TensorAnnotation.verify(e.quantizationAnnotation[r]))return"quantizationAnnotation."+i}}return null},t.fromObject=function(e){if(e instanceof o.onnx.GraphProto)return e;var r=new o.onnx.GraphProto;if(e.node){if(!Array.isArray(e.node))throw TypeError(".onnx.GraphProto.node: array expected");r.node=[];for(var i=0;i<e.node.length;++i){if(typeof e.node[i]!="object")throw TypeError(".onnx.GraphProto.node: object expected");r.node[i]=o.onnx.NodeProto.fromObject(e.node[i])}}if(e.name!=null&&(r.name=String(e.name)),e.initializer){if(!Array.isArray(e.initializer))throw TypeError(".onnx.GraphProto.initializer: array expected");for(r.initializer=[],i=0;i<e.initializer.length;++i){if(typeof e.initializer[i]!="object")throw TypeError(".onnx.GraphProto.initializer: object expected");r.initializer[i]=o.onnx.TensorProto.fromObject(e.initializer[i])}}if(e.docString!=null&&(r.docString=String(e.docString)),e.input){if(!Array.isArray(e.input))throw TypeError(".onnx.GraphProto.input: array expected");for(r.input=[],i=0;i<e.input.length;++i){if(typeof 
e.input[i]!="object")throw TypeError(".onnx.GraphProto.input: object expected");r.input[i]=o.onnx.ValueInfoProto.fromObject(e.input[i])}}if(e.output){if(!Array.isArray(e.output))throw TypeError(".onnx.GraphProto.output: array expected");for(r.output=[],i=0;i<e.output.length;++i){if(typeof e.output[i]!="object")throw TypeError(".onnx.GraphProto.output: object expected");r.output[i]=o.onnx.ValueInfoProto.fromObject(e.output[i])}}if(e.valueInfo){if(!Array.isArray(e.valueInfo))throw TypeError(".onnx.GraphProto.valueInfo: array expected");for(r.valueInfo=[],i=0;i<e.valueInfo.length;++i){if(typeof e.valueInfo[i]!="object")throw TypeError(".onnx.GraphProto.valueInfo: object expected");r.valueInfo[i]=o.onnx.ValueInfoProto.fromObject(e.valueInfo[i])}}if(e.quantizationAnnotation){if(!Array.isArray(e.quantizationAnnotation))throw TypeError(".onnx.GraphProto.quantizationAnnotation: array expected");for(r.quantizationAnnotation=[],i=0;i<e.quantizationAnnotation.length;++i){if(typeof e.quantizationAnnotation[i]!="object")throw TypeError(".onnx.GraphProto.quantizationAnnotation: object expected");r.quantizationAnnotation[i]=o.onnx.TensorAnnotation.fromObject(e.quantizationAnnotation[i])}}return r},t.toObject=function(e,r){r||(r={});var i={};if((r.arrays||r.defaults)&&(i.node=[],i.initializer=[],i.input=[],i.output=[],i.valueInfo=[],i.quantizationAnnotation=[]),r.defaults&&(i.name="",i.docString=""),e.node&&e.node.length){i.node=[];for(var d=0;d<e.node.length;++d)i.node[d]=o.onnx.NodeProto.toObject(e.node[d],r)}if(e.name!=null&&e.hasOwnProperty("name")&&(i.name=e.name),e.initializer&&e.initializer.length)for(i.initializer=[],d=0;d<e.initializer.length;++d)i.initializer[d]=o.onnx.TensorProto.toObject(e.initializer[d],r);if(e.docString!=null&&e.hasOwnProperty("docString")&&(i.docString=e.docString),e.input&&e.input.length)for(i.input=[],d=0;d<e.input.length;++d)i.input[d]=o.onnx.ValueInfoProto.toObject(e.input[d],r);if(e.output&&e.output.length)for(i.output=[],d=0;d<e.output.length;++d)i.output[d]=o.onnx.ValueInfoProto.toObject(e.output[d],r);if(e.valueInfo&&e.valueInfo.length)for(i.valueInfo=[],d=0;d<e.valueInfo.length;++d)i.valueInfo[d]=o.onnx.ValueInfoProto.toObject(e.valueInfo[d],r);if(e.quantizationAnnotation&&e.quantizationAnnotation.length)for(i.quantizationAnnotation=[],d=0;d<e.quantizationAnnotation.length;++d)i.quantizationAnnotation[d]=o.onnx.TensorAnnotation.toObject(e.quantizationAnnotation[d],r);return i},t.prototype.toJSON=function(){return this.constructor.toObject(this,s.util.toJSONOptions)},t}(),p.TensorProto=function(){function t(e){if(this.dims=[],this.floatData=[],this.int32Data=[],this.stringData=[],this.int64Data=[],this.externalData=[],this.doubleData=[],this.uint64Data=[],e)for(var r=Object.keys(e),i=0;i<r.length;++i)e[r[i]]!=null&&(this[r[i]]=e[r[i]])}return t.prototype.dims=l.emptyArray,t.prototype.dataType=0,t.prototype.segment=null,t.prototype.floatData=l.emptyArray,t.prototype.int32Data=l.emptyArray,t.prototype.stringData=l.emptyArray,t.prototype.int64Data=l.emptyArray,t.prototype.name="",t.prototype.docString="",t.prototype.rawData=l.newBuffer([]),t.prototype.externalData=l.emptyArray,t.prototype.dataLocation=0,t.prototype.doubleData=l.emptyArray,t.prototype.uint64Data=l.emptyArray,t.create=function(e){return new t(e)},t.encode=function(e,r){if(r||(r=f.create()),e.dims!=null&&e.dims.length){r.uint32(10).fork();for(var 
i=0;i<e.dims.length;++i)r.int64(e.dims[i]);r.ldelim()}if(e.dataType!=null&&e.hasOwnProperty("dataType")&&r.uint32(16).int32(e.dataType),e.segment!=null&&e.hasOwnProperty("segment")&&o.onnx.TensorProto.Segment.encode(e.segment,r.uint32(26).fork()).ldelim(),e.floatData!=null&&e.floatData.length){for(r.uint32(34).fork(),i=0;i<e.floatData.length;++i)r.float(e.floatData[i]);r.ldelim()}if(e.int32Data!=null&&e.int32Data.length){for(r.uint32(42).fork(),i=0;i<e.int32Data.length;++i)r.int32(e.int32Data[i]);r.ldelim()}if(e.stringData!=null&&e.stringData.length)for(i=0;i<e.stringData.length;++i)r.uint32(50).bytes(e.stringData[i]);if(e.int64Data!=null&&e.int64Data.length){for(r.uint32(58).fork(),i=0;i<e.int64Data.length;++i)r.int64(e.int64Data[i]);r.ldelim()}if(e.name!=null&&e.hasOwnProperty("name")&&r.uint32(66).string(e.name),e.rawData!=null&&e.hasOwnProperty("rawData")&&r.uint32(74).bytes(e.rawData),e.doubleData!=null&&e.doubleData.length){for(r.uint32(82).fork(),i=0;i<e.doubleData.length;++i)r.double(e.doubleData[i]);r.ldelim()}if(e.uint64Data!=null&&e.uint64Data.length){for(r.uint32(90).fork(),i=0;i<e.uint64Data.length;++i)r.uint64(e.uint64Data[i]);r.ldelim()}if(e.docString!=null&&e.hasOwnProperty("docString")&&r.uint32(98).string(e.docString),e.externalData!=null&&e.externalData.length)for(i=0;i<e.externalData.length;++i)o.onnx.StringStringEntryProto.encode(e.externalData[i],r.uint32(106).fork()).ldelim();return e.dataLocation!=null&&e.hasOwnProperty("dataLocation")&&r.uint32(112).int32(e.dataLocation),r},t.encodeDelimited=function(e,r){return this.encode(e,r).ldelim()},t.decode=function(e,r){e instanceof h||(e=h.create(e));for(var i=r===void 0?e.len:e.pos+r,d=new o.onnx.TensorProto;e.pos<i;){var g=e.uint32();switch(g>>>3){case 1:if(d.dims&&d.dims.length||(d.dims=[]),(7&g)==2)for(var m=e.uint32()+e.pos;e.pos<m;)d.dims.push(e.int64());else d.dims.push(e.int64());break;case 2:d.dataType=e.int32();break;case 3:d.segment=o.onnx.TensorProto.Segment.decode(e,e.uint32());break;case 4:if(d.floatData&&d.floatData.length||(d.floatData=[]),(7&g)==2)for(m=e.uint32()+e.pos;e.pos<m;)d.floatData.push(e.float());else d.floatData.push(e.float());break;case 5:if(d.int32Data&&d.int32Data.length||(d.int32Data=[]),(7&g)==2)for(m=e.uint32()+e.pos;e.pos<m;)d.int32Data.push(e.int32());else d.int32Data.push(e.int32());break;case 6:d.stringData&&d.stringData.length||(d.stringData=[]),d.stringData.push(e.bytes());break;case 7:if(d.int64Data&&d.int64Data.length||(d.int64Data=[]),(7&g)==2)for(m=e.uint32()+e.pos;e.pos<m;)d.int64Data.push(e.int64());else d.int64Data.push(e.int64());break;case 8:d.name=e.string();break;case 12:d.docString=e.string();break;case 9:d.rawData=e.bytes();break;case 13:d.externalData&&d.externalData.length||(d.externalData=[]),d.externalData.push(o.onnx.StringStringEntryProto.decode(e,e.uint32()));break;case 14:d.dataLocation=e.int32();break;case 10:if(d.doubleData&&d.doubleData.length||(d.doubleData=[]),(7&g)==2)for(m=e.uint32()+e.pos;e.pos<m;)d.doubleData.push(e.double());else d.doubleData.push(e.double());break;case 11:if(d.uint64Data&&d.uint64Data.length||(d.uint64Data=[]),(7&g)==2)for(m=e.uint32()+e.pos;e.pos<m;)d.uint64Data.push(e.uint64());else d.uint64Data.push(e.uint64());break;default:e.skipType(7&g)}}return d},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){if(typeof e!="object"||e===null)return"object expected";if(e.dims!=null&&e.hasOwnProperty("dims")){if(!Array.isArray(e.dims))return"dims: array expected";for(var 
r=0;r<e.dims.length;++r)if(!(l.isInteger(e.dims[r])||e.dims[r]&&l.isInteger(e.dims[r].low)&&l.isInteger(e.dims[r].high)))return"dims: integer|Long[] expected"}if(e.dataType!=null&&e.hasOwnProperty("dataType")&&!l.isInteger(e.dataType))return"dataType: integer expected";if(e.segment!=null&&e.hasOwnProperty("segment")&&(i=o.onnx.TensorProto.Segment.verify(e.segment)))return"segment."+i;if(e.floatData!=null&&e.hasOwnProperty("floatData")){if(!Array.isArray(e.floatData))return"floatData: array expected";for(r=0;r<e.floatData.length;++r)if(typeof e.floatData[r]!="number")return"floatData: number[] expected"}if(e.int32Data!=null&&e.hasOwnProperty("int32Data")){if(!Array.isArray(e.int32Data))return"int32Data: array expected";for(r=0;r<e.int32Data.length;++r)if(!l.isInteger(e.int32Data[r]))return"int32Data: integer[] expected"}if(e.stringData!=null&&e.hasOwnProperty("stringData")){if(!Array.isArray(e.stringData))return"stringData: array expected";for(r=0;r<e.stringData.length;++r)if(!(e.stringData[r]&&typeof e.stringData[r].length=="number"||l.isString(e.stringData[r])))return"stringData: buffer[] expected"}if(e.int64Data!=null&&e.hasOwnProperty("int64Data")){if(!Array.isArray(e.int64Data))return"int64Data: array expected";for(r=0;r<e.int64Data.length;++r)if(!(l.isInteger(e.int64Data[r])||e.int64Data[r]&&l.isInteger(e.int64Data[r].low)&&l.isInteger(e.int64Data[r].high)))return"int64Data: integer|Long[] expected"}if(e.name!=null&&e.hasOwnProperty("name")&&!l.isString(e.name))return"name: string expected";if(e.docString!=null&&e.hasOwnProperty("docString")&&!l.isString(e.docString))return"docString: string expected";if(e.rawData!=null&&e.hasOwnProperty("rawData")&&!(e.rawData&&typeof e.rawData.length=="number"||l.isString(e.rawData)))return"rawData: buffer expected";if(e.externalData!=null&&e.hasOwnProperty("externalData")){if(!Array.isArray(e.externalData))return"externalData: array expected";for(r=0;r<e.externalData.length;++r){var i;if(i=o.onnx.StringStringEntryProto.verify(e.externalData[r]))return"externalData."+i}}if(e.dataLocation!=null&&e.hasOwnProperty("dataLocation"))switch(e.dataLocation){default:return"dataLocation: enum value expected";case 0:case 1:}if(e.doubleData!=null&&e.hasOwnProperty("doubleData")){if(!Array.isArray(e.doubleData))return"doubleData: array expected";for(r=0;r<e.doubleData.length;++r)if(typeof e.doubleData[r]!="number")return"doubleData: number[] expected"}if(e.uint64Data!=null&&e.hasOwnProperty("uint64Data")){if(!Array.isArray(e.uint64Data))return"uint64Data: array expected";for(r=0;r<e.uint64Data.length;++r)if(!(l.isInteger(e.uint64Data[r])||e.uint64Data[r]&&l.isInteger(e.uint64Data[r].low)&&l.isInteger(e.uint64Data[r].high)))return"uint64Data: integer|Long[] expected"}return null},t.fromObject=function(e){if(e instanceof o.onnx.TensorProto)return e;var r=new o.onnx.TensorProto;if(e.dims){if(!Array.isArray(e.dims))throw TypeError(".onnx.TensorProto.dims: array expected");r.dims=[];for(var i=0;i<e.dims.length;++i)l.Long?(r.dims[i]=l.Long.fromValue(e.dims[i])).unsigned=!1:typeof e.dims[i]=="string"?r.dims[i]=parseInt(e.dims[i],10):typeof e.dims[i]=="number"?r.dims[i]=e.dims[i]:typeof e.dims[i]=="object"&&(r.dims[i]=new l.LongBits(e.dims[i].low>>>0,e.dims[i].high>>>0).toNumber())}if(e.dataType!=null&&(r.dataType=0|e.dataType),e.segment!=null){if(typeof e.segment!="object")throw TypeError(".onnx.TensorProto.segment: object expected");r.segment=o.onnx.TensorProto.Segment.fromObject(e.segment)}if(e.floatData){if(!Array.isArray(e.floatData))throw 
TypeError(".onnx.TensorProto.floatData: array expected");for(r.floatData=[],i=0;i<e.floatData.length;++i)r.floatData[i]=Number(e.floatData[i])}if(e.int32Data){if(!Array.isArray(e.int32Data))throw TypeError(".onnx.TensorProto.int32Data: array expected");for(r.int32Data=[],i=0;i<e.int32Data.length;++i)r.int32Data[i]=0|e.int32Data[i]}if(e.stringData){if(!Array.isArray(e.stringData))throw TypeError(".onnx.TensorProto.stringData: array expected");for(r.stringData=[],i=0;i<e.stringData.length;++i)typeof e.stringData[i]=="string"?l.base64.decode(e.stringData[i],r.stringData[i]=l.newBuffer(l.base64.length(e.stringData[i])),0):e.stringData[i].length&&(r.stringData[i]=e.stringData[i])}if(e.int64Data){if(!Array.isArray(e.int64Data))throw TypeError(".onnx.TensorProto.int64Data: array expected");for(r.int64Data=[],i=0;i<e.int64Data.length;++i)l.Long?(r.int64Data[i]=l.Long.fromValue(e.int64Data[i])).unsigned=!1:typeof e.int64Data[i]=="string"?r.int64Data[i]=parseInt(e.int64Data[i],10):typeof e.int64Data[i]=="number"?r.int64Data[i]=e.int64Data[i]:typeof e.int64Data[i]=="object"&&(r.int64Data[i]=new l.LongBits(e.int64Data[i].low>>>0,e.int64Data[i].high>>>0).toNumber())}if(e.name!=null&&(r.name=String(e.name)),e.docString!=null&&(r.docString=String(e.docString)),e.rawData!=null&&(typeof e.rawData=="string"?l.base64.decode(e.rawData,r.rawData=l.newBuffer(l.base64.length(e.rawData)),0):e.rawData.length&&(r.rawData=e.rawData)),e.externalData){if(!Array.isArray(e.externalData))throw TypeError(".onnx.TensorProto.externalData: array expected");for(r.externalData=[],i=0;i<e.externalData.length;++i){if(typeof e.externalData[i]!="object")throw TypeError(".onnx.TensorProto.externalData: object expected");r.externalData[i]=o.onnx.StringStringEntryProto.fromObject(e.externalData[i])}}switch(e.dataLocation){case"DEFAULT":case 0:r.dataLocation=0;break;case"EXTERNAL":case 1:r.dataLocation=1}if(e.doubleData){if(!Array.isArray(e.doubleData))throw TypeError(".onnx.TensorProto.doubleData: array expected");for(r.doubleData=[],i=0;i<e.doubleData.length;++i)r.doubleData[i]=Number(e.doubleData[i])}if(e.uint64Data){if(!Array.isArray(e.uint64Data))throw TypeError(".onnx.TensorProto.uint64Data: array expected");for(r.uint64Data=[],i=0;i<e.uint64Data.length;++i)l.Long?(r.uint64Data[i]=l.Long.fromValue(e.uint64Data[i])).unsigned=!0:typeof e.uint64Data[i]=="string"?r.uint64Data[i]=parseInt(e.uint64Data[i],10):typeof e.uint64Data[i]=="number"?r.uint64Data[i]=e.uint64Data[i]:typeof e.uint64Data[i]=="object"&&(r.uint64Data[i]=new l.LongBits(e.uint64Data[i].low>>>0,e.uint64Data[i].high>>>0).toNumber(!0))}return r},t.toObject=function(e,r){r||(r={});var i={};if((r.arrays||r.defaults)&&(i.dims=[],i.floatData=[],i.int32Data=[],i.stringData=[],i.int64Data=[],i.doubleData=[],i.uint64Data=[],i.externalData=[]),r.defaults&&(i.dataType=0,i.segment=null,i.name="",r.bytes===String?i.rawData="":(i.rawData=[],r.bytes!==Array&&(i.rawData=l.newBuffer(i.rawData))),i.docString="",i.dataLocation=r.enums===String?"DEFAULT":0),e.dims&&e.dims.length){i.dims=[];for(var d=0;d<e.dims.length;++d)typeof e.dims[d]=="number"?i.dims[d]=r.longs===String?String(e.dims[d]):e.dims[d]:i.dims[d]=r.longs===String?l.Long.prototype.toString.call(e.dims[d]):r.longs===Number?new 
l.LongBits(e.dims[d].low>>>0,e.dims[d].high>>>0).toNumber():e.dims[d]}if(e.dataType!=null&&e.hasOwnProperty("dataType")&&(i.dataType=e.dataType),e.segment!=null&&e.hasOwnProperty("segment")&&(i.segment=o.onnx.TensorProto.Segment.toObject(e.segment,r)),e.floatData&&e.floatData.length)for(i.floatData=[],d=0;d<e.floatData.length;++d)i.floatData[d]=r.json&&!isFinite(e.floatData[d])?String(e.floatData[d]):e.floatData[d];if(e.int32Data&&e.int32Data.length)for(i.int32Data=[],d=0;d<e.int32Data.length;++d)i.int32Data[d]=e.int32Data[d];if(e.stringData&&e.stringData.length)for(i.stringData=[],d=0;d<e.stringData.length;++d)i.stringData[d]=r.bytes===String?l.base64.encode(e.stringData[d],0,e.stringData[d].length):r.bytes===Array?Array.prototype.slice.call(e.stringData[d]):e.stringData[d];if(e.int64Data&&e.int64Data.length)for(i.int64Data=[],d=0;d<e.int64Data.length;++d)typeof e.int64Data[d]=="number"?i.int64Data[d]=r.longs===String?String(e.int64Data[d]):e.int64Data[d]:i.int64Data[d]=r.longs===String?l.Long.prototype.toString.call(e.int64Data[d]):r.longs===Number?new l.LongBits(e.int64Data[d].low>>>0,e.int64Data[d].high>>>0).toNumber():e.int64Data[d];if(e.name!=null&&e.hasOwnProperty("name")&&(i.name=e.name),e.rawData!=null&&e.hasOwnProperty("rawData")&&(i.rawData=r.bytes===String?l.base64.encode(e.rawData,0,e.rawData.length):r.bytes===Array?Array.prototype.slice.call(e.rawData):e.rawData),e.doubleData&&e.doubleData.length)for(i.doubleData=[],d=0;d<e.doubleData.length;++d)i.doubleData[d]=r.json&&!isFinite(e.doubleData[d])?String(e.doubleData[d]):e.doubleData[d];if(e.uint64Data&&e.uint64Data.length)for(i.uint64Data=[],d=0;d<e.uint64Data.length;++d)typeof e.uint64Data[d]=="number"?i.uint64Data[d]=r.longs===String?String(e.uint64Data[d]):e.uint64Data[d]:i.uint64Data[d]=r.longs===String?l.Long.prototype.toString.call(e.uint64Data[d]):r.longs===Number?new l.LongBits(e.uint64Data[d].low>>>0,e.uint64Data[d].high>>>0).toNumber(!0):e.uint64Data[d];if(e.docString!=null&&e.hasOwnProperty("docString")&&(i.docString=e.docString),e.externalData&&e.externalData.length)for(i.externalData=[],d=0;d<e.externalData.length;++d)i.externalData[d]=o.onnx.StringStringEntryProto.toObject(e.externalData[d],r);return e.dataLocation!=null&&e.hasOwnProperty("dataLocation")&&(i.dataLocation=r.enums===String?o.onnx.TensorProto.DataLocation[e.dataLocation]:e.dataLocation),i},t.prototype.toJSON=function(){return this.constructor.toObject(this,s.util.toJSONOptions)},t.DataType=function(){var e={},r=Object.create(e);return r[e[0]="UNDEFINED"]=0,r[e[1]="FLOAT"]=1,r[e[2]="UINT8"]=2,r[e[3]="INT8"]=3,r[e[4]="UINT16"]=4,r[e[5]="INT16"]=5,r[e[6]="INT32"]=6,r[e[7]="INT64"]=7,r[e[8]="STRING"]=8,r[e[9]="BOOL"]=9,r[e[10]="FLOAT16"]=10,r[e[11]="DOUBLE"]=11,r[e[12]="UINT32"]=12,r[e[13]="UINT64"]=13,r[e[14]="COMPLEX64"]=14,r[e[15]="COMPLEX128"]=15,r[e[16]="BFLOAT16"]=16,r}(),t.Segment=function(){function e(r){if(r)for(var i=Object.keys(r),d=0;d<i.length;++d)r[i[d]]!=null&&(this[i[d]]=r[i[d]])}return e.prototype.begin=l.Long?l.Long.fromBits(0,0,!1):0,e.prototype.end=l.Long?l.Long.fromBits(0,0,!1):0,e.create=function(r){return new e(r)},e.encode=function(r,i){return i||(i=f.create()),r.begin!=null&&r.hasOwnProperty("begin")&&i.uint32(8).int64(r.begin),r.end!=null&&r.hasOwnProperty("end")&&i.uint32(16).int64(r.end),i},e.encodeDelimited=function(r,i){return this.encode(r,i).ldelim()},e.decode=function(r,i){r instanceof h||(r=h.create(r));for(var d=i===void 0?r.len:r.pos+i,g=new o.onnx.TensorProto.Segment;r.pos<d;){var m=r.uint32();switch(m>>>3){case 
1:g.begin=r.int64();break;case 2:g.end=r.int64();break;default:r.skipType(7&m)}}return g},e.decodeDelimited=function(r){return r instanceof h||(r=new h(r)),this.decode(r,r.uint32())},e.verify=function(r){return typeof r!="object"||r===null?"object expected":r.begin!=null&&r.hasOwnProperty("begin")&&!(l.isInteger(r.begin)||r.begin&&l.isInteger(r.begin.low)&&l.isInteger(r.begin.high))?"begin: integer|Long expected":r.end!=null&&r.hasOwnProperty("end")&&!(l.isInteger(r.end)||r.end&&l.isInteger(r.end.low)&&l.isInteger(r.end.high))?"end: integer|Long expected":null},e.fromObject=function(r){if(r instanceof o.onnx.TensorProto.Segment)return r;var i=new o.onnx.TensorProto.Segment;return r.begin!=null&&(l.Long?(i.begin=l.Long.fromValue(r.begin)).unsigned=!1:typeof r.begin=="string"?i.begin=parseInt(r.begin,10):typeof r.begin=="number"?i.begin=r.begin:typeof r.begin=="object"&&(i.begin=new l.LongBits(r.begin.low>>>0,r.begin.high>>>0).toNumber())),r.end!=null&&(l.Long?(i.end=l.Long.fromValue(r.end)).unsigned=!1:typeof r.end=="string"?i.end=parseInt(r.end,10):typeof r.end=="number"?i.end=r.end:typeof r.end=="object"&&(i.end=new l.LongBits(r.end.low>>>0,r.end.high>>>0).toNumber())),i},e.toObject=function(r,i){i||(i={});var d={};if(i.defaults){if(l.Long){var g=new l.Long(0,0,!1);d.begin=i.longs===String?g.toString():i.longs===Number?g.toNumber():g}else d.begin=i.longs===String?"0":0;l.Long?(g=new l.Long(0,0,!1),d.end=i.longs===String?g.toString():i.longs===Number?g.toNumber():g):d.end=i.longs===String?"0":0}return r.begin!=null&&r.hasOwnProperty("begin")&&(typeof r.begin=="number"?d.begin=i.longs===String?String(r.begin):r.begin:d.begin=i.longs===String?l.Long.prototype.toString.call(r.begin):i.longs===Number?new l.LongBits(r.begin.low>>>0,r.begin.high>>>0).toNumber():r.begin),r.end!=null&&r.hasOwnProperty("end")&&(typeof r.end=="number"?d.end=i.longs===String?String(r.end):r.end:d.end=i.longs===String?l.Long.prototype.toString.call(r.end):i.longs===Number?new l.LongBits(r.end.low>>>0,r.end.high>>>0).toNumber():r.end),d},e.prototype.toJSON=function(){return this.constructor.toObject(this,s.util.toJSONOptions)},e}(),t.DataLocation=function(){var e={},r=Object.create(e);return r[e[0]="DEFAULT"]=0,r[e[1]="EXTERNAL"]=1,r}(),t}(),p.TensorShapeProto=function(){function t(e){if(this.dim=[],e)for(var r=Object.keys(e),i=0;i<r.length;++i)e[r[i]]!=null&&(this[r[i]]=e[r[i]])}return t.prototype.dim=l.emptyArray,t.create=function(e){return new t(e)},t.encode=function(e,r){if(r||(r=f.create()),e.dim!=null&&e.dim.length)for(var i=0;i<e.dim.length;++i)o.onnx.TensorShapeProto.Dimension.encode(e.dim[i],r.uint32(10).fork()).ldelim();return r},t.encodeDelimited=function(e,r){return this.encode(e,r).ldelim()},t.decode=function(e,r){e instanceof h||(e=h.create(e));for(var i=r===void 0?e.len:e.pos+r,d=new o.onnx.TensorShapeProto;e.pos<i;){var g=e.uint32();g>>>3==1?(d.dim&&d.dim.length||(d.dim=[]),d.dim.push(o.onnx.TensorShapeProto.Dimension.decode(e,e.uint32()))):e.skipType(7&g)}return d},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){if(typeof e!="object"||e===null)return"object expected";if(e.dim!=null&&e.hasOwnProperty("dim")){if(!Array.isArray(e.dim))return"dim: array expected";for(var r=0;r<e.dim.length;++r){var i=o.onnx.TensorShapeProto.Dimension.verify(e.dim[r]);if(i)return"dim."+i}}return null},t.fromObject=function(e){if(e instanceof o.onnx.TensorShapeProto)return e;var r=new o.onnx.TensorShapeProto;if(e.dim){if(!Array.isArray(e.dim))throw 
TypeError(".onnx.TensorShapeProto.dim: array expected");r.dim=[];for(var i=0;i<e.dim.length;++i){if(typeof e.dim[i]!="object")throw TypeError(".onnx.TensorShapeProto.dim: object expected");r.dim[i]=o.onnx.TensorShapeProto.Dimension.fromObject(e.dim[i])}}return r},t.toObject=function(e,r){r||(r={});var i={};if((r.arrays||r.defaults)&&(i.dim=[]),e.dim&&e.dim.length){i.dim=[];for(var d=0;d<e.dim.length;++d)i.dim[d]=o.onnx.TensorShapeProto.Dimension.toObject(e.dim[d],r)}return i},t.prototype.toJSON=function(){return this.constructor.toObject(this,s.util.toJSONOptions)},t.Dimension=function(){function e(i){if(i)for(var d=Object.keys(i),g=0;g<d.length;++g)i[d[g]]!=null&&(this[d[g]]=i[d[g]])}var r;return e.prototype.dimValue=l.Long?l.Long.fromBits(0,0,!1):0,e.prototype.dimParam="",e.prototype.denotation="",Object.defineProperty(e.prototype,"value",{get:l.oneOfGetter(r=["dimValue","dimParam"]),set:l.oneOfSetter(r)}),e.create=function(i){return new e(i)},e.encode=function(i,d){return d||(d=f.create()),i.dimValue!=null&&i.hasOwnProperty("dimValue")&&d.uint32(8).int64(i.dimValue),i.dimParam!=null&&i.hasOwnProperty("dimParam")&&d.uint32(18).string(i.dimParam),i.denotation!=null&&i.hasOwnProperty("denotation")&&d.uint32(26).string(i.denotation),d},e.encodeDelimited=function(i,d){return this.encode(i,d).ldelim()},e.decode=function(i,d){i instanceof h||(i=h.create(i));for(var g=d===void 0?i.len:i.pos+d,m=new o.onnx.TensorShapeProto.Dimension;i.pos<g;){var b=i.uint32();switch(b>>>3){case 1:m.dimValue=i.int64();break;case 2:m.dimParam=i.string();break;case 3:m.denotation=i.string();break;default:i.skipType(7&b)}}return m},e.decodeDelimited=function(i){return i instanceof h||(i=new h(i)),this.decode(i,i.uint32())},e.verify=function(i){if(typeof i!="object"||i===null)return"object expected";var d={};if(i.dimValue!=null&&i.hasOwnProperty("dimValue")&&(d.value=1,!(l.isInteger(i.dimValue)||i.dimValue&&l.isInteger(i.dimValue.low)&&l.isInteger(i.dimValue.high))))return"dimValue: integer|Long expected";if(i.dimParam!=null&&i.hasOwnProperty("dimParam")){if(d.value===1)return"value: multiple values";if(d.value=1,!l.isString(i.dimParam))return"dimParam: string expected"}return i.denotation!=null&&i.hasOwnProperty("denotation")&&!l.isString(i.denotation)?"denotation: string expected":null},e.fromObject=function(i){if(i instanceof o.onnx.TensorShapeProto.Dimension)return i;var d=new o.onnx.TensorShapeProto.Dimension;return i.dimValue!=null&&(l.Long?(d.dimValue=l.Long.fromValue(i.dimValue)).unsigned=!1:typeof i.dimValue=="string"?d.dimValue=parseInt(i.dimValue,10):typeof i.dimValue=="number"?d.dimValue=i.dimValue:typeof i.dimValue=="object"&&(d.dimValue=new l.LongBits(i.dimValue.low>>>0,i.dimValue.high>>>0).toNumber())),i.dimParam!=null&&(d.dimParam=String(i.dimParam)),i.denotation!=null&&(d.denotation=String(i.denotation)),d},e.toObject=function(i,d){d||(d={});var g={};return d.defaults&&(g.denotation=""),i.dimValue!=null&&i.hasOwnProperty("dimValue")&&(typeof i.dimValue=="number"?g.dimValue=d.longs===String?String(i.dimValue):i.dimValue:g.dimValue=d.longs===String?l.Long.prototype.toString.call(i.dimValue):d.longs===Number?new l.LongBits(i.dimValue.low>>>0,i.dimValue.high>>>0).toNumber():i.dimValue,d.oneofs&&(g.value="dimValue")),i.dimParam!=null&&i.hasOwnProperty("dimParam")&&(g.dimParam=i.dimParam,d.oneofs&&(g.value="dimParam")),i.denotation!=null&&i.hasOwnProperty("denotation")&&(g.denotation=i.denotation),g},e.prototype.toJSON=function(){return 
this.constructor.toObject(this,s.util.toJSONOptions)},e}(),t}(),p.TypeProto=function(){function t(r){if(r)for(var i=Object.keys(r),d=0;d<i.length;++d)r[i[d]]!=null&&(this[i[d]]=r[i[d]])}var e;return t.prototype.tensorType=null,t.prototype.denotation="",Object.defineProperty(t.prototype,"value",{get:l.oneOfGetter(e=["tensorType"]),set:l.oneOfSetter(e)}),t.create=function(r){return new t(r)},t.encode=function(r,i){return i||(i=f.create()),r.tensorType!=null&&r.hasOwnProperty("tensorType")&&o.onnx.TypeProto.Tensor.encode(r.tensorType,i.uint32(10).fork()).ldelim(),r.denotation!=null&&r.hasOwnProperty("denotation")&&i.uint32(50).string(r.denotation),i},t.encodeDelimited=function(r,i){return this.encode(r,i).ldelim()},t.decode=function(r,i){r instanceof h||(r=h.create(r));for(var d=i===void 0?r.len:r.pos+i,g=new o.onnx.TypeProto;r.pos<d;){var m=r.uint32();switch(m>>>3){case 1:g.tensorType=o.onnx.TypeProto.Tensor.decode(r,r.uint32());break;case 6:g.denotation=r.string();break;default:r.skipType(7&m)}}return g},t.decodeDelimited=function(r){return r instanceof h||(r=new h(r)),this.decode(r,r.uint32())},t.verify=function(r){if(typeof r!="object"||r===null)return"object expected";if(r.tensorType!=null&&r.hasOwnProperty("tensorType")){var i=o.onnx.TypeProto.Tensor.verify(r.tensorType);if(i)return"tensorType."+i}return r.denotation!=null&&r.hasOwnProperty("denotation")&&!l.isString(r.denotation)?"denotation: string expected":null},t.fromObject=function(r){if(r instanceof o.onnx.TypeProto)return r;var i=new o.onnx.TypeProto;if(r.tensorType!=null){if(typeof r.tensorType!="object")throw TypeError(".onnx.TypeProto.tensorType: object expected");i.tensorType=o.onnx.TypeProto.Tensor.fromObject(r.tensorType)}return r.denotation!=null&&(i.denotation=String(r.denotation)),i},t.toObject=function(r,i){i||(i={});var d={};return i.defaults&&(d.denotation=""),r.tensorType!=null&&r.hasOwnProperty("tensorType")&&(d.tensorType=o.onnx.TypeProto.Tensor.toObject(r.tensorType,i),i.oneofs&&(d.value="tensorType")),r.denotation!=null&&r.hasOwnProperty("denotation")&&(d.denotation=r.denotation),d},t.prototype.toJSON=function(){return this.constructor.toObject(this,s.util.toJSONOptions)},t.Tensor=function(){function r(i){if(i)for(var d=Object.keys(i),g=0;g<d.length;++g)i[d[g]]!=null&&(this[d[g]]=i[d[g]])}return r.prototype.elemType=0,r.prototype.shape=null,r.create=function(i){return new r(i)},r.encode=function(i,d){return d||(d=f.create()),i.elemType!=null&&i.hasOwnProperty("elemType")&&d.uint32(8).int32(i.elemType),i.shape!=null&&i.hasOwnProperty("shape")&&o.onnx.TensorShapeProto.encode(i.shape,d.uint32(18).fork()).ldelim(),d},r.encodeDelimited=function(i,d){return this.encode(i,d).ldelim()},r.decode=function(i,d){i instanceof h||(i=h.create(i));for(var g=d===void 0?i.len:i.pos+d,m=new o.onnx.TypeProto.Tensor;i.pos<g;){var b=i.uint32();switch(b>>>3){case 1:m.elemType=i.int32();break;case 2:m.shape=o.onnx.TensorShapeProto.decode(i,i.uint32());break;default:i.skipType(7&b)}}return m},r.decodeDelimited=function(i){return i instanceof h||(i=new h(i)),this.decode(i,i.uint32())},r.verify=function(i){if(typeof i!="object"||i===null)return"object expected";if(i.elemType!=null&&i.hasOwnProperty("elemType")&&!l.isInteger(i.elemType))return"elemType: integer expected";if(i.shape!=null&&i.hasOwnProperty("shape")){var d=o.onnx.TensorShapeProto.verify(i.shape);if(d)return"shape."+d}return null},r.fromObject=function(i){if(i instanceof o.onnx.TypeProto.Tensor)return i;var d=new 
o.onnx.TypeProto.Tensor;if(i.elemType!=null&&(d.elemType=0|i.elemType),i.shape!=null){if(typeof i.shape!="object")throw TypeError(".onnx.TypeProto.Tensor.shape: object expected");d.shape=o.onnx.TensorShapeProto.fromObject(i.shape)}return d},r.toObject=function(i,d){d||(d={});var g={};return d.defaults&&(g.elemType=0,g.shape=null),i.elemType!=null&&i.hasOwnProperty("elemType")&&(g.elemType=i.elemType),i.shape!=null&&i.hasOwnProperty("shape")&&(g.shape=o.onnx.TensorShapeProto.toObject(i.shape,d)),g},r.prototype.toJSON=function(){return this.constructor.toObject(this,s.util.toJSONOptions)},r}(),t}(),p.OperatorSetIdProto=function(){function t(e){if(e)for(var r=Object.keys(e),i=0;i<r.length;++i)e[r[i]]!=null&&(this[r[i]]=e[r[i]])}return t.prototype.domain="",t.prototype.version=l.Long?l.Long.fromBits(0,0,!1):0,t.create=function(e){return new t(e)},t.encode=function(e,r){return r||(r=f.create()),e.domain!=null&&e.hasOwnProperty("domain")&&r.uint32(10).string(e.domain),e.version!=null&&e.hasOwnProperty("version")&&r.uint32(16).int64(e.version),r},t.encodeDelimited=function(e,r){return this.encode(e,r).ldelim()},t.decode=function(e,r){e instanceof h||(e=h.create(e));for(var i=r===void 0?e.len:e.pos+r,d=new o.onnx.OperatorSetIdProto;e.pos<i;){var g=e.uint32();switch(g>>>3){case 1:d.domain=e.string();break;case 2:d.version=e.int64();break;default:e.skipType(7&g)}}return d},t.decodeDelimited=function(e){return e instanceof h||(e=new h(e)),this.decode(e,e.uint32())},t.verify=function(e){return typeof e!="object"||e===null?"object expected":e.domain!=null&&e.hasOwnProperty("domain")&&!l.isString(e.domain)?"domain: string expected":e.version!=null&&e.hasOwnProperty("version")&&!(l.isInteger(e.version)||e.version&&l.isInteger(e.version.low)&&l.isInteger(e.version.high))?"version: integer|Long expected":null},t.fromObject=function(e){if(e instanceof o.onnx.OperatorSetIdProto)return e;var r=new o.onnx.OperatorSetIdProto;return e.domain!=null&&(r.domain=String(e.domain)),e.version!=null&&(l.Long?(r.version=l.Long.fromValue(e.version)).unsigned=!1:typeof e.version=="string"?r.version=parseInt(e.version,10):typeof e.version=="number"?r.version=e.version:typeof e.version=="object"&&(r.version=new l.LongBits(e.version.low>>>0,e.version.high>>>0).toNumber())),r},t.toObject=function(e,r){r||(r={});var i={};if(r.defaults)if(i.domain="",l.Long){var d=new l.Long(0,0,!1);i.version=r.longs===String?d.toString():r.longs===Number?d.toNumber():d}else i.version=r.longs===String?"0":0;return e.domain!=null&&e.hasOwnProperty("domain")&&(i.domain=e.domain),e.version!=null&&e.hasOwnProperty("version")&&(typeof e.version=="number"?i.version=r.longs===String?String(e.version):e.version:i.version=r.longs===String?l.Long.prototype.toString.call(e.version):r.longs===Number?new l.LongBits(e.version.low>>>0,e.version.high>>>0).toNumber():e.version),i},t.prototype.toJSON=function(){return this.constructor.toObject(this,s.util.toJSONOptions)},t}(),p),y.exports=o},2100:(y,n,a)=>{y.exports=a(9482)},9482:(y,n,a)=>{var u=n;function c(){u.util._configure(),u.Writer._configure(u.BufferWriter),u.Reader._configure(u.BufferReader)}u.build="minimal",u.Writer=a(1173),u.BufferWriter=a(3155),u.Reader=a(1408),u.BufferReader=a(593),u.util=a(9693),u.rpc=a(5994),u.roots=a(5054),u.configure=c,c()},1408:(y,n,a)=>{y.exports=f;var u,c=a(9693),p=c.LongBits,s=c.utf8;function h(d,g){return RangeError("index out of range: "+d.pos+" + "+(g||1)+" > "+d.len)}function f(d){this.buf=d,this.pos=0,this.len=d.length}var l,o=typeof Uint8Array<"u"?function(d){if(d 
instanceof Uint8Array||Array.isArray(d))return new f(d);throw Error("illegal buffer")}:function(d){if(Array.isArray(d))return new f(d);throw Error("illegal buffer")},t=function(){return c.Buffer?function(d){return(f.create=function(g){return c.Buffer.isBuffer(g)?new u(g):o(g)})(d)}:o};function e(){var d=new p(0,0),g=0;if(!(this.len-this.pos>4)){for(;g<3;++g){if(this.pos>=this.len)throw h(this);if(d.lo=(d.lo|(127&this.buf[this.pos])<<7*g)>>>0,this.buf[this.pos++]<128)return d}return d.lo=(d.lo|(127&this.buf[this.pos++])<<7*g)>>>0,d}for(;g<4;++g)if(d.lo=(d.lo|(127&this.buf[this.pos])<<7*g)>>>0,this.buf[this.pos++]<128)return d;if(d.lo=(d.lo|(127&this.buf[this.pos])<<28)>>>0,d.hi=(d.hi|(127&this.buf[this.pos])>>4)>>>0,this.buf[this.pos++]<128)return d;if(g=0,this.len-this.pos>4){for(;g<5;++g)if(d.hi=(d.hi|(127&this.buf[this.pos])<<7*g+3)>>>0,this.buf[this.pos++]<128)return d}else for(;g<5;++g){if(this.pos>=this.len)throw h(this);if(d.hi=(d.hi|(127&this.buf[this.pos])<<7*g+3)>>>0,this.buf[this.pos++]<128)return d}throw Error("invalid varint encoding")}function r(d,g){return(d[g-4]|d[g-3]<<8|d[g-2]<<16|d[g-1]<<24)>>>0}function i(){if(this.pos+8>this.len)throw h(this,8);return new p(r(this.buf,this.pos+=4),r(this.buf,this.pos+=4))}f.create=t(),f.prototype._slice=c.Array.prototype.subarray||c.Array.prototype.slice,f.prototype.uint32=(l=4294967295,function(){if(l=(127&this.buf[this.pos])>>>0,this.buf[this.pos++]<128||(l=(l|(127&this.buf[this.pos])<<7)>>>0,this.buf[this.pos++]<128)||(l=(l|(127&this.buf[this.pos])<<14)>>>0,this.buf[this.pos++]<128)||(l=(l|(127&this.buf[this.pos])<<21)>>>0,this.buf[this.pos++]<128)||(l=(l|(15&this.buf[this.pos])<<28)>>>0,this.buf[this.pos++]<128))return l;if((this.pos+=5)>this.len)throw this.pos=this.len,h(this,10);return l}),f.prototype.int32=function(){return 0|this.uint32()},f.prototype.sint32=function(){var d=this.uint32();return d>>>1^-(1&d)|0},f.prototype.bool=function(){return this.uint32()!==0},f.prototype.fixed32=function(){if(this.pos+4>this.len)throw h(this,4);return r(this.buf,this.pos+=4)},f.prototype.sfixed32=function(){if(this.pos+4>this.len)throw h(this,4);return 0|r(this.buf,this.pos+=4)},f.prototype.float=function(){if(this.pos+4>this.len)throw h(this,4);var d=c.float.readFloatLE(this.buf,this.pos);return this.pos+=4,d},f.prototype.double=function(){if(this.pos+8>this.len)throw h(this,4);var d=c.float.readDoubleLE(this.buf,this.pos);return this.pos+=8,d},f.prototype.bytes=function(){var d=this.uint32(),g=this.pos,m=this.pos+d;if(m>this.len)throw h(this,d);return this.pos+=d,Array.isArray(this.buf)?this.buf.slice(g,m):g===m?new this.buf.constructor(0):this._slice.call(this.buf,g,m)},f.prototype.string=function(){var d=this.bytes();return s.read(d,0,d.length)},f.prototype.skip=function(d){if(typeof d=="number"){if(this.pos+d>this.len)throw h(this,d);this.pos+=d}else do if(this.pos>=this.len)throw h(this);while(128&this.buf[this.pos++]);return this},f.prototype.skipType=function(d){switch(d){case 0:this.skip();break;case 1:this.skip(8);break;case 2:this.skip(this.uint32());break;case 3:for(;(d=7&this.uint32())!=4;)this.skipType(d);break;case 5:this.skip(4);break;default:throw Error("invalid wire type "+d+" at offset "+this.pos)}return this},f._configure=function(d){u=d,f.create=t(),u._configure();var g=c.Long?"toLong":"toNumber";c.merge(f.prototype,{int64:function(){return e.call(this)[g](!1)},uint64:function(){return e.call(this)[g](!0)},sint64:function(){return e.call(this).zzDecode()[g](!1)},fixed64:function(){return 
i.call(this)[g](!0)},sfixed64:function(){return i.call(this)[g](!1)}})}},593:(y,n,a)=>{y.exports=p;var u=a(1408);(p.prototype=Object.create(u.prototype)).constructor=p;var c=a(9693);function p(s){u.call(this,s)}p._configure=function(){c.Buffer&&(p.prototype._slice=c.Buffer.prototype.slice)},p.prototype.string=function(){var s=this.uint32();return this.buf.utf8Slice?this.buf.utf8Slice(this.pos,this.pos=Math.min(this.pos+s,this.len)):this.buf.toString("utf-8",this.pos,this.pos=Math.min(this.pos+s,this.len))},p._configure()},5054:y=>{y.exports={}},5994:(y,n,a)=>{n.Service=a(7948)},7948:(y,n,a)=>{y.exports=c;var u=a(9693);function c(p,s,h){if(typeof p!="function")throw TypeError("rpcImpl must be a function");u.EventEmitter.call(this),this.rpcImpl=p,this.requestDelimited=!!s,this.responseDelimited=!!h}(c.prototype=Object.create(u.EventEmitter.prototype)).constructor=c,c.prototype.rpcCall=function p(s,h,f,l,o){if(!l)throw TypeError("request must be specified");var t=this;if(!o)return u.asPromise(p,t,s,h,f,l);if(t.rpcImpl)try{return t.rpcImpl(s,h[t.requestDelimited?"encodeDelimited":"encode"](l).finish(),function(e,r){if(e)return t.emit("error",e,s),o(e);if(r!==null){if(!(r instanceof f))try{r=f[t.responseDelimited?"decodeDelimited":"decode"](r)}catch(i){return t.emit("error",i,s),o(i)}return t.emit("data",r,s),o(null,r)}t.end(!0)})}catch(e){return t.emit("error",e,s),void setTimeout(function(){o(e)},0)}else setTimeout(function(){o(Error("already ended"))},0)},c.prototype.end=function(p){return this.rpcImpl&&(p||this.rpcImpl(null,null,null),this.rpcImpl=null,this.emit("end").off()),this}},1945:(y,n,a)=>{y.exports=c;var u=a(9693);function c(f,l){this.lo=f>>>0,this.hi=l>>>0}var p=c.zero=new c(0,0);p.toNumber=function(){return 0},p.zzEncode=p.zzDecode=function(){return this},p.length=function(){return 1};var s=c.zeroHash="\0\0\0\0\0\0\0\0";c.fromNumber=function(f){if(f===0)return p;var l=f<0;l&&(f=-f);var o=f>>>0,t=(f-o)/4294967296>>>0;return l&&(t=~t>>>0,o=~o>>>0,++o>4294967295&&(o=0,++t>4294967295&&(t=0))),new c(o,t)},c.from=function(f){if(typeof f=="number")return c.fromNumber(f);if(u.isString(f)){if(!u.Long)return c.fromNumber(parseInt(f,10));f=u.Long.fromString(f)}return f.low||f.high?new c(f.low>>>0,f.high>>>0):p},c.prototype.toNumber=function(f){if(!f&&this.hi>>>31){var l=1+~this.lo>>>0,o=~this.hi>>>0;return l||(o=o+1>>>0),-(l+4294967296*o)}return this.lo+4294967296*this.hi},c.prototype.toLong=function(f){return u.Long?new u.Long(0|this.lo,0|this.hi,!!f):{low:0|this.lo,high:0|this.hi,unsigned:!!f}};var h=String.prototype.charCodeAt;c.fromHash=function(f){return f===s?p:new c((h.call(f,0)|h.call(f,1)<<8|h.call(f,2)<<16|h.call(f,3)<<24)>>>0,(h.call(f,4)|h.call(f,5)<<8|h.call(f,6)<<16|h.call(f,7)<<24)>>>0)},c.prototype.toHash=function(){return String.fromCharCode(255&this.lo,this.lo>>>8&255,this.lo>>>16&255,this.lo>>>24,255&this.hi,this.hi>>>8&255,this.hi>>>16&255,this.hi>>>24)},c.prototype.zzEncode=function(){var f=this.hi>>31;return this.hi=((this.hi<<1|this.lo>>>31)^f)>>>0,this.lo=(this.lo<<1^f)>>>0,this},c.prototype.zzDecode=function(){var f=-(1&this.lo);return this.lo=((this.lo>>>1|this.hi<<31)^f)>>>0,this.hi=(this.hi>>>1^f)>>>0,this},c.prototype.length=function(){var f=this.lo,l=(this.lo>>>28|this.hi<<4)>>>0,o=this.hi>>>24;return o===0?l===0?f<16384?f<128?1:2:f<2097152?3:4:l<16384?l<128?5:6:l<2097152?7:8:o<128?9:10}},9693:function(y,n,a){var u=n;function c(s,h,f){for(var l=Object.keys(h),o=0;o<l.length;++o)s[l[o]]!==void 0&&f||(s[l[o]]=h[l[o]]);return s}function p(s){function 
h(f,l){if(!(this instanceof h))return new h(f,l);Object.defineProperty(this,"message",{get:function(){return f}}),Error.captureStackTrace?Error.captureStackTrace(this,h):Object.defineProperty(this,"stack",{value:new Error().stack||""}),l&&c(this,l)}return(h.prototype=Object.create(Error.prototype)).constructor=h,Object.defineProperty(h.prototype,"name",{get:function(){return s}}),h.prototype.toString=function(){return this.name+": "+this.message},h}u.asPromise=a(4537),u.base64=a(7419),u.EventEmitter=a(9211),u.float=a(945),u.inquire=a(7199),u.utf8=a(4997),u.pool=a(6662),u.LongBits=a(1945),u.isNode=!!(a.g!==void 0&&a.g&&a.g.process&&a.g.process.versions&&a.g.process.versions.node),u.global=u.isNode&&a.g||typeof window<"u"&&window||typeof self<"u"&&self||this,u.emptyArray=Object.freeze?Object.freeze([]):[],u.emptyObject=Object.freeze?Object.freeze({}):{},u.isInteger=Number.isInteger||function(s){return typeof s=="number"&&isFinite(s)&&Math.floor(s)===s},u.isString=function(s){return typeof s=="string"||s instanceof String},u.isObject=function(s){return s&&typeof s=="object"},u.isset=u.isSet=function(s,h){var f=s[h];return!(f==null||!s.hasOwnProperty(h))&&(typeof f!="object"||(Array.isArray(f)?f.length:Object.keys(f).length)>0)},u.Buffer=function(){try{var s=u.inquire("buffer").Buffer;return s.prototype.utf8Write?s:null}catch{return null}}(),u._Buffer_from=null,u._Buffer_allocUnsafe=null,u.newBuffer=function(s){return typeof s=="number"?u.Buffer?u._Buffer_allocUnsafe(s):new u.Array(s):u.Buffer?u._Buffer_from(s):typeof Uint8Array>"u"?s:new Uint8Array(s)},u.Array=typeof Uint8Array<"u"?Uint8Array:Array,u.Long=u.global.dcodeIO&&u.global.dcodeIO.Long||u.global.Long||u.inquire("long"),u.key2Re=/^true|false|0|1$/,u.key32Re=/^-?(?:0|[1-9][0-9]*)$/,u.key64Re=/^(?:[\\x00-\\xff]{8}|-?(?:0|[1-9][0-9]*))$/,u.longToHash=function(s){return s?u.LongBits.from(s).toHash():u.LongBits.zeroHash},u.longFromHash=function(s,h){var f=u.LongBits.fromHash(s);return u.Long?u.Long.fromBits(f.lo,f.hi,h):f.toNumber(!!h)},u.merge=c,u.lcFirst=function(s){return s.charAt(0).toLowerCase()+s.substring(1)},u.newError=p,u.ProtocolError=p("ProtocolError"),u.oneOfGetter=function(s){for(var h={},f=0;f<s.length;++f)h[s[f]]=1;return function(){for(var l=Object.keys(this),o=l.length-1;o>-1;--o)if(h[l[o]]===1&&this[l[o]]!==void 0&&this[l[o]]!==null)return l[o]}},u.oneOfSetter=function(s){return function(h){for(var f=0;f<s.length;++f)s[f]!==h&&delete this[s[f]]}},u.toJSONOptions={longs:String,enums:String,bytes:String,json:!0},u._configure=function(){var s=u.Buffer;s?(u._Buffer_from=s.from!==Uint8Array.from&&s.from||function(h,f){return new s(h,f)},u._Buffer_allocUnsafe=s.allocUnsafe||function(h){return new s(h)}):u._Buffer_from=u._Buffer_allocUnsafe=null}},1173:(y,n,a)=>{y.exports=t;var u,c=a(9693),p=c.LongBits,s=c.base64,h=c.utf8;function f(b,_,v){this.fn=b,this.len=_,this.next=void 0,this.val=v}function l(){}function o(b){this.head=b.head,this.tail=b.tail,this.len=b.len,this.next=b.states}function t(){this.len=0,this.head=new f(l,0,0),this.tail=this.head,this.states=null}var e=function(){return c.Buffer?function(){return(t.create=function(){return new u})()}:function(){return new t}};function r(b,_,v){_[v]=255&b}function i(b,_){this.len=b,this.next=void 0,this.val=_}function d(b,_,v){for(;b.hi;)_[v++]=127&b.lo|128,b.lo=(b.lo>>>7|b.hi<<25)>>>0,b.hi>>>=7;for(;b.lo>127;)_[v++]=127&b.lo|128,b.lo=b.lo>>>7;_[v++]=b.lo}function g(b,_,v){_[v]=255&b,_[v+1]=b>>>8&255,_[v+2]=b>>>16&255,_[v+3]=b>>>24}t.create=e(),t.alloc=function(b){return new 
c.Array(b)},c.Array!==Array&&(t.alloc=c.pool(t.alloc,c.Array.prototype.subarray)),t.prototype._push=function(b,_,v){return this.tail=this.tail.next=new f(b,_,v),this.len+=_,this},i.prototype=Object.create(f.prototype),i.prototype.fn=function(b,_,v){for(;b>127;)_[v++]=127&b|128,b>>>=7;_[v]=b},t.prototype.uint32=function(b){return this.len+=(this.tail=this.tail.next=new i((b>>>=0)<128?1:b<16384?2:b<2097152?3:b<268435456?4:5,b)).len,this},t.prototype.int32=function(b){return b<0?this._push(d,10,p.fromNumber(b)):this.uint32(b)},t.prototype.sint32=function(b){return this.uint32((b<<1^b>>31)>>>0)},t.prototype.uint64=function(b){var _=p.from(b);return this._push(d,_.length(),_)},t.prototype.int64=t.prototype.uint64,t.prototype.sint64=function(b){var _=p.from(b).zzEncode();return this._push(d,_.length(),_)},t.prototype.bool=function(b){return this._push(r,1,b?1:0)},t.prototype.fixed32=function(b){return this._push(g,4,b>>>0)},t.prototype.sfixed32=t.prototype.fixed32,t.prototype.fixed64=function(b){var _=p.from(b);return this._push(g,4,_.lo)._push(g,4,_.hi)},t.prototype.sfixed64=t.prototype.fixed64,t.prototype.float=function(b){return this._push(c.float.writeFloatLE,4,b)},t.prototype.double=function(b){return this._push(c.float.writeDoubleLE,8,b)};var m=c.Array.prototype.set?function(b,_,v){_.set(b,v)}:function(b,_,v){for(var w=0;w<b.length;++w)_[v+w]=b[w]};t.prototype.bytes=function(b){var _=b.length>>>0;if(!_)return this._push(r,1,0);if(c.isString(b)){var v=t.alloc(_=s.length(b));s.decode(b,v,0),b=v}return this.uint32(_)._push(m,_,b)},t.prototype.string=function(b){var _=h.length(b);return _?this.uint32(_)._push(h.write,_,b):this._push(r,1,0)},t.prototype.fork=function(){return this.states=new o(this),this.head=this.tail=new f(l,0,0),this.len=0,this},t.prototype.reset=function(){return this.states?(this.head=this.states.head,this.tail=this.states.tail,this.len=this.states.len,this.states=this.states.next):(this.head=this.tail=new f(l,0,0),this.len=0),this},t.prototype.ldelim=function(){var b=this.head,_=this.tail,v=this.len;return this.reset().uint32(v),v&&(this.tail.next=b.next,this.tail=_,this.len+=v),this},t.prototype.finish=function(){for(var b=this.head.next,_=this.constructor.alloc(this.len),v=0;b;)b.fn(b.val,_,v),v+=b.len,b=b.next;return _},t._configure=function(b){u=b,t.create=e(),u._configure()}},3155:(y,n,a)=>{y.exports=p;var u=a(1173);(p.prototype=Object.create(u.prototype)).constructor=p;var c=a(9693);function p(){u.call(this)}function s(h,f,l){h.length<40?c.utf8.write(h,f,l):f.utf8Write?f.utf8Write(h,l):f.write(h,l)}p._configure=function(){p.alloc=c._Buffer_allocUnsafe,p.writeBytesBuffer=c.Buffer&&c.Buffer.prototype instanceof Uint8Array&&c.Buffer.prototype.set.name==="set"?function(h,f,l){f.set(h,l)}:function(h,f,l){if(h.copy)h.copy(f,l,0,h.length);else for(var o=0;o<h.length;)f[l++]=h[o++]}},p.prototype.bytes=function(h){c.isString(h)&&(h=c._Buffer_from(h,"base64"));var f=h.length>>>0;return this.uint32(f),f&&this._push(p.writeBytesBuffer,f,h),this},p.prototype.string=function(h){var f=c.Buffer.byteLength(h);return this.uint32(f),f&&this._push(s,f,h),this},p._configure()},7714:(y,n,a)=>{n.R=void 0;const u=a(6919),c=a(7448);n.R=new class{async init(){}async createSessionHandler(p,s){const h=new u.Session(s);return await h.loadModel(p),new c.OnnxjsSessionHandler(h)}}},4200:(y,n,a)=>{n.c8=n.rX=void 0;const u=a(1670),c=a(5381),p=a(2157),s=a(2306);n.rX=()=>{if((typeof u.env.wasm.initTimeout!="number"||u.env.wasm.initTimeout<0)&&(u.env.wasm.initTimeout=0),typeof 
u.env.wasm.simd!="boolean"&&(u.env.wasm.simd=!0),typeof u.env.wasm.proxy!="boolean"&&(u.env.wasm.proxy=!1),typeof u.env.wasm.numThreads!="number"||!Number.isInteger(u.env.wasm.numThreads)||u.env.wasm.numThreads<=0){const h=typeof navigator>"u"?(0,c.cpus)().length:navigator.hardwareConcurrency;u.env.wasm.numThreads=Math.min(4,Math.ceil((h||1)/2))}},n.c8=new class{async init(){(0,n.rX)(),await(0,p.initWasm)()}async createSessionHandler(h,f){const l=new s.OnnxruntimeWebAssemblySessionHandler;return await l.loadModel(h,f),Promise.resolve(l)}}},6018:function(y,n,a){var u=this&&this.__createBinding||(Object.create?function(s,h,f,l){l===void 0&&(l=f);var o=Object.getOwnPropertyDescriptor(h,f);o&&!("get"in o?!h.__esModule:o.writable||o.configurable)||(o={enumerable:!0,get:function(){return h[f]}}),Object.defineProperty(s,l,o)}:function(s,h,f,l){l===void 0&&(l=f),s[l]=h[f]}),c=this&&this.__exportStar||function(s,h){for(var f in s)f==="default"||Object.prototype.hasOwnProperty.call(h,f)||u(h,s,f)};Object.defineProperty(n,"__esModule",{value:!0}),c(a(1670),n);const p=a(1670);{const s=a(7714).R;(0,p.registerBackend)("webgl",s,-10)}{const s=a(4200).c8;(0,p.registerBackend)("cpu",s,10),(0,p.registerBackend)("wasm",s,10),(0,p.registerBackend)("xnnpack",s,9)}},246:(y,n)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createAttributeWithCacheKey=void 0;class a{constructor(c){Object.assign(this,c)}get cacheKey(){return this._cacheKey||(this._cacheKey=Object.getOwnPropertyNames(this).sort().map(c=>`${this[c]}`).join(";")),this._cacheKey}}n.createAttributeWithCacheKey=u=>new a(u)},7778:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.Attribute=void 0;const u=a(1446),c=a(9395),p=a(9162),s=a(2517);var h=c.onnxruntime.experimental.fbs;class f{constructor(o){if(this._attributes=new Map,o!=null){for(const t of o)t instanceof u.onnx.AttributeProto?this._attributes.set(t.name,[f.getValue(t),f.getType(t)]):t instanceof h.Attribute&&this._attributes.set(t.name(),[f.getValue(t),f.getType(t)]);if(this._attributes.size<o.length)throw new Error("duplicated attribute names")}}set(o,t,e){this._attributes.set(o,[e,t])}delete(o){this._attributes.delete(o)}getFloat(o,t){return this.get(o,"float",t)}getInt(o,t){return this.get(o,"int",t)}getString(o,t){return this.get(o,"string",t)}getTensor(o,t){return this.get(o,"tensor",t)}getFloats(o,t){return this.get(o,"floats",t)}getInts(o,t){return this.get(o,"ints",t)}getStrings(o,t){return this.get(o,"strings",t)}getTensors(o,t){return this.get(o,"tensors",t)}get(o,t,e){const r=this._attributes.get(o);if(r===void 0){if(e!==void 0)return e;throw new Error(`required attribute not found: ${o}`)}if(r[1]!==t)throw new Error(`type mismatch: expected ${t} but got ${r[1]}`);return r[0]}static getType(o){const t=o instanceof u.onnx.AttributeProto?o.type:o.type();switch(t){case u.onnx.AttributeProto.AttributeType.FLOAT:return"float";case u.onnx.AttributeProto.AttributeType.INT:return"int";case u.onnx.AttributeProto.AttributeType.STRING:return"string";case u.onnx.AttributeProto.AttributeType.TENSOR:return"tensor";case u.onnx.AttributeProto.AttributeType.FLOATS:return"floats";case u.onnx.AttributeProto.AttributeType.INTS:return"ints";case u.onnx.AttributeProto.AttributeType.STRINGS:return"strings";case u.onnx.AttributeProto.AttributeType.TENSORS:return"tensors";default:throw new Error(`attribute type is not supported yet: ${u.onnx.AttributeProto.AttributeType[t]}`)}}static getValue(o){const t=o instanceof 
u.onnx.AttributeProto?o.type:o.type();if(t===u.onnx.AttributeProto.AttributeType.GRAPH||t===u.onnx.AttributeProto.AttributeType.GRAPHS)throw new Error("graph attribute is not supported yet");const e=this.getValueNoCheck(o);if(t===u.onnx.AttributeProto.AttributeType.INT&&s.LongUtil.isLong(e))return s.LongUtil.longToNumber(e);if(t===u.onnx.AttributeProto.AttributeType.INTS){const r=e,i=new Array(r.length);for(let d=0;d<r.length;d++){const g=r[d];i[d]=s.LongUtil.longToNumber(g)}return i}if(t===u.onnx.AttributeProto.AttributeType.TENSOR)return o instanceof u.onnx.AttributeProto?p.Tensor.fromProto(e):p.Tensor.fromOrtTensor(e);if(t===u.onnx.AttributeProto.AttributeType.TENSORS){if(o instanceof u.onnx.AttributeProto)return e.map(r=>p.Tensor.fromProto(r));if(o instanceof h.Attribute)return e.map(r=>p.Tensor.fromOrtTensor(r))}if(t===u.onnx.AttributeProto.AttributeType.STRING&&o instanceof u.onnx.AttributeProto){const r=e;return(0,s.decodeUtf8String)(r)}return t===u.onnx.AttributeProto.AttributeType.STRINGS&&o instanceof u.onnx.AttributeProto?e.map(s.decodeUtf8String):e}static getValueNoCheck(o){return o instanceof u.onnx.AttributeProto?this.getValueNoCheckFromOnnxFormat(o):this.getValueNoCheckFromOrtFormat(o)}static getValueNoCheckFromOnnxFormat(o){switch(o.type){case u.onnx.AttributeProto.AttributeType.FLOAT:return o.f;case u.onnx.AttributeProto.AttributeType.INT:return o.i;case u.onnx.AttributeProto.AttributeType.STRING:return o.s;case u.onnx.AttributeProto.AttributeType.TENSOR:return o.t;case u.onnx.AttributeProto.AttributeType.GRAPH:return o.g;case u.onnx.AttributeProto.AttributeType.FLOATS:return o.floats;case u.onnx.AttributeProto.AttributeType.INTS:return o.ints;case u.onnx.AttributeProto.AttributeType.STRINGS:return o.strings;case u.onnx.AttributeProto.AttributeType.TENSORS:return o.tensors;case u.onnx.AttributeProto.AttributeType.GRAPHS:return o.graphs;default:throw new Error(`unsupported attribute type: ${u.onnx.AttributeProto.AttributeType[o.type]}`)}}static getValueNoCheckFromOrtFormat(o){switch(o.type()){case h.AttributeType.FLOAT:return o.f();case h.AttributeType.INT:return o.i();case h.AttributeType.STRING:return o.s();case h.AttributeType.TENSOR:return o.t();case h.AttributeType.GRAPH:return o.g();case h.AttributeType.FLOATS:return o.floatsArray();case h.AttributeType.INTS:{const t=[];for(let e=0;e<o.intsLength();e++)t.push(o.ints(e));return t}case h.AttributeType.STRINGS:{const t=[];for(let e=0;e<o.stringsLength();e++)t.push(o.strings(e));return t}case h.AttributeType.TENSORS:{const t=[];for(let e=0;e<o.tensorsLength();e++)t.push(o.tensors(e));return t}default:throw new Error(`unsupported attribute type: ${h.AttributeType[o.type()]}`)}}}n.Attribute=f},7091:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.resolveBackend=n.backend=void 0;const u=a(5038),c=new Map;async function p(s){const h=n.backend;if(h[s]!==void 0&&function(f){const l=f;return"initialize"in l&&typeof l.initialize=="function"&&"createSessionHandler"in l&&typeof l.createSessionHandler=="function"&&"dispose"in l&&typeof l.dispose=="function"}(h[s])){const f=h[s];let l=f.initialize();if(typeof l=="object"&&"then"in l&&(l=await l),l)return c.set(s,f),f}}n.backend={webgl:new u.WebGLBackend},n.resolveBackend=async function s(h){if(!h)return s(["webgl"]);{const f=typeof h=="string"?[h]:h;for(const l of f){const o=c.get(l);if(o)return o;const t=await p(l);if(t)return t}}throw new Error("no available backend to use")}},5038:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.WebGLBackend=void 0;const 
u=a(1670),c=a(6231),p=a(6416),s=a(7305);n.WebGLBackend=class{get contextId(){return u.env.webgl.contextId}set contextId(h){u.env.webgl.contextId=h}get matmulMaxBatchSize(){return u.env.webgl.matmulMaxBatchSize}set matmulMaxBatchSize(h){u.env.webgl.matmulMaxBatchSize=h}get textureCacheMode(){return u.env.webgl.textureCacheMode}set textureCacheMode(h){u.env.webgl.textureCacheMode=h}get pack(){return u.env.webgl.pack}set pack(h){u.env.webgl.pack=h}get async(){return u.env.webgl.async}set async(h){u.env.webgl.async=h}initialize(){try{return this.glContext=(0,s.createWebGLContext)(this.contextId),typeof this.matmulMaxBatchSize!="number"&&(this.matmulMaxBatchSize=16),typeof this.textureCacheMode!="string"&&(this.textureCacheMode="full"),typeof this.pack!="boolean"&&(this.pack=!1),typeof this.async!="boolean"&&(this.async=!1),c.Logger.setWithEnv(u.env),c.Logger.verbose("WebGLBackend",`Created WebGLContext: ${typeof this.glContext} with matmulMaxBatchSize: ${this.matmulMaxBatchSize}; textureCacheMode: ${this.textureCacheMode}; pack: ${this.pack}; async: ${this.async}.`),!0}catch(h){return c.Logger.warning("WebGLBackend",`Unable to initialize WebGLBackend. ${h}`),!1}}createSessionHandler(h){return new p.WebGLSessionHandler(this,h)}dispose(){this.glContext.dispose()}}},5107:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.CoordsGlslLib=void 0;const u=a(2517),c=a(8520),p=a(5060),s=a(7859),h=a(9390);class f extends c.GlslLib{constructor(o){super(o)}getFunctions(){return Object.assign(Object.assign(Object.assign(Object.assign(Object.assign(Object.assign(Object.assign({},this.offsetToCoords()),this.coordsToOffset()),this.toVec()),this.valueFrom()),this.getCommonUtilFuncs()),this.getInputsSamplingSnippets()),this.getOutputSamplingSnippet())}getCustomTypes(){return{}}offsetToCoords(){return{offsetToCoords:new c.GlslLibRoutine(` - vec2 offsetToCoords(int offset, int width, int height) { - int t = offset / width; - int s = offset - t*width; - vec2 coords = (vec2(s,t) + vec2(0.5,0.5)) / vec2(width, height); - return coords; - } - `)}}coordsToOffset(){return{coordsToOffset:new c.GlslLibRoutine(` - int coordsToOffset(vec2 coords, int width, int height) { - float s = coords.s * float(width); - float t = coords.t * float(height); - int offset = int(t) * width + int(s); - return offset; - } - `)}}getOutputSamplingSnippet(){const o=this.context.outputTextureLayout;return o.isPacked?this.getPackedOutputSamplingSnippet(o):this.getUnpackedOutputSamplingSnippet(o)}getPackedOutputSamplingSnippet(o){const t=o.unpackedShape,e=[o.width,o.height],r={},i="getOutputCoords";switch(t.length){case 0:r[i]=this.getOutputScalarCoords();break;case 1:r[i]=this.getOutputPacked1DCoords(t,e);break;case 2:r[i]=this.getOutputPacked2DCoords(t,e);break;case 3:r[i]=this.getOutputPacked3DCoords(t,e);break;default:r[i]=this.getOutputPackedNDCoords(t,e)}const d=` - void setOutput(vec4 val) { - ${(0,p.getGlsl)(this.context.glContext.version).output} = val; - } - `;return r.floatTextureSetRGBA=new c.GlslLibRoutine(d),r}getUnpackedOutputSamplingSnippet(o){const t=o.unpackedShape,e=[o.width,o.height],r={},i="getOutputCoords";switch(t.length){case 0:r[i]=this.getOutputScalarCoords();break;case 1:r[i]=this.getOutputUnpacked1DCoords(t,e);break;case 2:r[i]=this.getOutputUnpacked2DCoords(t,e);break;case 3:r[i]=this.getOutputUnpacked3DCoords(t,e);break;case 4:r[i]=this.getOutputUnpacked4DCoords(t,e);break;case 5:r[i]=this.getOutputUnpacked5DCoords(t,e);break;case 6:r[i]=this.getOutputUnpacked6DCoords(t,e);break;default:throw new 
Error(`Unsupported output dimensionality: ${t.length}`)}const d=` - void setOutput(float val) { - ${(0,p.getGlsl)(this.context.glContext.version).output} = vec4(val, 0, 0, 0); - } - `;return r.floatTextureSetR=new c.GlslLibRoutine(d),r}getOutputScalarCoords(){return new c.GlslLibRoutine(` - int getOutputCoords() { - return 0; - } - `)}getOutputPacked1DCoords(o,t){const e=t;let r="";return e[0]===1?(r=` - int getOutputCoords() { - return 2 * int(TexCoords.y * ${e[1]}.0); - } - `,new c.GlslLibRoutine(r)):e[1]===1?(r=` - int getOutputCoords() { - return 2 * int(TexCoords.x * ${e[0]}.0); - } - `,new c.GlslLibRoutine(r)):(r=` - int getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${e[0]}, ${e[1]})); - return 2 * (resTexRC.y * ${e[0]} + resTexRC.x); - } - `,new c.GlslLibRoutine(r))}getOutputPacked2DCoords(o,t){let e="";if(u.ArrayUtil.arraysEqual(o,t))return e=` - ivec2 getOutputCoords() { - return 2 * ivec2(TexCoords.xy * vec2(${t[0]}, ${t[1]})); - } - `,new c.GlslLibRoutine(e);const r=t,i=Math.ceil(o[1]/2);return e=` - ivec2 getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${r[0]}, ${r[1]})); - - int index = resTexRC.y * ${r[0]} + resTexRC.x; - - // reverse r and c order for packed texture - int r = imod(index, ${i}) * 2; - int c = 2 * (index / ${i}); - - return ivec2(r, c); - } - `,new c.GlslLibRoutine(e)}getOutputPacked3DCoords(o,t){const e=[t[0],t[1]],r=Math.ceil(o[2]/2),i=r*Math.ceil(o[1]/2),d=` - ivec3 getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${e[0]}, ${e[1]})); - int index = resTexRC.y * ${e[0]} + resTexRC.x; - - int b = index / ${i}; - index -= b * ${i}; - - // reverse r and c order for packed texture - int r = imod(index, ${r}) * 2; - int c = 2 * (index / ${r}); - - return ivec3(b, r, c); - } - `;return new c.GlslLibRoutine(d)}getOutputPackedNDCoords(o,t){const e=[t[0],t[1]],r=Math.ceil(o[o.length-1]/2),i=r*Math.ceil(o[o.length-2]/2);let d=i,g="",m="b, r, c";for(let _=2;_<o.length-1;_++)d*=o[o.length-_-1],g=` - int b${_} = index / ${d}; - index -= b${_} * ${d}; - `+g,m=`b${_}, `+m;const b=` - ivec${o.length} getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${e[0]}, ${e[1]})); - int index = resTexRC.y * ${e[0]} + resTexRC.x; - - ${g} - - int b = index / ${i}; - index -= b * ${i}; - - // reverse r and c order for packed texture - int r = imod(index, ${r}) * 2; - int c = 2 * (index / ${r}); - - return ivec${o.length}(${m}); - } - `;return new c.GlslLibRoutine(b)}getOutputUnpacked1DCoords(o,t){const e=` - int getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${t[0]}, ${t[1]})); - return resTexRC.y * ${t[0]} + resTexRC.x; - } - `;return new c.GlslLibRoutine(e)}getOutputUnpacked2DCoords(o,t){const e=` - ivec2 getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${t[0]}, ${t[1]})); - int index = resTexRC.y * ${t[0]} + resTexRC.x; - int r = index / ${o[1]}; - int c = index - r * ${o[1]}; - return ivec2(r, c); - } - `;return new c.GlslLibRoutine(e)}getOutputUnpacked3DCoords(o,t){let e="";const r=o.length;let i=null;r<2&&(i=[]),i=new Array(r-1),i[r-2]=o[r-1];for(let m=r-3;m>=0;--m)i[m]=i[m+1]*o[m+1];const d=["r","c","d"],g=i.map((m,b)=>`int ${d[b]} = index / ${m}; ${b===i.length-1?`int ${d[b+1]} = index - ${d[b]} * ${m}`:`index -= ${d[b]} * ${m}`};`).join("");return e=` - ivec3 getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${t[0]}, ${t[1]})); - int index = resTexRC.y * ${t[0]} + resTexRC.x; - ${g} - return ivec3(r, c, d); - } - `,new 
c.GlslLibRoutine(e)}getOutputUnpacked4DCoords(o,t){let e="";const r=o.length;let i=null;r<2&&(i=[]),i=new Array(r-1),i[r-2]=o[r-1];for(let m=r-3;m>=0;--m)i[m]=i[m+1]*o[m+1];const d=["r","c","d","d2"],g=i.map((m,b)=>`int ${d[b]} = index / ${m}; ${b===i.length-1?`int ${d[b+1]} = index - ${d[b]} * ${m}`:`index -= ${d[b]} * ${m}`};`).join("");return e=` - ivec4 getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${t[0]}, ${t[1]})); - int index = resTexRC.y * ${t[0]} + resTexRC.x; - ${g} - return ivec4(r, c, d, d2); - } - `,new c.GlslLibRoutine(e)}getOutputUnpacked5DCoords(o,t){let e="";const r=o.length;let i=null;r<2&&(i=[]),i=new Array(r-1),i[r-2]=o[r-1];for(let m=r-3;m>=0;--m)i[m]=i[m+1]*o[m+1];const d=["r","c","d","d2","d3"],g=i.map((m,b)=>`int ${d[b]} = index / ${m}; ${b===i.length-1?`int ${d[b+1]} = index - ${d[b]} * ${m}`:`index -= ${d[b]} * ${m}`};`).join("");return e=` - ivec5 getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${t[0]}, ${t[1]})); - int index = resTexRC.y * ${t[0]} + resTexRC.x; - ${g} - return ivec5(r, c, d, d2, d3); - } - `,new c.GlslLibRoutine(e)}getOutputUnpacked6DCoords(o,t){let e="";const r=o.length;let i=null;r<2&&(i=[]),i=new Array(r-1),i[r-2]=o[r-1];for(let m=r-3;m>=0;--m)i[m]=i[m+1]*o[m+1];const d=["r","c","d","d2","d3","d4"],g=i.map((m,b)=>`int ${d[b]} = index / ${m}; ${b===i.length-1?`int ${d[b+1]} = index - ${d[b]} * ${m}`:`index -= ${d[b]} * ${m}`};`).join("");return e=` - ivec6 getOutputCoords() { - ivec2 resTexRC = ivec2(TexCoords.xy * - vec2(${t[0]}, ${t[1]})); - int index = resTexRC.y * ${t[0]} + resTexRC.x; - ${g} - return ivec6(r, c, d, d2, d3, d4); - } - `,new c.GlslLibRoutine(e)}getCommonUtilFuncs(){const o={};let t="uvFromFlat";o[t]=new c.GlslLibRoutine(` - vec2 uvFromFlat(int texNumR, int texNumC, int index) { - int texC = index / texNumR; - int texR = index - texC * texNumR; - // TODO: swap texR, texC order in following function so row is corresponding to u and column is corresponding to - // v. 
- return (vec2(texR, texC) + halfCR) / vec2(texNumR, texNumC); - } - `),t="packedUVfrom1D",o[t]=new c.GlslLibRoutine(` - vec2 packedUVfrom1D(int texNumR, int texNumC, int index) { - int texelIndex = index / 2; - int texR = texelIndex / texNumC; - int texC = texelIndex - texR * texNumC; - return (vec2(texC, texR) + halfCR) / vec2(texNumC, texNumR); - } - `),t="packedUVfrom2D",o[t]=new c.GlslLibRoutine(` - vec2 packedUVfrom2D(int texNumR, int texNumC, int texelsInLogicalRow, int row, int col) { - int texelIndex = (row / 2) * texelsInLogicalRow + (col / 2); - int texR = texelIndex / texNumC; - int texC = texelIndex - texR * texNumC; - return (vec2(texC, texR) + halfCR) / vec2(texNumC, texNumR); - } - `),t="packedUVfrom3D",o[t]=new c.GlslLibRoutine(` - vec2 packedUVfrom3D(int texNumR, int texNumC, - int texelsInBatch, int texelsInLogicalRow, int b, - int row, int col) { - int index = b * texelsInBatch + (row / 2) * texelsInLogicalRow + (col / 2); - int texR = index / texNumC; - int texC = index - texR * texNumC; - return (vec2(texC, texR) + halfCR) / vec2(texNumC, texNumR); - } - `),t="sampleTexture";const e=(0,p.getGlsl)(this.context.glContext.version);return o[t]=new c.GlslLibRoutine(` - float sampleTexture(sampler2D textureSampler, vec2 uv) { - return ${e.texture2D}(textureSampler, uv).r; - }`),o}getInputsSamplingSnippets(){const o={},t=this.context.outputTextureLayout;return this.context.programInfo.inputNames.forEach((e,r)=>{const i=this.context.inputTextureLayouts[r],d=(0,h.generateShaderFuncNameFromInputSamplerName)(e);i.isPacked?o[d]=this.getPackedSamplerFromInput(d,e,i):o[d]=this.getUnpackedSamplerFromInput(d,e,i);const g=(0,h.generateShaderFuncNameFromInputSamplerNameAtOutCoords)(e);i.unpackedShape.length<=t.unpackedShape.length&&(i.isPacked?o[g]=this.getPackedSamplerAtOutputCoords(g,i,t,e):o[g]=this.getUnpackedSamplerAtOutputCoords(g,i,t,e))}),o}getPackedSamplerAtOutputCoords(o,t,e,r){const i=t.unpackedShape,d=e.unpackedShape,g=r,m=(0,h.generateShaderFuncNameFromInputSamplerName)(g),b=i.length,_=d.length,v=u.BroadcastUtil.getBroadcastDims(i,d),w=(0,h.getCoordsDataType)(_),S=_-b;let A;const O=(0,h.getGlChannels)();A=b===0?"":_<2&&v.length>=1?"coords = 0;":v.map(F=>`coords.${O[F+S]} = 0;`).join(` -`);let x="";x=_<2&&b>0?"coords":i.map((F,H)=>`coords.${O[H+S]}`).join(", ");let I="return outputValue;";const N=u.ShapeUtil.size(i)===1,B=u.ShapeUtil.size(d)===1;if(b!==1||N||B){if(N&&!B)I=_===1?` - return vec4(outputValue.x, outputValue.x, 0., 0.); - `:` - return vec4(outputValue.x); - `;else if(v.length){const F=b-2,H=b-1;v.indexOf(F)>-1&&v.indexOf(H)>-1?I="return vec4(outputValue.x);":v.indexOf(F)>-1?I="return vec4(outputValue.x, outputValue.y, outputValue.x, outputValue.y);":v.indexOf(H)>-1&&(I="return vec4(outputValue.xx, outputValue.zz);")}}else I=` - return vec4(outputValue.xy, outputValue.xy); - `;const L=` - vec4 ${o}() { - ${w} coords = getOutputCoords(); - - int lastDim = coords.${O[_-1]}; - coords.${O[_-1]} = coords.${O[_-2]}; - coords.${O[_-2]} = lastDim; - - ${A} - vec4 outputValue = ${m}(${x}); - ${I} - } - `;return new c.GlslLibRoutine(L,["coordinates.getOutputCoords"])}getUnpackedSamplerAtOutputCoords(o,t,e,r){const i=[e.width,e.height],d=[t.width,t.height],g=t.unpackedShape.length,m=e.unpackedShape.length,b=t.unpackedShape,_=e.unpackedShape,v=(0,h.generateShaderFuncNameFromInputSamplerName)(r);if(g===m&&u.ArrayUtil.arraysEqual(d,i)){const B=` - float ${o}() { - return sampleTexture(${r}, TexCoords); - } - `;return new c.GlslLibRoutine(B,["coordinates.sampleTexture"])}const 
w=(0,h.getCoordsDataType)(m),S=u.BroadcastUtil.getBroadcastDims(b,_),A=m-g;let O;const x=(0,h.getGlChannels)();O=g===0?"":m<2&&S.length>=1?"coords = 0;":S.map(B=>`coords.${x[B+A]} = 0;`).join(` -`);let I="";I=m<2&&g>0?"coords":t.unpackedShape.map((B,L)=>`coords.${x[L+A]}`).join(", ");const N=` - float ${o}() { - ${w} coords = getOutputCoords(); - ${O} - return ${v}(${I}); - } - `;return new c.GlslLibRoutine(N,["coordinates.getOutputCoords"])}getPackedSamplerFromInput(o,t,e){switch(e.unpackedShape.length){case 0:return this.getPackedSamplerScalar(o,t);case 1:return this.getPackedSampler1D(o,t,e);case 2:return this.getPackedSampler2D(o,t,e);case 3:return this.getPackedSampler3D(o,t,e);default:return this.getPackedSamplerND(o,t,e)}}getUnpackedSamplerFromInput(o,t,e){const r=e.unpackedShape;switch(r.length){case 0:return this.getUnpackedSamplerScalar(o,t,e);case 1:return this.getUnpackedSampler1D(o,t,e);case 2:return this.getUnpackedSampler2D(o,t,e);case 3:return this.getUnpackedSampler3D(o,t,e);case 4:return this.getUnpackedSampler4D(o,t,e);case 5:return this.getUnpackedSampler5D(o,t,e);case 6:return this.getUnpackedSampler6D(o,t,e);default:throw new Error(`Unsupported dimension ${r.length}-D`)}}getPackedSamplerScalar(o,t){const e=` - vec4 ${o}() { - return ${(0,p.getGlsl)(this.context.glContext.version).texture2D}(${t}, halfCR); - } - `;return new c.GlslLibRoutine(e)}getPackedSampler1D(o,t,e){const r=[e.width,e.height],i=[r[1],r[0]],d=(0,p.getGlsl)(this.context.glContext.version),g=`vec4 ${o}(int index) { - vec2 uv = packedUVfrom1D( - ${i[0]}, ${i[1]}, index); - return ${d.texture2D}(${t}, uv); - }`;return new c.GlslLibRoutine(g,["coordinates.packedUVfrom1D"])}getPackedSampler2D(o,t,e){const r=e.unpackedShape,i=[e.width,e.height],d=(0,p.getGlsl)(this.context.glContext.version),g=i[0],m=i[1];if(i!=null&&u.ArrayUtil.arraysEqual(r,i)){const w=`vec4 ${o}(int row, int col) { - vec2 uv = (vec2(col, row) + halfCR) / vec2(${m}.0, ${g}.0); - return ${d.texture2D}(${t}, uv); - }`;return new c.GlslLibRoutine(w)}const b=i,_=Math.ceil(r[1]/2),v=`vec4 ${o}(int row, int col) { - vec2 uv = packedUVfrom2D(${b[1]}, ${b[0]}, ${_}, row, col); - return ${d.texture2D}(${t}, uv); - }`;return new c.GlslLibRoutine(v,["coordinates.packedUVfrom2D"])}getPackedSampler3D(o,t,e){const r=e.unpackedShape,i=[e.width,e.height],d=[i[0],i[1]],g=(0,p.getGlsl)(this.context.glContext.version);if(r[0]===1){const w=r.slice(1),S=[1,2],A=(0,h.squeezeInputShape)(r,w),O=["b","row","col"],x=JSON.parse(JSON.stringify(e));x.unpackedShape=A;const I=this.getPackedSamplerFromInput(o,t,x),N=`${I.routineBody} - vec4 ${o}(int b, int row, int col) { - return ${o}(${(0,h.getSqueezedParams)(O,S)}); - } `;return new c.GlslLibRoutine(N,I.dependencies)}const m=d[0],b=d[1],_=Math.ceil(r[2]/2),v=`vec4 ${o}(int b, int row, int col) { - vec2 uv = packedUVfrom3D( - ${b}, ${m}, ${_*Math.ceil(r[1]/2)}, ${_}, b, row, col); - return ${g.texture2D}(${t}, uv);}`;return new c.GlslLibRoutine(v,["coordinates.packedUVfrom3D"])}getPackedSamplerND(o,t,e){const r=e.unpackedShape,i=r.length,d=[e.width,e.height],g=(0,p.getGlsl)(this.context.glContext.version),m=[d[0],d[1]],b=m[1],_=m[0],v=Math.ceil(r[i-1]/2);let w=v*Math.ceil(r[i-2]/2),S="int b, int row, int col",A=`b * ${w} + (row / 2) * ${v} + (col / 2)`;for(let x=2;x<i-1;x++)S=`int b${x}, `+S,w*=r[i-x-1],A=`b${x} * ${w} + `+A;const O=`vec4 ${o}(${S}) { - int index = ${A}; - int texR = index / ${_}; - int texC = index - texR * ${_}; - vec2 uv = (vec2(texC, texR) + halfCR) / vec2(${_}, ${b}); - return 
${g.texture2D}(${t}, uv); - }`;return new c.GlslLibRoutine(O)}getUnpackedSamplerScalar(o,t,e){const[r,i]=[e.width,e.height];if(r===1&&i===1){const g=` - float ${o}() { - return sampleTexture(${t}, halfCR); - } - `;return new c.GlslLibRoutine(g,["coordinates.sampleTexture"])}const d=` - float ${o}() { - int offset_${t} = coordsToOffset(TexCoords, ${r}, ${i}); - vec2 uv = uvFromFlat(${r}, ${i}, offset_${t}); - return sampleTexture(${t}, uv); - } - `;return new c.GlslLibRoutine(d,["coordinates.uvFromFlat","coordinates.sampleTexture","coordinates.coordsToOffset"])}getUnpackedSampler1D(o,t,e){const r=e.width,i=e.height;if(i===1&&r===1){const g=` - float ${o}(int index) { - return sampleTexture(${t}, halfCR); - } - `;return new c.GlslLibRoutine(g,["coordinates.sampleTexture"])}if(i===1){const g=` - float ${o}(int index) { - vec2 uv = vec2((float(index) + 0.5) / ${r}.0, 0.5); - return sampleTexture(${t}, uv); - } - `;return new c.GlslLibRoutine(g,["coordinates.sampleTexture"])}if(r===1){const g=` - float ${o}(int index) { - vec2 uv = vec2(0.5, (float(index) + 0.5) / ${i}.0); - return sampleTexture(${t}, uv); - } - `;return new c.GlslLibRoutine(g,["coordinates.sampleTexture"])}const d=` - float ${o}(int index) { - vec2 uv = uvFromFlat(${r}, ${i}, index); - return sampleTexture(${t}, uv); - } - `;return new c.GlslLibRoutine(d,["coordinates.uvFromFlat","coordinates.sampleTexture"])}getUnpackedSampler2D(o,t,e){const r=e.unpackedShape,i=[e.height,e.width];if(i!=null&&u.ArrayUtil.arraysEqual(r,i)){const w=` - float ${o}(int row, int col) { - vec2 uv = (vec2(row, col) + halfCR) / vec2(${i[1]}.0, ${i[0]}.0); - return sampleTexture(${t}, uv); - } - `;return new c.GlslLibRoutine(w,["coordinates.sampleTexture"])}const{newShape:d,keptDims:g}=(0,s.squeezeShape)(r),m=d;if(m.length<r.length){const w=(0,h.squeezeInputShape)(r,m),S=JSON.parse(JSON.stringify(e));S.unpackedShape=w;const A=["col","row"],O=` - ${this.getUnpackedSamplerFromInput(o,t,S).routineBody} - float ${o}(int row, int col) { - return ${o}(${(0,h.getSqueezedParams)(A,g)}); - } - `;return new c.GlslLibRoutine(O,["coordinates.sampleTexture"])}const b=i[1],_=i[0];if(_===1){const w=` - float ${o}(int row, int col) { - int offset_${t} = coordsToOffset(TexCoords, ${b}, ${_}); - float index = dot(vec3(row, col, offset_${t}), vec3(${r[1]}, 1, 1)); - vec2 uv = vec2(0.5, (index + 0.5) / ${b}.0); - return sampleTexture(${t}, uv); - } - `;return new c.GlslLibRoutine(w,["coordinates.sampleTexture","coordinates.coordsToOffset"])}if(b===1){const w=` - float ${o}(int row, int col) { - int offset_${t} = coordsToOffset(TexCoords, ${b}, ${_}); - float index = dot(vec3(row, col, offset_${t}), vec3(${r[1]}, 1, 1)); - vec2 uv = vec2((index + 0.5) / ${_}.0, 0.5); - return sampleTexture(${t}, uv); - } - `;return new c.GlslLibRoutine(w,["coordinates.sampleTexture","coordinates.coordsToOffset"])}const v=` - float ${o}(int row, int col) { - int index = col * ${r[1]} + row; - vec2 uv = uvFromFlat(${b}, ${_}, index); - return sampleTexture(${t}, uv); - } - `;return new c.GlslLibRoutine(v,["coordinates.uvFromFlat","coordinates.sampleTexture","coordinates.coordsToOffset"])}getUnpackedSampler3D(o,t,e){const r=e.unpackedShape,i=r[1]*r[2],d=r[2],{newShape:g,keptDims:m}=(0,s.squeezeShape)(r),b=g;if(b.length<r.length){const v=(0,h.squeezeInputShape)(r,b),w=["batch","col","row"],S=JSON.parse(JSON.stringify(e));S.unpackedShape=v;const A=this.getUnpackedSamplerFromInput(o,t,S),O=m.reverse(),x=` - ${A.routineBody} - float ${o}(int batch, int row, int col) { - return 
${o}(${(0,h.getSqueezedParams)(w,O)}); - } - `;return new c.GlslLibRoutine(x,A.dependencies)}const _=` - float ${o}(int depth, int row, int col) { - // Explicitly use integer operations as dot() only works on floats. - int index = depth * ${i} + col * ${d} + row; - vec2 uv = uvFromFlat(${e.width}, ${e.height}, index); - return sampleTexture(${t}, uv); - } - `;return new c.GlslLibRoutine(_,["coordinates.uvFromFlat","coordinates.sampleTexture","coordinates.coordsToOffset"])}getUnpackedSampler4D(o,t,e){const r=e.unpackedShape,i=r[3],d=r[2]*i,g=` - float ${o}(int row, int col, int depth, int depth2) { - int index = row * ${r[1]*d} + col * ${d} + - depth2 * ${i} + depth; - vec2 uv = uvFromFlat(${e.width}, ${e.height}, index); - return sampleTexture(${t}, uv); - } - `;return new c.GlslLibRoutine(g,["coordinates.uvFromFlat","coordinates.sampleTexture"])}getUnpackedSampler5D(o,t,e){const r=e.unpackedShape,i=r[4],d=r[3]*i,g=r[2]*d,m=r[1]*g,{newShape:b,keptDims:_}=(0,s.squeezeShape)(r);if(b.length<r.length){const w=(0,h.squeezeInputShape)(r,b),S=["row","col","depth","depth2","depth3"],A=JSON.parse(JSON.stringify(e));A.unpackedShape=w;const O=` - ${this.getUnpackedSamplerFromInput(o,t,A).routineBody} - float ${o}(int row, int col, int depth, int depth2, int depth3) { - return ${o}(${(0,h.getSqueezedParams)(S,_)}); - } - `;return new c.GlslLibRoutine(O,["coordinates.sampleTexture","coordinates.uvFromFlat"])}const v=` - float ${o}(int row, int col, int depth, int depth2, int depth3) { - int index = row * ${m} + col * ${g} + depth * ${d} + - depth3 * ${i} + depth2; - vec2 uv = uvFromFlat(${e.width}, ${e.height}, index); - return sampleTexture(${t}, uv); - } - `;return new c.GlslLibRoutine(v,["coordinates.sampleTexture","coordinates.uvFromFlat"])}getUnpackedSampler6D(o,t,e){const r=e.unpackedShape,i=r[5],d=r[4]*i,g=r[3]*d,m=r[2]*g,b=r[1]*m,{newShape:_,keptDims:v}=(0,s.squeezeShape)(r);if(_.length<r.length){const S=(0,h.squeezeInputShape)(r,_),A=["row","col","depth","depth2","depth3","depth4"],O=JSON.parse(JSON.stringify(e));O.unpackedShape=S;const x=` - ${this.getUnpackedSamplerFromInput(o,t,O).routineBody} - float ${o}(int row, int col, int depth, - int depth2, int depth3, int depth4) { - return ${o}(${(0,h.getSqueezedParams)(A,v)}); - } - `;return new c.GlslLibRoutine(x,["coordinates.sampleTexture","coordinates.uvFromFlat"])}const w=` - float ${o}(int row, int col, int depth, - int depth2, int depth3, int depth4) { - int index = row * ${b} + col * ${m} + depth * ${g} + - depth2 * ${d} + depth3 * ${i} + depth4; - vec2 uv = uvFromFlat(${e.width}, ${e.height}, index); - return sampleTexture(${t}, uv); - } - `;return new c.GlslLibRoutine(w,["coordinates.uvFromFlat","coordinates.sampleTexture","coordinates.coordsToOffset"])}toVec(){const o=this.context.outputTextureLayout,t=o.shape.length,e=o.strides,r=o.width,i=o.height,d=[];for(let m=0;m<t-1;++m)d.push(` - c[${m}] = offset / ${e[m]};`),d.push(` - offset -= c[${m}] * ${e[m]};`);d.push(` - c[${t-1}] = offset;`);const g=` - void toVec(vec2 texCoords, out int c[${t}]) { - int offset = coordsToOffset(texCoords, ${r}, ${i}); - ${d.join("")} - } - void toVec(int offset, out int c[${t}]) { - ${d.join("")} - } - `;return{toVec:new c.GlslLibRoutine(g,["coordinates.coordsToOffset"])}}valueFrom(){const o={};return this.context.programInfo.inputNames.forEach((t,e)=>{const r=this.context.inputTextureLayouts[e],i=(r.unpackedShape.length>0?r.unpackedShape:r.shape).length;let d=`_${t}`;o[d]=new 
c.GlslLibRoutine(this.getValueFromSingle(t,i,r.width,r.height,!1),[`shapeUtils.indicesToOffset${d}`,"coordinates.offsetToCoords","fragcolor.getColorAsFloat"]),d+="_T",o[d]=new c.GlslLibRoutine(this.getValueFromSingle(t,i,r.width,r.height,!0),[`shapeUtils.indicesToOffset${d}`,"coordinates.offsetToCoords","fragcolor.getColorAsFloat"])}),o}getValueFromSingle(o,t,e,r,i){let d=`_${o}`;return i&&(d+="_T"),` - float ${d}(int m[${t}]) { - int offset = indicesToOffset${d}(m); - vec2 coords = offsetToCoords(offset, ${e}, ${r}); - float value = getColorAsFloat(${(0,p.getGlsl)(this.context.glContext.version).texture2D}(${o}, coords)); - return value; - } - `}getPackedValueFrom(o,t,e,r,i){let d=`_${o}_Pack`;return i&&(d+="_T"),` - vec4 ${d}(int m[${t}]) { - int offset = indicesToOffset_${o}(m); - vec2 coords = offsetToCoords(offset, ${e}, ${r}); - return ${(0,p.getGlsl)(this.context.glContext.version).texture2D}(${o}, coords); - } - `}}n.CoordsGlslLib=f},8520:(y,n)=>{var a;Object.defineProperty(n,"__esModule",{value:!0}),n.TopologicalSortGlslRoutines=n.GlslLibRoutineNode=n.GlslLibRoutine=n.GlslLib=n.GlslContext=n.FunctionType=void 0,(a=n.FunctionType||(n.FunctionType={}))[a.ValueBased=0]="ValueBased",a[a.Positional=1]="Positional",n.GlslContext=class{constructor(u,c,p,s){this.glContext=u,this.programInfo=c,this.inputTextureLayouts=p,this.outputTextureLayout=s}},n.GlslLib=class{constructor(u){this.context=u}},n.GlslLibRoutine=class{constructor(u,c){this.routineBody=u,this.dependencies=c}},n.GlslLibRoutineNode=class{constructor(u,c,p){this.name=u,this.dependencies=p||[],c&&(this.routineBody=c)}addDependency(u){u&&this.dependencies.push(u)}},n.TopologicalSortGlslRoutines=class{static returnOrderedNodes(u){if(!u||u.length===0)return[];if(u.length===1)return u;const c=new Set,p=new Set,s=new Array;return this.createOrderedNodes(u,c,p,s),s}static createOrderedNodes(u,c,p,s){for(let h=0;h<u.length;++h)this.dfsTraverse(u[h],c,p,s)}static dfsTraverse(u,c,p,s){if(!u||p.has(u.name))return;if(c.has(u.name))throw new Error("Cyclic dependency detected. 
Can't topologically sort routines needed for shader.");c.add(u.name);const h=u.dependencies;if(h&&h.length>0)for(let f=0;f<h.length;++f)this.dfsTraverse(h[f],c,p,s);s.push(u),p.add(u.name),c.delete(u.name)}}},7341:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.EncodingGlslLib=void 0;const u=a(8520);class c extends u.GlslLib{constructor(s){super(s)}getFunctions(){return Object.assign(Object.assign({},this.encodeFloat32()),this.decodeFloat32())}getCustomTypes(){return{}}encodeFloat32(){return{encode:new u.GlslLibRoutine(`highp vec4 encode(highp float f) { - return vec4(f, 0.0, 0.0, 0.0); - } - `)}}decodeFloat32(){return{decode:new u.GlslLibRoutine(`highp float decode(highp vec4 rgba) { - return rgba.r; - } - `)}}encodeUint8(){const s=c.isLittleEndian()?"rgba.rgba=rgba.abgr;":"";return{encode:new u.GlslLibRoutine(` - highp vec4 encode(highp float f) { - highp float F = abs(f); - highp float Sign = step(0.0,-f); - highp float Exponent = floor(log2(F)); - highp float Mantissa = (exp2(- Exponent) * F); - Exponent = floor(log2(F) + 127.0) + floor(log2(Mantissa)); - highp vec4 rgba; - rgba[0] = 128.0 * Sign + floor(Exponent*exp2(-1.0)); - rgba[1] = 128.0 * mod(Exponent,2.0) + mod(floor(Mantissa*128.0),128.0); - rgba[2] = floor(mod(floor(Mantissa*exp2(23.0 -8.0)),exp2(8.0))); - rgba[3] = floor(exp2(23.0)*mod(Mantissa,exp2(-15.0))); - ${s} - rgba = rgba / 255.0; // values need to be normalized to [0,1] - return rgba; - } - `)}}decodeUint8(){const s=c.isLittleEndian()?"rgba.rgba=rgba.abgr;":"";return{decode:new u.GlslLibRoutine(` - highp float decode(highp vec4 rgba) { - rgba = rgba * 255.0; // values need to be de-normalized from [0,1] to [0,255] - ${s} - highp float Sign = 1.0 - step(128.0,rgba[0])*2.0; - highp float Exponent = 2.0 * mod(rgba[0],128.0) + step(128.0,rgba[1]) - 127.0; - highp float Mantissa = mod(rgba[1],128.0)*65536.0 + rgba[2]*256.0 +rgba[3] + float(0x800000); - highp float Result = Sign * exp2(Exponent) * (Mantissa * exp2(-23.0 )); - return Result; - } - `)}}static isLittleEndian(){const s=new ArrayBuffer(4),h=new Uint32Array(s),f=new Uint8Array(s);if(h[0]=3735928559,f[0]===239)return!0;if(f[0]===222)return!1;throw new Error("unknown endianness")}}n.EncodingGlslLib=c},9894:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.FragColorGlslLib=void 0;const u=a(8520),c=a(5060);class p extends u.GlslLib{constructor(h){super(h)}getFunctions(){return Object.assign(Object.assign({},this.setFragColor()),this.getColorAsFloat())}getCustomTypes(){return{}}setFragColor(){const h=(0,c.getGlsl)(this.context.glContext.version);return{setFragColor:new u.GlslLibRoutine(` - void setFragColor(float value) { - ${h.output} = encode(value); - } - `,["encoding.encode"])}}getColorAsFloat(){return{getColorAsFloat:new u.GlslLibRoutine(` - float getColorAsFloat(vec4 color) { - return decode(color); - } - `,["encoding.decode"])}}}n.FragColorGlslLib=p},2848:(y,n)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.replaceInlines=void 0;const a=/@inline[\s\n\r]+(\w+)[\s\n\r]+([0-9a-zA-Z_]+)\s*\(([^)]*)\)\s*{(([^}]|[\n\r])*)}/gm;n.replaceInlines=function(u){const c={};let p;for(;(p=a.exec(u))!==null;){const s=p[3].split(",").map(h=>{const f=h.trim().split(" ");return f&&f.length===2?{type:f[0],name:f[1]}:null}).filter(h=>h!==null);c[p[2]]={params:s,body:p[4]}}for(const s in c){const h="(\\w+)?\\s+([_0-9a-zA-Z]+)\\s+=\\s+__FUNC__\\((.*)\\)\\s*;".replace("__FUNC__",s),f=new RegExp(h,"gm");for(;(p=f.exec(u))!==null;){const l=p[1],o=p[2],t=p[3].split(","),e=l?`${l} ${o};`:"";let 
r=c[s].body,i="";c[s].params.forEach((g,m)=>{g&&(i+=`${g.type} ${g.name} = ${t[m]}; -`)}),r=`${i} - ${r}`,r=r.replace("return",`${o} = `);const d=` - ${e} - { - ${r} - } - `;u=u.replace(p[0],d)}}return u.replace(a,"")}},8879:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.GlslPreprocessor=void 0;const u=a(8520),c=a(2848),p=a(5483),s=a(5060);n.GlslPreprocessor=class{constructor(h,f,l,o){this.libs={},this.glslLibRoutineDependencyGraph={},this.context=new u.GlslContext(h,f,l,o),Object.keys(p.glslRegistry).forEach(e=>{const r=new p.glslRegistry[e](this.context);this.libs[e]=r});const t=this.glslLibRoutineDependencyGraph;for(const e in this.libs){const r=this.libs[e].getFunctions();for(const i in r){const d=e+"."+i;let g;t[d]?(g=t[d],g.routineBody=r[i].routineBody):(g=new u.GlslLibRoutineNode(d,r[i].routineBody),t[d]=g);const m=r[i].dependencies;if(m)for(let b=0;b<m.length;++b)if(t[m[b]])g.addDependency(t[m[b]]);else{const _=new u.GlslLibRoutineNode(m[b]);t[m[b]]=_,g.addDependency(_)}}}}preprocess(){const h=this.context.programInfo;let f=h.shaderSource;return this.context.programInfo.hasMain||(f=`${f} - ${(0,s.getDefaultFragShaderMain)(this.context.glContext.version,this.context.outputTextureLayout.shape.length)}`),f=(0,c.replaceInlines)(f),`${(0,s.getFragShaderPreamble)(this.context.glContext.version)} - ${this.getUniforms(h.inputNames,h.variables)} - ${this.getImports(f)} - ${f}`}getImports(h){const f=this.selectGlslLibRoutinesToBeIncluded(h);if(f.length===0)return"";let l="";for(let o=0;o<f.length;++o){if(!f[o].routineBody)throw new Error(`Missing body for the Glsl Library routine: ${f[o].name}`);l+=f[o].routineBody+` -`}return l}selectGlslLibRoutinesToBeIncluded(h){const f=[];return Object.keys(this.glslLibRoutineDependencyGraph).forEach(l=>{const o=l.split(".")[1];h.indexOf(o)!==-1&&f.push(this.glslLibRoutineDependencyGraph[l])}),u.TopologicalSortGlslRoutines.returnOrderedNodes(f)}getUniforms(h,f){const l=[];if(h)for(const o of h)l.push(`uniform sampler2D ${o};`);if(f)for(const o of f)l.push(`uniform ${o.type} ${o.name}${o.arrayLength?`[${o.arrayLength}]`:""};`);return l.join(` -`)}}},5483:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.glslRegistry=void 0;const u=a(5107),c=a(7341),p=a(9894),s=a(2655),h=a(3891);n.glslRegistry={encoding:c.EncodingGlslLib,fragcolor:p.FragColorGlslLib,vec:h.VecGlslLib,shapeUtils:s.ShapeUtilsGlslLib,coordinates:u.CoordsGlslLib}},2655:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.ShapeUtilsGlslLib=void 0;const u=a(8520);class c extends u.GlslLib{constructor(s){super(s)}getFunctions(){return Object.assign(Object.assign(Object.assign(Object.assign(Object.assign({},this.bcastIndex()),this.bcastMatmulIndex()),this.offsetToIndices()),this.indicesToOffset()),this.incrementIndices())}getCustomTypes(){return{}}bcastIndex(){const s=this.context.outputTextureLayout.shape.length,h={};return this.context.programInfo.inputNames.forEach((f,l)=>{const o=this.context.inputTextureLayouts[l].unpackedShape;if(o.length<=s){const t=o.length,e=s-t,r=`bcastIndices_${f}`;let i="";for(let g=0;g<t;++g)i+=` - realIndices[${g}] = int( mod(float(bcastedIndices[${e+g}]), ${o[g]}.0) ); - `;const d=` - void ${r} (int bcastedIndices[${s}], out int realIndices[${t}]) { - ${i} - } - `;h[r]=new u.GlslLibRoutine(d)}}),h}bcastMatmulIndex(){const s=this.context.outputTextureLayout.shape.length,h={};return this.context.programInfo.inputNames.forEach((f,l)=>{const o=this.context.inputTextureLayouts[l].shape;if(!(o.length<2||o.length>s)){const 
t=o.length,e=s-t,r=`bcastMatmulIndices_${f}`;let i="";for(let g=0;g<t-2;++g)i+=` - realIndices[${g}] = int( mod(float(bcastedIndices[${e+g}]), ${o[g]}.0) ); - `;const d=` - void ${r}(int bcastedIndices[${s}], out int realIndices[${t}]) { - ${i} - realIndices[${t-1}] = bcastedIndices[${s-1}]; - realIndices[${t-2}] = bcastedIndices[${s-2}]; - } - `;h[r]=new u.GlslLibRoutine(d)}}),h}indicesToOffset(){const s={};return this.context.programInfo.inputNames.forEach((h,f)=>{const l=this.context.inputTextureLayouts[f].shape,o=this.context.inputTextureLayouts[f].strides,t=l.length;let e=`indicesToOffset_${h}`;s[e]=new u.GlslLibRoutine(c.indexToOffsetSingle(e,t,o)),e=`indicesToOffset_${h}_T`,s[e]=new u.GlslLibRoutine(c.indexToOffsetSingle(e,t,o.slice().reverse()))}),s}static indexToOffsetSingle(s,h,f){let l="";for(let o=h-1;o>=0;--o)l+=` - offset += indices[${o}] * ${f[o]}; - `;return` - int ${s}(int indices[${h}]) { - int offset = 0; - ${l} - return offset; - } - `}offsetToIndices(){const s={};return this.context.programInfo.inputNames.forEach((h,f)=>{const l=this.context.inputTextureLayouts[f].shape,o=this.context.inputTextureLayouts[f].strides,t=l.length;let e=`offsetToIndices_${h}`;s[e]=new u.GlslLibRoutine(c.offsetToIndicesSingle(e,t,o)),e=`offsetToIndices_${h}_T`,s[e]=new u.GlslLibRoutine(c.offsetToIndicesSingle(e,t,o.slice().reverse()))}),s}static offsetToIndicesSingle(s,h,f){const l=[];for(let o=0;o<h-1;++o)l.push(` - indices[${o}] = offset / ${f[o]};`),l.push(` - offset -= indices[${o}] * ${f[o]};`);return l.push(` - indices[${h-1}] = offset;`),` - void ${s}(int offset, out int indices[${h}]) { - ${l.join("")} - } - `}incrementIndices(){const s={};return this.context.programInfo.inputNames.forEach((h,f)=>{const l=this.context.inputTextureLayouts[f].shape,o=l.length,t=`incrementIndices_${h}`;let e="";for(let i=0;i<o;++i)e+=` - shape[${i}] = ${l[i]};`;const r=` - void ${t}(int axis, out int indices[${o}]) { - int shape[${o}]; - ${e}; - for(int i = ${o} -1 ; i >= 0; --i) { - if(i > axis) continue; - indices[i] += 1; - if(indices[i] < shape[i]) { - break; - } - indices[i] = 0; - } - } - `;s[t]=new u.GlslLibRoutine(r)}),s}}n.ShapeUtilsGlslLib=c},5060:(y,n)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.getDefaultFragShaderMain=n.getFragShaderPreamble=n.getVertexShaderSource=n.getGlsl=void 0;const a={version:"",attribute:"attribute",varyingVertex:"varying",varyingFrag:"varying",texture2D:"texture2D",output:"gl_FragColor",outputDeclaration:""},u={version:"#version 300 es",attribute:"in",varyingVertex:"out",varyingFrag:"in",texture2D:"texture",output:"outputColor",outputDeclaration:"out vec4 outputColor;"};function c(p){return p===1?a:u}n.getGlsl=c,n.getVertexShaderSource=function(p){const s=c(p);return`${s.version} - precision highp float; - ${s.attribute} vec3 position; - ${s.attribute} vec2 textureCoord; - - ${s.varyingVertex} vec2 TexCoords; - - void main() - { - gl_Position = vec4(position, 1.0); - TexCoords = textureCoord; - }`},n.getFragShaderPreamble=function(p){const s=c(p);return`${s.version} - precision highp float; - precision highp int; - precision highp sampler2D; - ${s.varyingFrag} vec2 TexCoords; - ${s.outputDeclaration} - const vec2 halfCR = vec2(0.5, 0.5); - - // Custom vector types to handle higher dimenalities. 
- struct ivec5 - { - int x; - int y; - int z; - int w; - int u; - }; - - struct ivec6 - { - int x; - int y; - int z; - int w; - int u; - int v; - }; - - int imod(int x, int y) { - return x - y * (x / y); - } - - `},n.getDefaultFragShaderMain=function(p,s){return` - void main() { - int indices[${s}]; - toVec(TexCoords, indices); - vec4 result = vec4(process(indices)); - ${c(p).output} = result; - } - `}},3891:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.VecGlslLib=void 0;const u=a(8520);class c extends u.GlslLib{constructor(s){super(s)}getCustomTypes(){return{}}getFunctions(){return Object.assign(Object.assign(Object.assign(Object.assign({},this.binaryVecFunctions()),this.copyVec()),this.setVecItem()),this.getVecItem())}binaryVecFunctions(){const s=this.context.outputTextureLayout.shape.length,h={add:"+=",sub:"-=",mul:"*=",div:"/="},f={};for(const l in h){const o=`${l}Vec`;let t="";for(let r=0;r<s;++r)t+=` - dest[${r}] ${h[l]} src[${r}]; - `;const e=` - void ${o}(int src[${s}], out int dest[${s}]) { - ${t} - } - `;f[o]=new u.GlslLibRoutine(e)}return f}copyVec(){const s=this.context.outputTextureLayout.shape.length;let h="";for(let l=0;l<s;++l)h+=` - dest[${l}] = src[${l}]; - `;const f=` - void copyVec(int src[${s}], out int dest[${s}]) { - ${h} - } - `;return{copyVec:new u.GlslLibRoutine(f)}}setVecItem(){const s=this.context.outputTextureLayout.shape.length;let h=` - if(index < 0) - index =${s} + index; - if (index == 0) - m[0] = value; - `;for(let l=1;l<s-1;++l)h+=` - else if (index == ${l}) - m[${l}] = value; - `;h+=` - else - m[${s-1}] = value; - `;const f=` - void setVecItem(out int m[${s}], int index, int value) { - ${h} - } - `;return{setVecItem:new u.GlslLibRoutine(f)}}getVecItem(){const s=this.context.outputTextureLayout.shape.length;let h=` - if(index < 0) - index = ${s} + index; - if (index == 0) - return m[0]; - `;for(let l=1;l<s-1;++l)h+=` - else if (index == ${l}) - return m[${l}]; - `;h+=` - else - return m[${s-1}]; - `;const f=` - int getVecItem(int m[${s}], int index) { - ${h} - } - `;return{getVecItem:new u.GlslLibRoutine(f)}}}n.VecGlslLib=c},8316:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.WebGLInferenceHandler=void 0;const u=a(6231),c=a(9162),p=a(2517),s=a(2403),h=a(7019),f=a(8710),l=a(5611),o=a(4057),t=a(2039);n.WebGLInferenceHandler=class{constructor(e){this.session=e,this.packedTextureDataCache=new Map,this.unpackedTextureDataCache=new Map}calculateTextureWidthAndHeight(e,r){return(0,o.calculateTextureWidthAndHeight)(this.session.layoutStrategy,e,r)}executeProgram(e,r){if(r.length<e.inputNames.length)throw new Error(`Input size mustn't be less than ${e.inputNames.length}.`);if(e.inputNames.length!==e.inputTypes.length)throw new Error("input names size does not match input types");const i=[];for(let v=0;v<e.inputNames.length;++v)i[v]=this.getOrCreateTextureData(r[v],e.inputTypes[v]);const d=((v,w)=>{const S=w.map(O=>`${O.unpackedShape.join(",")};${O.width}x${O.height}`).join("_");let A=v.name;return v.cacheHint&&(A+="["+v.cacheHint+"]"),A+=":"+S,A})(e,i);let g=this.session.programManager.getArtifact(d);const m=g?g.programInfo:typeof e.get=="function"?e.get():e,b=(0,o.createTextureLayoutFromTextureType)(this.session.layoutStrategy,m.output.dims,m.output.textureType),_=this.createTextureData(b,m.output.type);return g||(g=this.session.programManager.build(m,i,_),this.session.programManager.setArtifact(d,g)),this.runProgram(g,i,_),_}run(e,r){return this.executeProgram(e,r).tensor}runProgram(e,r,i){for(let 
d=0;d<r.length;++d)if(!!r[d].isPacked!=(e.programInfo.inputTypes[d]===t.TextureType.packed))throw new Error(`input[${d}] property packed inconsistent`);if(!!i.isPacked!=(e.programInfo.output.textureType===t.TextureType.packed))throw new Error("output property packed inconsistent");this.session.programManager.run(e,r,i)}getOrCreateTextureData(e,r){let i=this.getTextureData(e.dataId,r===t.TextureType.packed);if(!i&&(i=this.getTextureData(e.dataId,r!==t.TextureType.packed),i))return r===t.TextureType.packed?this.pack(i):this.unpack(i);if(!i){const d=(0,o.createTextureLayoutFromTextureType)(this.session.layoutStrategy,e.dims,r);if(r===t.TextureType.packedLastDimension){const b=e.dims;if(b.length===4){const _=[b[0],Math.ceil(b[1]*b[2]*b[3]/4)],v=(0,o.createTextureLayoutFromTextureType)(this.session.layoutStrategy,_,r);let w=e.numberData;if(b[1]*b[2]*b[3]%4!=0){const S=b[0],A=b[1]*b[2]*b[3],O=Math.ceil(A*1/4)*4;w=new Float32Array(S*O);for(let x=0;x<S;++x){const I=x*A,N=x*O+x%1*A;w.set(e.numberData.subarray(I,I+A),N)}}return this.createTextureData(v,e.type,w,e,1)}}if(r===t.TextureType.packed){const g=(0,o.createTextureLayoutFromShape)(this.session.layoutStrategy,e.dims,1,[],{reverseWH:!0}),m=this.createTextureData(g,e.type,e.numberData,e,1);i=this.pack(m)}else i=this.createTextureData(d,e.type,e.numberData,e,1)}return i}createTextureDataFromLayoutBindTensor(e,r,i,d){return this.createTextureData(e,r,i,d,1)}createTextureData(e,r,i,d,g){u.Logger.verbose("InferenceHandler",`Creating TextureData: layout:[${JSON.stringify(e)}]`);const m=this.session.textureManager.createTextureFromLayout(r,e,i,g);return this.createTextureDataFromTexture(e,r,m,d)}reshapeUnpacked(e,r){const i=this.getOrCreateTextureData(e,t.TextureType.unpacked),d={channels:i.channels,height:i.height,width:i.width,shape:r.length!==0?r:[1],strides:p.ShapeUtil.computeStrides(r),unpackedShape:r};return this.createTextureDataFromTexture(d,e.type,i.texture).tensor}reshapePacked(e,r){const i=this.getOrCreateTextureData(e,t.TextureType.packed);if((0,h.isReshapeCheap)(e.dims,r)){const _={channels:i.channels,height:i.height,width:i.width,shape:r.length!==0?r:[1],strides:p.ShapeUtil.computeStrides(r),unpackedShape:r,isPacked:!0};return this.createTextureDataFromTexture(_,e.type,i.texture).tensor}const d=(0,h.processDims3D)(e.dims),g=(0,h.processDims3D)(r),m=this.reshapePacked(e,d),b=this.run((0,h.createPackedReshape3DProgramInfoLoader)(this,m,g),[m]);return this.reshapePacked(b,r)}cast(e,r){const i=this.getOrCreateTextureData(e,t.TextureType.unpacked);return this.createTextureDataFromTexture(i,r,i.texture).tensor}createTextureDataFromTexture(e,r,i,d,g){const m=Object.assign(Object.assign({},e),{tensor:d||new c.Tensor(e.unpackedShape,r,b=>this.readTexture(m),async b=>this.readTextureAsync(m),void 0,g),texture:i});return this.setTextureData(m.tensor.dataId,m,e.isPacked),m}getTextureData(e,r=!1){return this.session.isInitializer(e)?this.session.getTextureData(e,r):r?this.packedTextureDataCache.get(e):this.unpackedTextureDataCache.get(e)}setTextureData(e,r,i=!1){this.session.isInitializer(e)?this.session.setTextureData(e,r,i):(i?this.packedTextureDataCache:this.unpackedTextureDataCache).set(e,r)}isTextureLayoutCached(e,r=!1){return!!this.getTextureData(e.dataId,r)}dispose(){this.session.textureManager.clearActiveTextures(),this.packedTextureDataCache.forEach(e=>this.session.textureManager.releaseTexture(e)),this.packedTextureDataCache=new 
Map,this.unpackedTextureDataCache.forEach(e=>this.session.textureManager.releaseTexture(e)),this.unpackedTextureDataCache=new Map}readTexture(e){return e.isPacked?this.readTexture(this.unpack(e)):this.session.backend.glContext.isFloat32DownloadSupported?this.session.textureManager.readTexture(e,e.tensor.type,e.channels):this.session.textureManager.readUint8TextureAsFloat((0,f.encodeAsUint8)(this,e))}async readTextureAsync(e){return e.isPacked?this.readTextureAsync(this.unpack(e)):this.session.backend.glContext.isFloat32DownloadSupported?this.session.textureManager.readTextureAsync(e,e.tensor.type,e.channels):this.session.textureManager.readUint8TextureAsFloat((0,f.encodeAsUint8)(this,e))}pack(e){return this.executeProgram((0,s.createPackProgramInfoLoader)(this,e.tensor),[e.tensor])}unpack(e){return this.executeProgram((0,l.createUnpackProgramInfoLoader)(this,e.tensor),[e.tensor])}}},1640:function(y,n,a){var u=this&&this.__createBinding||(Object.create?function(X,J,ee,ue){ue===void 0&&(ue=ee);var Ae=Object.getOwnPropertyDescriptor(J,ee);Ae&&!("get"in Ae?!J.__esModule:Ae.writable||Ae.configurable)||(Ae={enumerable:!0,get:function(){return J[ee]}}),Object.defineProperty(X,ue,Ae)}:function(X,J,ee,ue){ue===void 0&&(ue=ee),X[ue]=J[ee]}),c=this&&this.__setModuleDefault||(Object.create?function(X,J){Object.defineProperty(X,"default",{enumerable:!0,value:J})}:function(X,J){X.default=J}),p=this&&this.__importStar||function(X){if(X&&X.__esModule)return X;var J={};if(X!=null)for(var ee in X)ee!=="default"&&Object.prototype.hasOwnProperty.call(X,ee)&&u(J,X,ee);return c(J,X),J};Object.defineProperty(n,"__esModule",{value:!0}),n.WEBGL_OP_RESOLVE_RULES=void 0;const s=a(2898),h=p(a(7839)),f=a(4196),l=a(2069),o=a(8138),t=a(9663),e=a(5193),r=a(7992),i=a(1253),d=a(4776),g=a(6572),m=a(3346),b=a(5623),_=a(2870),v=a(2143),w=a(4939),S=a(718),A=a(2268),O=a(8117),x=a(2278),I=a(5524),N=a(5975),B=a(3933),L=a(6558),F=a(5723),H=a(3738),D=p(a(4909)),j=a(8428),Z=a(9793);n.WEBGL_OP_RESOLVE_RULES=[["Abs","","6+",D.abs],["Acos","","7+",D.acos],["Add","","7+",h.add],["And","","7+",h.and],["Asin","","7+",D.asin],["Atan","","7+",D.atan],["AveragePool","","7+",v.averagePool,v.parseAveragePoolAttributes],["BatchNormalization","","7+",s.batchNormalization,s.parseBatchNormalizationAttributes],["Cast","","6+",f.cast,f.parseCastAttributes],["Ceil","","6+",D.ceil],["Clip","","6-10",D.clip,D.parseClipAttributes],["Clip","","11+",D.clipV11],["Concat","","4+",l.concat,l.parseConcatAttributes],["Conv","","1+",o.conv,o.parseConvAttributes],["ConvTranspose","","1+",t.convTranspose,t.parseConvTransposeAttributes],["Cos","","7+",D.cos],["Div","","7+",h.div],["Dropout","","7+",D.identity],["DepthToSpace","","1+",e.depthToSpace,e.parseDepthToSpaceAttributes],["Equal","","7+",h.equal],["Elu","","6+",D.elu,D.parseEluAttributes],["Exp","","6+",D.exp],["Flatten","","1+",r.flatten,r.parseFlattenAttributes],["Floor","","6+",D.floor],["FusedConv","com.microsoft","1+",o.conv,o.parseConvAttributes],["Gather","","1+",i.gather,i.parseGatherAttributes],["Gemm","","7-10",d.gemm,d.parseGemmAttributesV7],["Gemm","","11+",d.gemm,d.parseGemmAttributesV11],["GlobalAveragePool","","1+",v.globalAveragePool,v.parseGlobalAveragePoolAttributes],["GlobalMaxPool","","1+",v.globalMaxPool],["Greater","","7+",h.greater],["Identity","","1+",D.identity],["ImageScaler","","1+",g.imageScaler,g.parseImageScalerAttributes],["InstanceNormalization","","6+",m.instanceNormalization,m.parseInstanceNormalizationAttributes],["LeakyRelu","","6+",D.leakyRelu,D.parseLeakyReluAttributes]
,["Less","","7+",h.less],["Log","","6+",D.log],["MatMul","","1+",b.matMul,b.parseMatMulAttributes],["MaxPool","","1+",v.maxPool,v.parseMaxPoolAttributes],["Mul","","7+",h.mul],["Neg","","6+",D.neg],["Not","","1+",D.not],["Or","","7+",h.or],["Pad","","2-10",_.padV2,_.parsePadAttributesV2],["Pad","","11+",_.padV11,_.parsePadAttributesV11],["Pow","","7+",h.pow],["PRelu","","7+",h.pRelu],["ReduceLogSum","","1+",w.reduceLogSum,w.parseReduceAttributes],["ReduceMax","","1+",w.reduceMax,w.parseReduceAttributes],["ReduceMean","","1+",w.reduceMean,w.parseReduceAttributes],["ReduceMin","","1+",w.reduceMin,w.parseReduceAttributes],["ReduceProd","","1+",w.reduceProd,w.parseReduceAttributes],["ReduceSum","","1-12",w.reduceSum,w.parseReduceAttributes],["ReduceSumSquare","","1+",w.reduceLogSumSquare,w.parseReduceAttributes],["Relu","","6+",D.relu],["Reshape","","5+",S.reshape],["Resize","","10",A.resize,A.parseResizeAttributesV10],["Resize","","11+",A.resize,A.parseResizeAttributesV11],["Shape","","1+",O.shape],["Sigmoid","","6+",D.sigmoid],["Sin","","7+",D.sin],["Slice","","10+",x.sliceV10],["Slice","","1-9",x.slice,x.parseSliceAttributes],["Softmax","","1-12",I.softmax,I.parseSoftmaxAttributes],["Softmax","","13+",I.softmaxV13,I.parseSoftmaxAttributesV13],["Split","","2-12",N.split,N.parseSplitAttributes],["Sqrt","","6+",D.sqrt],["Squeeze","","1-12",B.squeeze,B.parseSqueezeAttributes],["Squeeze","","13+",B.squeezeV13],["Sub","","7+",h.sub],["Sum","","6+",L.sum],["Tan","","7+",D.tan],["Tanh","","6+",D.tanh],["Tile","","6+",F.tile],["Transpose","","1+",H.transpose,H.parseTransposeAttributes],["Upsample","","7-8",Z.upsample,Z.parseUpsampleAttributesV7],["Upsample","","9",Z.upsample,Z.parseUpsampleAttributesV9],["Unsqueeze","","1-12",j.unsqueeze,j.parseUnsqueezeAttributes],["Unsqueeze","","13+",j.unsqueezeV13],["Xor","","7+",h.xor]]},2898:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseBatchNormalizationAttributes=n.batchNormalization=void 0;const u=a(246),c=a(5060),p=a(2039),s={name:"BatchNormalization",inputNames:["A","Scale","B","Mean","Variance"],inputTypes:[p.TextureType.unpacked,p.TextureType.unpacked,p.TextureType.unpacked,p.TextureType.unpacked,p.TextureType.unpacked]};n.batchNormalization=(l,o,t)=>(f(o),[l.run(Object.assign(Object.assign({},s),{cacheHint:t.cacheKey,get:()=>h(l,o,t)}),o)]),n.parseBatchNormalizationAttributes=l=>{const o=l.attributes.getFloat("epsilon",1e-5),t=l.attributes.getFloat("momentum",.9),e=l.attributes.getInt("spatial",1);return(0,u.createAttributeWithCacheKey)({epsilon:o,momentum:t,spatial:e})};const h=(l,o,t)=>{const e=(0,c.getGlsl)(l.session.backend.glContext.version),r=o[0].dims.length,[i,d]=l.calculateTextureWidthAndHeight(o[1].dims,p.TextureType.unpacked),g=` - float process(int[${r}] indices) { - vec2 position = offsetToCoords(indices[1], ${i}, ${d}); - float scale = getColorAsFloat(${e.texture2D}(Scale, position)); - float mean = getColorAsFloat(${e.texture2D}(Mean, position)); - float variance = getColorAsFloat(${e.texture2D}(Variance, position)); - float b = getColorAsFloat(${e.texture2D}(B, position)); - - return scale * ( (_A(indices) - mean) / sqrt(variance + float(${t.epsilon})) ) + b; - }`;return Object.assign(Object.assign({},s),{output:{dims:o[0].dims,type:o[0].type,textureType:p.TextureType.unpacked},shaderSource:g})},f=l=>{if(!l||l.length!==5)throw new Error("BatchNormalization requires 5 inputs.");const o=l[0],t=l[1],e=l[2],r=l[3],i=l[4];if(o.dims.length<3||t.dims.length!==1||e.dims.length!==1||r.dims.length!==1||i.dims.length!==1)throw 
new Error("invalid input shape.");if(t.dims[0]!==o.dims[1]||e.dims[0]!==o.dims[1]||r.dims[0]!==o.dims[1]||i.dims[0]!==o.dims[1])throw new Error("invalid input shape.");if(o.type!=="float32"&&o.type!=="float64"||t.type!=="float32"&&t.type!=="float64"||e.type!=="float32"&&e.type!=="float64"||r.type!=="float32"&&r.type!=="float64"||i.type!=="float32"&&i.type!=="float64")throw new Error("invalid input tensor types.")}},7839:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.xor=n.sub=n.pRelu=n.pow=n.or=n.mul=n.less=n.greater=n.equal=n.div=n.and=n.add=n.glslPRelu=n.glslPow=n.glslXor=n.glslOr=n.glslAnd=n.glslLess=n.glslGreater=n.glslEqual=n.glslSub=n.glslMul=n.glslDiv=n.glslAdd=void 0;const u=a(2517),c=a(8520),p=a(5060),s=a(2039);function h(){const w="add_";return{body:` - float ${w}(float a, float b) { - return a + b; - } - vec4 ${w}(vec4 v1, vec4 v2) { - return v1 + v2; - } - `,name:w,type:c.FunctionType.ValueBased}}function f(){const w="div_";return{body:` - float ${w}(float a, float b) { - return a / b; - } - vec4 ${w}(vec4 v1, vec4 v2) { - return v1 / v2; - } - `,name:w,type:c.FunctionType.ValueBased}}function l(){const w="mul_";return{body:` - float ${w}(float a, float b) { - return a * b; - } - vec4 ${w}(vec4 v1, vec4 v2) { - return v1 * v2; - } - `,name:w,type:c.FunctionType.ValueBased}}function o(){const w="sub_";return{body:` - float ${w}(float a, float b) { - return a - b; - } - vec4 ${w}(vec4 v1, vec4 v2) { - return v1 - v2; - } - `,name:w,type:c.FunctionType.ValueBased}}function t(){const w="equal_";return{body:` - float ${w}(float a, float b) { - return float(a == b); - } - vec4 ${w}(vec4 v1, vec4 v2) { - return vec4(equal(v1, v2)); - } - `,name:w,type:c.FunctionType.ValueBased}}function e(){const w="greater_";return{body:` - float ${w}(float a, float b) { - return float(a > b); - } - vec4 ${w}(vec4 v1, vec4 v2) { - return vec4( v1.r > v2.r , - v1.g > v2.g, - v1.b > v2.b, - v1.a > v2.a ); - } - `,name:w,type:c.FunctionType.ValueBased}}function r(){const w="less_";return{body:` - float ${w}(float a, float b) { - return float(a < b); - } - vec4 ${w}(vec4 v1, vec4 v2) { - return vec4( v1.r < v2.r , - v1.g < v2.g, - v1.b < v2.b, - v1.a < v2.a ); - } - `,name:w,type:c.FunctionType.ValueBased}}function i(){const w="and_";return{body:` - float ${w}(float a, float b) { - return float( bool(a) && bool(b) ); - } - vec4 ${w}(vec4 v1, vec4 v2) { - bvec4 b1 = bvec4(v1); - bvec4 b2 = bvec4(v2); - return vec4( b1.r && b2.r , - b1.g && b2.g, - b1.b && b2.b, - b1.a && b2.a ); - } - `,name:w,type:c.FunctionType.ValueBased}}function d(){const w="or_";return{body:` - float ${w}(float a, float b) { - return float( bool(a) || bool(b) ); - } - vec4 ${w}(vec4 v1, vec4 v2) { - bvec4 b1 = bvec4(v1); - bvec4 b2 = bvec4(v2); - return vec4( b1.r || b2.r , - b1.g || b2.g, - b1.b || b2.b, - b1.a || b2.a ); - } - `,name:w,type:c.FunctionType.ValueBased}}function g(){const w="xor_";return{body:` - float ${w}(float a, float b) { - return float( bool(a) ^^ bool(b) ); - } - vec4 ${w}(vec4 v1, vec4 v2) { - bvec4 b1 = bvec4(v1); - bvec4 b2 = bvec4(v2); - return vec4( b1.r ^^ b2.r , - b1.g ^^ b2.g, - b1.b ^^ b2.b, - b1.a ^^ b2.a ); - } - `,name:w,type:c.FunctionType.ValueBased}}function m(){return function(w){const S=`${w}_`;return{body:` - float ${S}(float a, float b) { - return ${w}(a, b); - } - vec4 ${S}(vec4 v1, vec4 v2) { - return ${w}(v1, v2); - } - `,name:S,type:c.FunctionType.ValueBased}}("pow")}function b(){const w="prelu_";return{body:` - float ${w}(float a, float b) { - return a < 0.0 ? 
a * b: a; - } - vec4 ${w}(vec4 v1, vec4 v2) { - return vec4( - v1.r < 0.0 ? v1.r * v2.r: v1.r, - v1.g < 0.0 ? v1.g * v2.g: v1.g, - v1.b < 0.0 ? v1.b * v2.b: v1.b, - v1.a < 0.0 ? v1.a * v2.a: v1.a - ); - } - `,name:w,type:c.FunctionType.ValueBased}}n.glslAdd=h,n.glslDiv=f,n.glslMul=l,n.glslSub=o,n.glslEqual=t,n.glslGreater=e,n.glslLess=r,n.glslAnd=i,n.glslOr=d,n.glslXor=g,n.glslPow=m,n.glslPRelu=b;const _=(w,S,A,O=S[0].type,x)=>{const I=w.session.pack?s.TextureType.packed:s.TextureType.unpacked;return{name:A.name,inputNames:["A","B"],inputTypes:[I,I],cacheHint:x,get:()=>v(w,S,A,O)}},v=(w,S,A,O=S[0].type)=>{const x=w.session.pack?s.TextureType.packed:s.TextureType.unpacked,I=!u.ShapeUtil.areEqual(S[0].dims,S[1].dims);let N=S[0].dims;const B=w.session.pack;if(I){const H=u.BroadcastUtil.calcShape(S[0].dims,S[1].dims,!1);if(!H)throw new Error("Can't perform binary op on the given tensors");N=H;const D=N.length,j=S[0].dims.length!==0?S[0].dims.length:1,Z=S[1].dims.length!==0?S[1].dims.length:1,X=S[0].dims.length!==0?"bcastIndices_A(indices, aindices);":"aindices[0] = 0;",J=S[1].dims.length!==0?"bcastIndices_B(indices, bindices);":"bindices[0] = 0;",ee=(0,p.getGlsl)(w.session.backend.glContext.version),ue=B?` - ${A.body} - void main() { - vec4 a = getAAtOutCoords(); - vec4 b = getBAtOutCoords(); - vec4 result = ${A.name}(a, b); - ${ee.output} = result; - }`:` - ${A.body} - float process(int indices[${D}]) { - int aindices[${j}]; - int bindices[${Z}]; - ${X} - ${J} - return ${A.name}(_A(aindices), _B(bindices)); - }`;return{name:A.name,inputNames:["A","B"],inputTypes:[x,x],output:{dims:N,type:O,textureType:x},shaderSource:ue,hasMain:B}}const L=(0,p.getGlsl)(w.session.backend.glContext.version),F=` - ${A.body} - void main() { - vec4 v1 = ${L.texture2D}(A, TexCoords); - vec4 v2 = ${L.texture2D}(B, TexCoords); - vec4 result = ${A.name}(v1, v2); - ${L.output} = result; - } - `;return{name:A.name,inputNames:["A","B"],inputTypes:[x,x],output:{dims:S[0].dims,type:O,textureType:x},shaderSource:F,hasMain:!0}};n.add=(w,S)=>[w.run(_(w,S,h()),S)],n.and=(w,S)=>[w.run(_(w,S,i(),"bool"),S)],n.div=(w,S)=>[w.run(_(w,S,f()),S)],n.equal=(w,S)=>[w.run(_(w,S,t(),"bool"),S)],n.greater=(w,S)=>[w.run(_(w,S,e(),"bool"),S)],n.less=(w,S)=>[w.run(_(w,S,r(),"bool"),S)],n.mul=(w,S)=>[w.run(_(w,S,l()),S)],n.or=(w,S)=>[w.run(_(w,S,d(),"bool"),S)],n.pow=(w,S)=>[w.run(_(w,S,m()),S)],n.pRelu=(w,S)=>[w.run(_(w,S,b()),S)],n.sub=(w,S)=>[w.run(_(w,S,o()),S)],n.xor=(w,S)=>[w.run(_(w,S,g(),"bool"),S)]},4196:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseCastAttributes=n.cast=void 0;const u=a(2517);n.cast=(p,s,h)=>(c(s),[p.cast(s[0],h)]),n.parseCastAttributes=p=>u.ProtoUtil.tensorDataTypeFromProto(p.attributes.getInt("to"));const c=p=>{if(!p||p.length!==1)throw new Error("Cast requires 1 input.");if(p[0].type==="string")throw new Error("Invalid input type.")}},1163:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createPackedConcatProgramInfoLoader=void 0;const u=a(5060),c=a(2039),p=a(9390),s=a(2827);n.createPackedConcatProgramInfoLoader=(f,l,o)=>{const t=(e=l.length,r=o.cacheKey,{name:"Concat (packed)",inputNames:Array.from({length:e},(i,d)=>`X${d}`),inputTypes:Array(e).fill(c.TextureType.packed),cacheHint:r});var e,r;return Object.assign(Object.assign({},t),{get:()=>((i,d,g,m)=>{const b=g[0].dims.slice();if(m>=b.length||m<-1*b.length)throw new Error("axis specified for concat doesn't match input dimensionality");m<0&&(m=b.length+m);const _=b.slice(0);for(let X=1;X<g.length;X++){const 
J=g[X].dims.slice();for(let ee=0;ee<b.length;ee++)if(ee===m)_[m]+=J[ee];else if(b[ee]!==J[ee])throw new Error("non concat dimensions must match")}const v=_.length,w=(0,s.getChannels)("coords",v),S=(0,p.getCoordsDataType)(v),A=(0,s.unpackFromChannel)(),O=g.map(X=>X.dims),x=(0,p.getGlChannels)(v),I=new Array(O.length-1);I[0]=O[0][m];for(let X=1;X<I.length;X++)I[X]=I[X-1]+O[X][m];const N=x[m],B=x.slice(-2),L=x.join();let F=`if (${N} < ${I[0]}) { - return getChannel( - getX0(${L}), vec2(${B.join()})); - }`;for(let X=1;X<I.length;X++){const J=I[X-1];F+=` - if (${N} < ${I[X]} && ${N} >= ${I[X-1]}) { - return getChannel( - getX${X}(${h(x,N,J)}), - vec2(${h(B,N,J)})); - }`}const H=I.length,D=I[I.length-1];F+=` - return getChannel( - getX${H}(${h(x,N,D)}), - vec2(${h(B,N,D)}));`;const j=(0,u.getGlsl)(i.session.backend.glContext.version),Z=` - ${A} - float getValue(${x.map(X=>"int "+X)}) { - ${F} - } - - void main() { - ${S} coords = getOutputCoords(); - int lastDim = coords.${x[v-1]}; - coords.${x[v-1]} = coords.${x[v-2]}; - coords.${x[v-2]} = lastDim; - - vec4 result = vec4(getValue(${w}), 0., 0., 0.); - - ${w[v-1]} = ${w[v-1]} + 1; - if (${w[v-1]} < ${_[v-1]}) { - result.g = getValue(${w}); - } - - ${w[v-2]} = ${w[v-2]} + 1; - if (${w[v-2]} < ${_[v-2]}) { - result.a = getValue(${w}); - } - - ${w[v-1]} = ${w[v-1]} - 1; - if (${w[v-2]} < ${_[v-2]} && - ${w[v-1]} < ${_[v-1]}) { - result.b = getValue(${w}); - } - ${j.output} = result; - } - `;return Object.assign(Object.assign({},d),{output:{dims:_,type:g[0].type,textureType:c.TextureType.packed},shaderSource:Z,hasMain:!0})})(f,t,l,o.axis)})};const h=(f,l,o)=>{const t=f.indexOf(l);return f.map((e,r)=>r===t?`${e} - ${o}`:e).join()}},2069:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseConcatAttributes=n.concat=void 0;const u=a(246),c=a(2039),p=a(1163);n.concat=(e,r,i)=>(t(r),e.session.pack&&r[0].dims.length>1?[e.run((0,p.createPackedConcatProgramInfoLoader)(e,r,i),r)]:[e.run(s(e,r,i),r)]);const s=(e,r,i)=>{const d=(g=r.length,m=i.cacheKey,{name:"Concat",inputNames:Array.from({length:g},(b,_)=>`X${_}`),inputTypes:Array(g).fill(c.TextureType.unpacked),cacheHint:m});var g,m;return Object.assign(Object.assign({},d),{get:()=>((b,_,v,w)=>{const S=v[0].dims.slice();if(w>=S.length||w<-1*S.length)throw new Error("axis specified for concat doesn't match input dimensionality");w<0&&(w=S.length+w);const A=S.slice(0);for(let L=1;L<v.length;L++){const F=v[L].dims.slice();for(let H=0;H<S.length;H++)if(H===w)A[w]+=F[H];else if(S[H]!==F[H])throw new Error("non concat dimensions must match")}const O=A.length,x=new Array(v.length);let I=0;for(let L=0;L<x.length;++L)I+=v[L].dims[w],x[L]=I;let N="";N=v.length<5?h(x):f(x);const B=` - ${l(v.length,O)} - ${o(x)} - ${N} - float process(int indices[${O}]) { - int textureIndex = getTextureWhereDataResides (indices[${w}]); - - if(textureIndex != 0) { - indices[${w}] = indices[${w}] - int(getSizeInConcatAxisValueFromIndex(textureIndex-int(1))); - } - - return fetchDataFromCorrectTexture(textureIndex, indices); - }`;return Object.assign(Object.assign({},_),{output:{dims:A,type:v[0].type,textureType:c.TextureType.unpacked},shaderSource:B})})(0,d,r,i.axis)})},h=e=>`int getTextureWhereDataResides(int index) { - ${e.map((r,i)=>`if(index<${r}) {return ${i};} -`).join("")} - }`,f=e=>h(e),l=(e,r)=>{const i=[`float fetchDataFromCorrectTexture(int textureIndex, int indices[${r}]) {`];for(let d=0;d<e;++d)d===0?i.push(` if (textureIndex == ${d}) { return _X${d}(indices); }`):d===e-1?i.push(` else { return _X${d}(indices); 
}`):i.push(` else if (textureIndex == ${d}) { return _X${d}(indices); }`);return i.push(" }"),i.join(` -`)},o=e=>{const r=["int getSizeInConcatAxisValueFromIndex(int index) {"];for(let i=0;i<e.length;++i)i===0?r.push(` if (index == ${i}) { return ${e[i]}; }`):i===e.length-1?r.push(` else { return ${e[i]}; }`):r.push(` else if (index == ${i}) { return ${e[i]}; }`);return r.push(" }"),r.join(` -`)};n.parseConcatAttributes=e=>(0,u.createAttributeWithCacheKey)({axis:e.attributes.getInt("axis")});const t=e=>{if(!e||e.length<1)throw new Error("too few inputs");const r=e[0].type,i=e[0].dims.length;if(r==="string")throw new Error("string tensor is not supported yet");for(const d of e){if(d.type!==r)throw new Error("input tensors should be one type");if(d.dims.length!==i)throw new Error("input tensors should have the same shape")}}},4770:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createUnpackedGroupedConvProgramInfoLoader=void 0;const u=a(6231),c=a(5060),p=a(2039),s=a(8138),h=a(2823);n.createUnpackedGroupedConvProgramInfoLoader=(f,l,o)=>{const t=(e=l.length>2,r=o.cacheKey,{name:"GroupedConv",inputNames:e?["X","W","Bias"]:["X","W"],inputTypes:e?[p.TextureType.unpacked,p.TextureType.unpacked,p.TextureType.unpacked]:[p.TextureType.unpacked,p.TextureType.unpacked],cacheHint:r});var e,r;return Object.assign(Object.assign({},t),{get:()=>((i,d,g,m)=>{const b=d.length>2?"value += getBias(output_channel);":"",_=d[0].dims.slice(),v=d[1].dims.slice(),w=v[0]/m.group;u.Logger.verbose("GroupedConv",`autpPad:${m.autoPad}, dilations:${m.dilations}, group:${m.group}, kernelShape:${m.kernelShape}, pads:${m.pads}, strides:${m.strides}`);const S=(0,s.calculateOutputShape)(_,v,m.dilations,m.pads,m.strides),A=(0,c.getGlsl)(i.session.backend.glContext.version),{activationFunction:O,applyActivation:x}=(0,h.getActivationSnippet)(m),I=` - const ivec2 strides = ivec2(${m.strides[0]}, ${m.strides[1]}); - const ivec2 pads = ivec2(${m.pads[0]}, ${m.pads[1]}); - ${O} - void main() { - ivec4 coords = getOutputCoords(); - int batch = coords.x; - int output_channel = coords.y; - ivec2 xRCCorner = coords.zw * strides - pads; - int group_id = output_channel / ${w}; - - float value = 0.0; - for (int wInChannel = 0; wInChannel < ${v[1]}; wInChannel++) { - int input_channel = group_id * ${v[1]} + wInChannel; - for (int wHeight = 0; wHeight < ${v[2]}; wHeight++) { - int xHeight = xRCCorner.x + wHeight * ${m.dilations[0]}; - - if (xHeight < 0 || xHeight >= ${_[2]}) { - continue; - } - - for (int wWidth = 0; wWidth < ${v[3]}; wWidth++) { - int xWidth = xRCCorner.y + wWidth * ${m.dilations[1]}; - if (xWidth < 0 || xWidth >= ${_[3]}) { - continue; - } - - float xVal = getX(batch, input_channel, xWidth, xHeight); - float wVal = getW(output_channel, wInChannel, wWidth, wHeight); - value += xVal*wVal; - } - } - } - ${b} - ${x} - ${A.output} = vec4(value, .0, .0, .0); - } -`;return Object.assign(Object.assign({},g),{output:{dims:S,type:d[0].type,textureType:p.TextureType.unpacked},shaderSource:I,hasMain:!0})})(f,l,t,o)})}},1386:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.conv2DPacked=n.conv2DPackedPointwise=void 0;const u=a(8138),c=a(8555),p=a(708);n.conv2DPackedPointwise=(s,h,f)=>{const l=h[0].dims,o=h[1].dims,t=(0,u.calculateOutputShape)(l,o,f.dilations,f.pads,f.strides),e=s.reshapePacked(h[0],[l[1],l[2]*l[3]]),r=s.reshapePacked(h[1],[o[0],o[1]]),i=h.length>2?[r,e,h[2]]:[r,e],d=s.run((0,p.createPackedMatmulProgramInfoLoader)(s,i,f),i);return s.reshapePacked(d,t)},n.conv2DPacked=(s,h,f)=>{const 
l=h[0].dims,o=h[1].dims,t=(0,u.calculateOutputShape)(l,o,f.dilations,f.pads,f.strides),e=s.run((0,c.createPackedIm2ColProgramInfoLoader)(s,h[0],h[1],t,f),[h[0]]),r=s.reshapePacked(h[1],[o[0],o[1]*o[2]*o[3]]),i=h.length===3?[r,e,h[2]]:[r,e],d=s.run((0,p.createPackedMatmulProgramInfoLoader)(s,i,f),i);return s.reshapePacked(d,t)}},9663:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseConvTransposeAttributes=n.convTranspose=void 0;const u=a(246),c=a(5060),p=a(2039),s=a(2823),h=(r,i,d,g,m,b)=>(r-1)*i+d+(g-1)*m+1-b,f=(r,i,d,g,m)=>{const b=Math.floor(r/2);i==="SAME_UPPER"?(d[g]=b,d[m]=r-b):i==="SAME_LOWER"&&(d[g]=r-b,d[m]=b)};n.convTranspose=(r,i,d)=>(e(i,d),l(r,i,d));const l=(r,i,d)=>{const g=t(d,i);return[o(r,i,g)]},o=(r,i,d)=>r.run(((g,m,b)=>{const _=(v=m.length>2,w=b.cacheKey,{name:"ConvTranspose",inputNames:v?["X","W","B"]:["X","W"],inputTypes:v?[p.TextureType.unpacked,p.TextureType.unpacked,p.TextureType.unpacked]:[p.TextureType.unpacked,p.TextureType.unpacked],cacheHint:w});var v,w;return Object.assign(Object.assign({},_),{get:()=>((S,A,O,x)=>{const I=A.length>2?"getB(output_channel)":"0.0",N=A[0].dims,B=A[1].dims,L=B[1],F=B[0]/x.group,H=[A[0].dims[0],A[1].dims[1]*x.group,...x.outputShape],D=(0,c.getGlsl)(S.session.backend.glContext.version),{activationFunction:j,applyActivation:Z}=(0,s.getActivationSnippet)(x),X=` - const ivec2 strides = ivec2(${x.strides[0]}, ${x.strides[1]}); - const ivec2 pads = ivec2(${x.pads[0]}, ${x.pads[1]}); - ${j} - void main() { - ivec4 coords = getOutputCoords(); - int batch = coords.x; - int output_channel = coords.y; - - ivec2 loc = coords.zw + pads; - - int group_id = output_channel / ${L}; - int wOutChannel = output_channel - group_id * ${L}; - - float value = ${I}; - for (int inChannelOffset = 0; inChannelOffset < ${F}; inChannelOffset++) { - int input_channel = group_id * ${F} + inChannelOffset; - for (int wWOff = 0; wWOff < ${B[2]}; wWOff++) { - for (int wHOff = 0; wHOff < ${B[3]}; wHOff++) { - ivec2 wOff = ivec2(wWOff * ${x.dilations[0]}, wHOff * ${x.dilations[1]}); - ivec2 wLoc = loc - wOff; - ivec2 wLocIn = wLoc / strides; - if ( - wLocIn * strides == wLoc && - wLocIn.x >= 0 && wLocIn.x < ${N[2]} && - wLocIn.y >= 0 && wLocIn.y < ${N[3]} - ) { - float xVal = getX(batch, input_channel, wLocIn.y, wLocIn.x); - float wVal = getW(input_channel, wOutChannel, wHOff, wWOff); - value += xVal * wVal; - } - } - } - } - ${Z} - ${D.output} = vec4(value, .0, .0, .0); - } -`;return Object.assign(Object.assign({},O),{output:{dims:H,type:A[0].type,textureType:p.TextureType.unpacked},shaderSource:X,hasMain:!0})})(g,m,_,b)})})(r,i,d),i),t=(r,i)=>{const d=r.kernelShape.slice();if(r.kernelShape.length===0)for(let _=2;_<i[1].dims.length;++_)d.push(i[1].dims[_]);const g=r.pads.slice(),m=r.outputShape.slice();((_,v,w,S,A,O,x,I)=>{const N=_.length-2,B=I.length===0;for(let L=0;L<N;++L){const F=B?_[L+2]*O[L]:I[L],H=h(_[L+2],O[L],A[L],v[L],w[L],F);f(H,S,A,L,L+N),B&&I.push(O[L]*(_[L+2]-1)+x[L]+(v[L]-1)*w[L]+1-A[L]-A[L+N])}})(i[0].dims,d,r.dilations,r.autoPad,g,r.strides,r.outputPadding,m);const b=Object.assign({},r);return Object.assign(b,{kernelShape:d,pads:g,outputShape:m,cacheKey:r.cacheKey}),b};n.parseConvTransposeAttributes=r=>{const 
i=r.attributes,d=(0,s.parseInternalActivationAttributes)(i),g=i.getString("auto_pad","NOTSET"),m=i.getInts("dilations",[1,1]),b=i.getInt("group",1),_=i.getInts("kernel_shape",[]),v=i.getInts("output_padding",[0,0]),w=i.getInts("output_shape",[]),S=i.getInts("pads",[0,0,0,0]),A=i.getInts("strides",[1,1]);return(0,u.createAttributeWithCacheKey)(Object.assign({autoPad:g,dilations:m,group:b,kernelShape:_,outputPadding:v,outputShape:w,pads:S,strides:A},d))};const e=(r,i)=>{if(!r||r.length!==2&&r.length!==3)throw new Error("Conv requires 2 or 3 inputs");if(r[0].dims.length!==4||r[1].dims.length!==4)throw new Error("currently only support 2-dimensional conv");if(r[0].dims[1]!==r[1].dims[0])throw new Error("FILTER_IN_CHANNEL should be equal to DATA_CHANNEL");const d=r[1].dims[1]*i.group;if(r.length===3&&(r[2].dims.length!==1||r[2].dims[0]!==d))throw new Error("invalid bias");const g=r[0].dims.length-2;if(i.dilations.length!==g)throw new Error(`dilations should be ${g}D`);if(i.strides.length!==g)throw new Error(`strides should be ${g}D`);if(i.pads.length!==2*g)throw new Error(`pads should be ${2*g}D`);if(i.outputPadding.length!==g)throw new Error(`output_padding should be ${g}D`);if(i.kernelShape.length!==0&&i.kernelShape.length!==r[1].dims.length-2)throw new Error("invalid kernel shape");if(i.outputShape.length!==0&&i.outputShape.length!==r[0].dims.length-2)throw new Error("invalid output shape");if(r[0].type!=="float32"||r[1].type!=="float32")throw new Error("ConvTranspose input(X,W) should be float tensor");if(r.length===3&&r[2].type!=="float32")throw new Error("ConvTranspose input(bias) should be float tensor")}},8138:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseConvAttributes=n.conv=n.calculateOutputShape=void 0;const u=a(246),c=a(2517),p=a(4770),s=a(1386),h=a(9828),f=a(2823),l=a(3248),o=a(5623);n.calculateOutputShape=(g,m,b,_,v)=>{const w=g[0],S=g.slice(2),A=S.length,O=m[0],x=m.slice(2).map((N,B)=>N+(N-1)*(b[B]-1)),I=S.map((N,B)=>N+_[B]+_[B+A]).map((N,B)=>Math.floor((N-x[B]+v[B])/v[B]));return[w,O].concat(...I)},n.conv=(g,m,b)=>(d(m,b),t(g,m,b));const t=(g,m,b)=>{const _=i(b,m),v=g.session.pack,w=_.kernelShape[0]===1&&_.kernelShape[1]===1;return _.group>1?[g.run((0,p.createUnpackedGroupedConvProgramInfoLoader)(g,m,_),m)]:w&&v?[e(g,m,_)]:v&&m[0].dims.length===4&&m[0].dims[0]===1&&!w?[(0,s.conv2DPacked)(g,m,_)]:[r(g,m,_)]},e=(g,m,b)=>{const _=m[0].dims,v=m[1].dims,w=(0,n.calculateOutputShape)(_,v,b.dilations,b.pads,b.strides),S=g.reshapeUnpacked(m[0],[_[1],_[2]*_[3]]),A=g.reshapeUnpacked(m[1],[v[0],v[1]]),O=m.length>2?[A,S,m[2]]:[A,S],x=g.run((0,o.createMatmulProgramInfoLoader)(O,b),O);return g.reshapeUnpacked(x,w)},r=(g,m,b)=>{const _=m[0].dims,v=m[1].dims,w=(0,n.calculateOutputShape)(_,v,b.dilations,b.pads,b.strides),S=g.run((0,l.createIm2ColProgramInfoLoader)(g,m[0],m[1],w,b),[m[0]]),A=m.length===3?[S,m[1],m[2]]:[S,m[1]];return g.run((0,h.createDotProductProgramInfoLoader)(g,m,w,b),A)},i=(g,m)=>{const b=g.kernelShape.slice();if(g.kernelShape.length===0)for(let w=2;w<m[1].dims.length;++w)b.push(m[1].dims[w]);const _=g.pads.slice();c.PoolConvUtil.adjustPadsBasedOnAutoPad(m[0].dims,g.strides,g.dilations,b,_,g.autoPad);const v=Object.assign({},g);return Object.assign(v,{kernelShape:b,pads:_,cacheKey:g.cacheKey}),v};n.parseConvAttributes=g=>{const 
m=g.attributes,b=(0,f.parseInternalActivationAttributes)(m),_=m.getString("auto_pad","NOTSET"),v=m.getInts("dilations",[1,1]),w=m.getInt("group",1),S=m.getInts("kernel_shape",[]),A=m.getInts("pads",[0,0,0,0]),O=m.getInts("strides",[1,1]);return(0,u.createAttributeWithCacheKey)(Object.assign({autoPad:_,dilations:v,group:w,kernelShape:S,pads:A,strides:O},b))};const d=(g,m)=>{if(!g||g.length!==2&&g.length!==3)throw new Error("Conv requires 2 or 3 inputs");if(g[0].dims.length!==4||g[1].dims.length!==4)throw new Error("currently only support 2-dimensional conv");if(g[0].dims[1]!==g[1].dims[1]*m.group)throw new Error("FILTER_IN_CHANNEL should be equal to DATA_CHANNEL");if(g.length===3&&(g[2].dims.length!==1||g[1].dims[0]!==g[2].dims[0]))throw new Error("invalid bias");const b=g[0].dims.length-2;if(m.dilations.length!==b)throw new Error(`dilations should be ${b}D`);if(m.strides.length!==b)throw new Error(`strides should be ${b}D`);if(m.pads.length!==2*b)throw new Error(`pads should be ${2*b}D`);if(m.kernelShape.length!==0&&m.kernelShape.length!==g[1].dims.length-2)throw new Error("invalid kernel shape");if(g[0].type!=="float32"||g[1].type!=="float32")throw new Error("Conv input(X,W) should be float tensor");if(g.length===3&&g[2].type!=="float32")throw new Error("Conv input(bias) should be float tensor")}},5193:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseDepthToSpaceAttributes=n.depthToSpace=void 0;const u=a(3738);n.depthToSpace=(p,s,h)=>{c(s);const f=h.blocksize,l=f*f,o=h.mode==="DCR"?[0,3,4,1,5,2]:[0,1,4,2,5,3],t=h.mode==="DCR"?[s[0].dims[0],f,f,s[0].dims[1]/l,s[0].dims[2],s[0].dims[3]]:[s[0].dims[0],s[0].dims[1]/l,f,f,s[0].dims[2],s[0].dims[3]],e=p.reshapeUnpacked(s[0],t),r={perm:o,cacheKey:`${o}`},[i]=(0,u.transpose)(p,[e],r),d=[s[0].dims[0],s[0].dims[1]/l,s[0].dims[2]*f,s[0].dims[3]*f];return[p.reshapeUnpacked(i,d)]},n.parseDepthToSpaceAttributes=p=>{const s=p.attributes.getInt("blocksize");if(s<1)throw new Error(`blocksize must be >= 1, but got : ${s} for DepthToSpace`);const h=p.attributes.getString("mode","DCR");if(h!=="DCR"&&h!=="CRD")throw new Error(`unrecognized mode: ${h} for DepthToSpace`);return{mode:h,blocksize:s}};const c=p=>{if(p.length!==1)throw new Error(`DepthToSpace expect 1 inputs, but got ${p.length}`);if(p[0].type==="string"||p[0].dims.length!==4)throw new TypeError("DepthToSpace input should be a 4-D numeric tensor")}},9828:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createDotProductProgramInfoLoader=void 0;const u=a(2517),c=a(5060),p=a(2039),s=a(2823),h=a(3248);n.createDotProductProgramInfoLoader=(f,l,o,t)=>{const e=((r,i)=>({name:"ConvDotProduct",inputNames:r?["Im2Col","K","B"]:["Im2Col","K"],inputTypes:r?[p.TextureType.unpacked,p.TextureType.packedLastDimension,p.TextureType.unpacked]:[p.TextureType.unpacked,p.TextureType.packedLastDimension],cacheKey:i.activationCacheKey}))(l.length>2,t);return Object.assign(Object.assign({},e),{get:()=>((r,i,d,g,m)=>{const b=d[0].dims,_=d[1].dims,v=[_[0],Math.ceil(b[1]*_[2]*_[3]/4)],w=(0,h.calculateIm2ColDims)(b,_,g),[S,A]=r.calculateTextureWidthAndHeight(v,p.TextureType.packedLastDimension),O=u.ShapeUtil.computeStrides(w),[x,I]=r.calculateTextureWidthAndHeight(w,p.TextureType.packedLastDimension),N=g.length,B=d.length<3?"0.0":"_B(b)",L=Math.ceil(b[1]*_[2]*_[3]/4),{activationFunction:F,applyActivation:H}=(0,s.getActivationSnippet)(m),D=(0,c.getGlsl)(r.session.backend.glContext.version),j=` -${F} -float process(int indices[${N}]) { - int b[1]; - b[0] = indices[1]; - int im2col[4]; - im2col[0] = 
indices[0]; - im2col[1] = indices[2]; - im2col[2] = indices[3]; - int im2colOffset = im2col[0] * ${O[0]} + im2col[1] * ${O[1]} + im2col[2] * ${O[2]}; - int kernelOffset = indices[1] * ${v[1]}; - float value = ${B}; - for (int i = 0; i < ${L}; ++i) { - vec2 im2colCoords = offsetToCoords(im2colOffset, ${x}, ${I}); - vec2 kernelCoords = offsetToCoords(kernelOffset, ${S}, ${A}); - value += dot(${D.texture2D}(Im2Col, im2colCoords), ${D.texture2D}(K, kernelCoords)); - ++im2colOffset; - ++kernelOffset; - } - ${H} - return value; -}`;return Object.assign(Object.assign({},i),{output:{dims:g,type:d[0].type,textureType:p.TextureType.unpacked},shaderSource:j})})(f,e,l,o,t)})}},7992:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseFlattenAttributes=n.flatten=void 0;const u=a(2517);n.flatten=(p,s,h)=>{c(s,h);const f=u.ShapeUtil.flattenShape(s[0].dims,h);return[p.reshapeUnpacked(s[0],f)]},n.parseFlattenAttributes=p=>p.attributes.getInt("axis",1);const c=(p,s)=>{if(!p||p.length!==1)throw new Error("Flatten requires 1 input.");const h=p[0].dims.length;if(h===0)throw new Error("scalar tensor is not supported.");if(s<-h||s>h)throw new Error("Invalid axis");if(p[0].type==="string")throw new Error("string tensor is not supported.")}},2823:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseInternalActivationAttributes=n.getActivationSnippet=void 0;const u=a(2517),c=a(4909);n.getActivationSnippet=function(p){let s;switch(p.activation){case"Relu":s=(0,c.glslRelu)();break;case"Sigmoid":s=(0,c.glslSigmoid)();break;case"Clip":s=(0,c.glslClip)(p.clipMin,p.clipMax);break;default:return{activationFunction:"",applyActivation:""}}const h=s.name;return{activationFunction:s.body,applyActivation:`value = ${h}_(value);`}},n.parseInternalActivationAttributes=p=>{const s=p.getString("activation","");if(s==="Clip"){const[h,f]=p.getFloats("activation_params",[u.MIN_CLIP,u.MAX_CLIP]);return{activation:s,clipMax:f,clipMin:h,activationCacheKey:`${s}:${h},${f}`}}return{activation:s,activationCacheKey:s}}},1253:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseGatherAttributes=n.gather=void 0;const u=a(246),c=a(782),p=a(2517),s=a(2039);n.gather=(o,t,e)=>(l(t,e.axis),[o.run(f(o,t,e),t)]),n.parseGatherAttributes=o=>(0,u.createAttributeWithCacheKey)({axis:o.attributes.getInt("axis",0)});const h={name:"Gather",inputNames:["A","B"],inputTypes:[s.TextureType.unpacked,s.TextureType.unpacked]},f=(o,t,e)=>{const r=Object.assign(Object.assign({},h),{cacheHint:e.cacheKey});return Object.assign(Object.assign({},r),{get:()=>((i,d,g,m)=>{const b=g[0].dims.slice(),_=g[1].dims.slice(),v=new Array(b.length+_.length-1);m=p.ShapeUtil.normalizeAxis(m,b.length);const w=[];for(let A=0;A<v.length;A++)A<m?(v[A]=b[A],w.push(`inputIdx[${A}] = outputIdx[${A}];`)):A<m+_.length?(v[A]=_[A-m],w.push(`indexDataIdx[${A-m}] = outputIdx[${A}];`)):(v[A]=b[A-_.length+1],w.push(`inputIdx[${A-_.length+1}] = outputIdx[${A}];`));const S=` - float process(int outputIdx[${v.length||1}]) { - int inputIdx[${b.length}]; - int indexDataIdx[${_.length||1}]; - indexDataIdx[0] = 0; - ${w.join(` - `)} - int idx = int(_B(indexDataIdx)); - inputIdx[${m}] = idx < 0 ? 
idx + ${b[m]} : idx; - return _A(inputIdx); - }`;return Object.assign(Object.assign({},d),{output:{dims:v,type:g[0].type,textureType:s.TextureType.unpacked},shaderSource:S})})(0,r,t,e.axis)})},l=(o,t)=>{if(!o||o.length!==2)throw new Error("Gather requires 2 inputs.");const e=o[0].dims.length;if(e<1)throw new Error("Invalid input shape.");if(t<-e||t>e-1)throw new Error("Invalid axis.");if(c.NUMBER_TYPES.indexOf(o[0].type)===-1)throw new Error("Invaid input type.");if(o[1].type!=="int32"&&o[1].type!=="int16")throw new Error("Invaid input type.")}},4776:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseGemmAttributesV11=n.parseGemmAttributesV7=n.gemm=void 0;const u=a(246),c=a(2517),p=a(2039);n.gemm=(o,t,e)=>(l(t,e),[o.run(h(t,e),t)]);const s=(o,t)=>{const e=o.attributes.getInt("transA",0)!==0,r=o.attributes.getInt("transB",0)!==0,i=o.attributes.getFloat("alpha",1),d=o.attributes.getFloat("beta",1);return(0,u.createAttributeWithCacheKey)({transA:e,transB:r,alpha:i,beta:d,isOptionalC:t})};n.parseGemmAttributesV7=o=>s(o,!1),n.parseGemmAttributesV11=o=>s(o,!0);const h=(o,t)=>{const e={name:"Gemm",inputNames:o.length===3?["A","B","C"]:["A","B"],inputTypes:o.length===3?[p.TextureType.unpacked,p.TextureType.unpacked,p.TextureType.unpacked]:[p.TextureType.unpacked,p.TextureType.unpacked],key:t.cacheKey};return Object.assign(Object.assign({},e),{get:()=>f(e,o,t)})},f=(o,t,e)=>{const r=t[0].dims.slice(),i=t[1].dims.slice(),[d,g]=c.GemmUtil.getShapeOfGemmResult(r,e.transA,i,e.transB,t.length===3?t[2].dims:void 0),m=[d,g];if(!m)throw new Error("Can't use gemm on the given tensors");let b=r[r.length-1],_="";e.transA&&(b=r[0]),e.transA&&e.transB?_="value += _A_T(a) * _B_T(b);":e.transA&&!e.transB?_="value += _A_T(a) * _B(b);":!e.transA&&e.transB?_="value += _A(a) * _B_T(b);":e.transA||e.transB||(_="value += _A(a) * _B(b);");const v=m.length,w=` - float process(int indices[${v}]) { - int a[${v}]; - int b[${v}]; - ${t.length===3?`int c[${t[2].dims.length}];`:""} - - copyVec(indices, a); - copyVec(indices, b); - ${t.length===3?"bcastIndices_C(indices, c);":""} - - float value = 0.0; - for (int k=0; k<${b}; ++k) { - a[${v-1}] = k; - b[${v-2}] = k; - ${_} - } - - value = value * alpha; - ${t.length===3?"value += beta * _C(c);":""} - return value; - }`;return Object.assign(Object.assign({},o),{output:{dims:m,type:t[0].type,textureType:p.TextureType.unpacked},variables:[{name:"alpha",type:"float",data:e.alpha},{name:"beta",type:"float",data:e.beta}],shaderSource:w})},l=(o,t)=>{if(!o)throw new Error("Input is missing");if(t.isOptionalC&&(o.length<2||o.length>3))throw new Error("Invaid input shape.");if(!t.isOptionalC&&o.length!==3)throw new Error("Gemm requires 3 inputs");if(o.length===3&&o[2].dims.length!==1&&o[2].dims.length!==2)throw new Error("Invalid input shape of C");if(o[0].type!=="float32"&&o[0].type!=="float64"||o[1].type!=="float32"&&o[1].type!=="float64"||o.length===3&&o[2].type!=="float32"&&o[2].type!=="float64")throw new Error("Invalid input type.");if(o[0].type!==o[1].type||o.length===3&&o[0].type!==o[2].type)throw new Error("Input types are mismatched")}},8555:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createPackedIm2ColProgramInfoLoader=void 0;const u=a(5060),c=a(2039),p=a(2827);n.createPackedIm2ColProgramInfoLoader=(s,h,f,l,o)=>{const t=(e=o.cacheKey,{name:"Im2Col (packed)",inputNames:["A"],inputTypes:[c.TextureType.packed],cacheHint:e});var e;return Object.assign(Object.assign({},t),{get:()=>((r,i,d,g,m,b)=>{const 
_=d.dims,v=g.dims,w=m.length,S=[v[1]*v[2]*v[3],m[2]*m[3]],A=v[2]*v[3],O=(0,p.unpackFromChannel)(),x=(0,u.getGlsl)(r.session.backend.glContext.version);let I="";for(let B=0;B<=1;B++)for(let L=0;L<=1;L++)I+=` - blockIndex = rc.x + ${L}; - pos = rc.y + ${B}; - - if(blockIndex < ${S[1]} && pos < ${S[0]}) { - offsetY = int(blockIndex / (${m[w-1]})) * ${b.strides[0]} - - ${b.pads[0]}; - d0 = offsetY + ${b.dilations[0]} * (imod(pos, ${A}) / ${v[2]}); - - if(d0 < ${_[2]} && d0 >= 0) { - offsetX = imod(blockIndex, ${m[w-1]}) * ${b.strides[1]} - - ${b.pads[1]}; - d1 = offsetX + ${b.dilations[1]} * imod(imod(pos, ${A}), ${v[2]}); - - if(d1 < ${_[3]} && d1 >= 0) { - - ch = int(float(pos)/ ${A}.); - innerDims = vec2(d0, d1); - result[${2*B+L}] = getChannel( - getA(0, ch, int(innerDims.x), - int(innerDims.y)), innerDims); - } - } - } - - `;const N=` - ${O} - - void main() { - ivec2 rc = getOutputCoords(); - vec4 result = vec4(0.0); - int blockIndex, pos, offsetY, d0, offsetX, d1, ch; - vec2 innerDims; - ${I} - ${x.output} = result; - } - `;return Object.assign(Object.assign({},i),{output:{dims:S,type:d.type,textureType:c.TextureType.packed},shaderSource:N,hasMain:!0})})(s,t,h,f,l,o)})}},3248:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.calculateIm2ColDims=n.createIm2ColProgramInfoLoader=void 0;const u=a(2039);n.createIm2ColProgramInfoLoader=(c,p,s,h,f)=>{const l=(o=f.cacheKey,{name:"Im2Col",inputNames:["X"],inputTypes:[u.TextureType.unpacked],cacheHint:o});var o;return Object.assign(Object.assign({},l),{get:()=>((t,e,r,i,d,g)=>{const m=r.dims,b=i.dims,_=d.length,v=(0,n.calculateIm2ColDims)(m,b,d,4),w=` - const int XC = ${m[1]}; - const int XH = ${m[2]}; - const int XW = ${m[3]}; - const int KH = ${g.kernelShape[0]}; - const int KW = ${g.kernelShape[1]}; - const int dilationH = ${g.dilations[0]}; - const int dilationW = ${g.dilations[1]}; - const int strideH = ${g.strides[0]}; - const int strideW = ${g.strides[1]}; - const int padH = ${g.pads[0]}; - const int padW = ${g.pads[1]}; - const int KHKW = KH*KW; - const int XCKHKW = XC * KHKW; - const int outputChannels = 4; - vec4 process(int indices[${_}]) { - int b = indices[0]; // batch size - int oh = indices[1] * strideH - padH; //output height - int ow = indices[2] * strideW - padW; //output width - int p = indices[3] * outputChannels; //patch - vec4 value = vec4(0.0); - for(int i=0; i < outputChannels; ++i) { - if(p < XCKHKW) { - int patchC = p / KHKW; - int patchH = (p - patchC*KHKW) / KW; - int patchW = (p - patchC*KHKW) - patchH * KW; - int xh2 = oh + patchH * dilationH; - int xw2 = ow + patchW * dilationW; - int x[${m.length}]; - x[0] = b; - x[1] = patchC; - x[2] = xh2; - x[3] = xw2; - if(xh2 >= 0 && - xh2 < XH && - xw2 >= 0 && - xw2 < XW) { - value[i] = _X(x); - } - } - ++p; - } - return value; - } - `;return Object.assign(Object.assign({},e),{output:{dims:v,type:r.type,textureType:u.TextureType.packedLastDimension},shaderSource:w})})(0,l,p,s,h,f)})},n.calculateIm2ColDims=(c,p,s,h=4)=>[s[0],s[2],s[3],Math.ceil(c[1]*p[2]*p[3]/h)]},6572:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseImageScalerAttributes=n.imageScaler=void 0;const u=a(246),c=a(2039);n.imageScaler=(l,o,t)=>(f(o),[l.run(s(l,o,t),o)]),n.parseImageScalerAttributes=l=>{const o=l.attributes.getFloat("scale"),t=l.attributes.getFloats("bias");return(0,u.createAttributeWithCacheKey)({scale:o,bias:t})};const p={name:"ImageScaler",inputNames:["X"],inputTypes:[c.TextureType.unpacked]},s=(l,o,t)=>{const 
e=Object.assign(Object.assign({},p),{cacheHint:t.cacheKey});return Object.assign(Object.assign({},e),{get:()=>((r,i,d,g)=>{const m=d[0].dims.slice(),b=m.length,_=` - ${h(g.bias.length)} - float process(int indices[${b}]) { - return _X(indices) * scale + getBias(bias, indices[1]); - }`;return Object.assign(Object.assign({},i),{output:{dims:m,type:d[0].type,textureType:c.TextureType.unpacked},variables:[{name:"bias",type:"float",arrayLength:g.bias.length,data:g.bias},{name:"scale",type:"float",data:g.scale}],shaderSource:_})})(0,e,o,t)})},h=l=>{const o=[`float getBias(float bias[${l}], int channel) {`];for(let t=0;t<l;++t)t===0?o.push(` if (channel == ${t}) { return bias[${t}]; }`):t===l-1?o.push(` else { return bias[${t}]; }`):o.push(` else if (channel == ${t}) { return bias[${t}]; }`);return o.push(" }"),o.join(` -`)},f=l=>{if(!l||l.length!==1)throw new Error("ImageScaler requires 1 input.");if(l[0].dims.length!==4)throw new Error("Invalid input shape.");if(l[0].type!=="float32"&&l[0].type!=="float64")throw new Error("Invalid input type.")}},3346:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseInstanceNormalizationAttributes=n.instanceNormalization=void 0;const u=a(5060),c=a(2039);n.instanceNormalization=(o,t,e)=>{l(t);const r=o.run(s(t[0]),t);return[o.run(f(o,t[0],e,r.dims),[t[0],r,t[1],t[2]])]},n.parseInstanceNormalizationAttributes=o=>o.attributes.getFloat("epsilon",1e-5);const p={name:"InstanceNormalization_MeanAndVariance",inputNames:["X"],inputTypes:[c.TextureType.unpacked]},s=o=>Object.assign(Object.assign({},p),{get:()=>((t,e)=>{const r=e.dims.slice(),i=r[1],d=r[2]*r[3],g=[r[0],i],m=` - vec4 process(int[2] indices) { - vec4 v = vec4(0.0); - int a[4]; - a[0] = indices[0]; - a[1] = indices[1]; - float temp = 0.0; - for(int a2=0; a2<${r[2]}; a2++) { - a[2] = a2; - for(int a3=0; a3<${r[3]}; a3++) { - a[3] = a3; - float x = _X(a); - temp += x; - } - } - float mean = temp / float(${d}); - temp = 0.0; - for(int a2=0; a2<${r[2]}; a2++) { - a[2] = a2; - for(int a3=0; a3<${r[3]}; a3++) { - a[3] = a3; - float x = _X(a); - temp += (x - mean) * (x - mean); - } - } - v.r = mean; - v.g = temp / float(${d}); - - return v; - }`;return Object.assign(Object.assign({},t),{output:{dims:g,type:e.type,textureType:c.TextureType.packedLastDimension},shaderSource:m})})(p,o)}),h={name:"InstanceNormalization_ComputeOutput",inputNames:["X","MeanAndVariance","Scale","B"],inputTypes:[c.TextureType.unpacked,c.TextureType.packedLastDimension,c.TextureType.unpacked,c.TextureType.unpacked]},f=(o,t,e,r)=>{const i=Object.assign(Object.assign({},h),{cacheHint:`${e}`});return Object.assign(Object.assign({},i),{get:()=>((d,g,m,b,_)=>{const v=(0,u.getGlsl)(d.session.backend.glContext.version),[w,S]=d.calculateTextureWidthAndHeight(_,c.TextureType.packedLastDimension),[A,O]=[w/4,S],x=` - vec4 get_MeanAndVariance(int[2] mv) { - int offset = indicesToOffset_MeanAndVariance(mv); - vec2 coords = offsetToCoords(offset, ${A}, ${O}); - return ${v.texture2D}(MeanAndVariance, coords); - } - - float process(int[4] indices) { - int mv[2]; - mv[0] = indices[0]; - mv[1] = indices[1]; - vec4 mean_and_variance = get_MeanAndVariance(mv); - float mean = mean_and_variance.r; - float variance = mean_and_variance.g; - - int sb[1]; - sb[0] = indices[1]; - float scale = _Scale(sb); - float b = _B(sb); - - return scale * (_X(indices) - mean) / sqrt(variance + epsilon) + b; - }`;return 
Object.assign(Object.assign({},g),{output:{dims:m.dims,type:m.type,textureType:c.TextureType.unpacked},variables:[{name:"epsilon",type:"float",data:b}],shaderSource:x})})(o,i,t,e,r)})},l=o=>{if(!o||o.length!==3)throw new Error("InstanceNormalization requires 3 inputs.");const t=o[0],e=o[1],r=o[2];if(t.dims.length<3||e.dims.length!==1||r.dims.length!==1)throw new Error("Invalid input shape.");if(e.dims[0]!==t.dims[1]||r.dims[0]!==t.dims[1])throw new Error("Input shapes are mismatched.");if(t.type!=="float32"&&t.type!=="float64"||e.type!=="float32"&&e.type!=="float64"||r.type!=="float32"&&r.type!=="float64")throw new Error("Invalid input type.");if(o[0].dims.length!==4)throw new Error("Only support 4-D input shape.")}},708:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createPackedMatmulProgramInfoLoader=void 0;const u=a(2517),c=a(5060),p=a(2039),s=a(9390),h=a(2823),f=a(5623);n.createPackedMatmulProgramInfoLoader=(l,o,t)=>{const e=(r=o.length>2,i=t.activationCacheKey,{name:"MatMul (packed)",inputNames:r?["A","B","Bias"]:["A","B"],inputTypes:r?[p.TextureType.packed,p.TextureType.packed,p.TextureType.packed]:[p.TextureType.packed,p.TextureType.packed],cacheHint:i});var r,i;return Object.assign(Object.assign({},e),{get:()=>((d,g,m,b)=>{const _=m.length>2,v=_?"value += getBiasForMatmul();":"",w=m[0].dims,S=m[1].dims,A=u.BroadcastUtil.calcShape(w,S,!0),O=!u.ShapeUtil.areEqual(m[0].dims,m[1].dims);if(!A)throw new Error("Can't use matmul on the given tensors");const x=w[w.length-1],I=Math.ceil(x/2),N=w.length,B=S.length,L=(0,c.getGlsl)(d.session.backend.glContext.version),F=(0,s.getCoordsDataType)(A.length),H=A.length,D=(0,s.getGlChannels)(),{activationFunction:j,applyActivation:Z}=(0,h.getActivationSnippet)(b),X=_?`${(0,f.getBiasForMatmul)(F,D,m[2].dims,A,!0)}`:"",J=O?`${function(ve,oe,_e,be){let ke=[],Fe=[];const xe=_e[0].dims,Ne=_e[1].dims,Ce=xe.length,Ee=Ne.length,Oe=be.length,Be=Oe-Ce,Ge=Oe-Ee;ke=xe.map((Ie,je)=>`coords.${oe[je+Be]}`),ke[Ce-1]="i*2",ke.join(", "),Fe=Ne.map((Ie,je)=>`coords.${oe[je+Ge]}`),Fe[Ee-2]="i*2",Fe.join(", ");const Ve=u.BroadcastUtil.getBroadcastDims(xe,be),Xe=u.BroadcastUtil.getBroadcastDims(Ne,be),Ze=Ve.map(Ie=>`coords.${oe[Ie+Be]} = 0;`).join(` -`),qe=Xe.map(Ie=>`coords.${oe[Ie+Ge]} = 0;`).join(` -`),Ue=`int lastDim = coords.${oe[Oe-1]}; - coords.${oe[Oe-1]} = coords.${oe[Oe-2]}; - coords.${oe[Oe-2]} = lastDim;`;return` -vec4 getAAtOutCoordsMatmul(int i) { - ${ve} coords = getOutputCoords(); - ${Ue} - ${Ze} - vec4 outputValue = getA(${ke}); - return outputValue; -} - -vec4 getBAtOutCoordsMatmul(int i) { - ${ve} coords = getOutputCoords(); - ${Ue} - ${qe} - vec4 outputValue = getB(${Fe}); - return outputValue; -}`}(F,D,m,A)}`:"",ee=O?"getAAtOutCoordsMatmul(i)":`getA(${function(ve,oe){let _e="";for(let be=0;be<oe-2;be++)_e+=`rc.${ve[be]}, `;return _e+=`rc.${ve[oe-2]}, i*2`,_e}(D,N)})`,ue=O?"getBAtOutCoordsMatmul(i)":`getB(${function(ve,oe){let _e="";for(let be=0;be<oe-2;be++)_e+=`rc.${ve[be]}, `;return _e+=`i*2, rc.${ve[oe-1]}`,_e}(D,B)})`,Ae=` - ${J} - ${X} - ${j} - void main() { - ${O?"":`${F} rc = - getOutputCoords(); int lastDim = rc.${D[H-1]}; rc.${D[H-1]} = - rc.${D[H-2]}; rc.${D[H-2]} = lastDim; - `} - - vec4 value = vec4(0); - for (int i = 0; i < ${I}; i++) { - vec4 a = ${ee}; - vec4 b = ${ue}; - - value += (a.rrbb * b.rgrg); - value += (a.ggaa * b.baba); - } - ${v} - ${Z} - ${L.output} = value; - }`;return 
Object.assign(Object.assign({},g),{output:{dims:A,type:m[0].type,textureType:p.TextureType.packed},shaderSource:Ae,hasMain:!0})})(l,e,o,t)})}},5623:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.getBiasForMatmul=n.createMatmulProgramInfoLoader=n.parseMatMulAttributes=n.matMul=void 0;const u=a(2517),c=a(2039),p=a(9390),s=a(2823),h=a(708);function f(t,e){const r=(i=t.length>2,d=e.activationCacheKey,{name:"MatMul",inputNames:i?["A","B","Bias"]:["A","B"],inputTypes:i?[c.TextureType.unpacked,c.TextureType.unpacked,c.TextureType.unpacked]:[c.TextureType.unpacked,c.TextureType.unpacked],cacheHint:d});var i,d;return Object.assign(Object.assign({},r),{get:()=>function(g,m,b){const _=m[0].dims,v=m[1].dims,w=u.BroadcastUtil.calcShape(_,v,!0);if(!w)throw new Error("Can't use matmul on the given tensors");const S=(0,p.getCoordsDataType)(w.length),A=(0,p.getGlChannels)(),{activationFunction:O,applyActivation:x}=(0,s.getActivationSnippet)(b),I=m.length>2,N=I?"value += getBiasForMatmul();":"",B=I?`${o(S,A,m[2].dims,w,!1)}`:"",L=w.length,F=_.length,H=v.length,D=` - ${O} - ${B} - float process(int indices[${L}]) { - int a[${F}]; - int b[${H}]; - bcastMatmulIndices_A(indices, a); - bcastMatmulIndices_B(indices, b); - - float value; - for (int k=0; k<${_[_.length-1]}; ++k) { - a[${F-1}] = k; - b[${H-2}] = k; - value += _A(a) * _B(b); - } - ${N} - ${x} - return value; - }`;return Object.assign(Object.assign({},g),{output:{dims:w,type:m[0].type,textureType:c.TextureType.unpacked},shaderSource:D})}(r,t,e)})}n.matMul=(t,e,r)=>(l(e),t.session.pack?[t.run((0,h.createPackedMatmulProgramInfoLoader)(t,e,r),e)]:[t.run(f(e,r),e)]),n.parseMatMulAttributes=t=>(0,s.parseInternalActivationAttributes)(t.attributes),n.createMatmulProgramInfoLoader=f;const l=t=>{if(!t||t.length!==2)throw new Error("MatMul requires 2 inputs.");if(t[0].dims[t[0].dims.length-1]!==t[1].dims[t[1].dims.length-2])throw new Error("shared dimension does not match.");if(t[0].type!=="float32"&&t[0].type!=="float64"||t[1].type!=="float32"&&t[1].type!=="float64")throw new Error("inputs should be float type");if(t[0].type!==t[1].type)throw new Error("inputs types should match")};function o(t,e,r,i,d){let g="";const m=r.length,b=i.length,_=b-m;g=b<2&&m>0?"coords":r.map((S,A)=>`coords.${e[A+_]}`).join(", ");const v=u.BroadcastUtil.getBroadcastDims(r,i).map(S=>`coords.${e[S+_]} = 0;`).join(` -`);let w="vec4(outputValue.xx, outputValue.yy)";return u.ShapeUtil.size(r)===1&&(w="vec4(outputValue.x)"),d?` -vec4 getBiasForMatmul() { - ${t} coords = getOutputCoords(); - ${v} - vec4 outputValue = getBias(${g}); - return ${w}; -}`:` -float getBiasForMatmul() { - ${t} coords = getOutputCoords(); - ${v} - return getBias(coords.x); -}`}n.getBiasForMatmul=o},2403:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createPackProgramInfoLoader=void 0;const u=a(5060),c=a(2039),p=a(9390),s=a(2827),h={name:"pack",inputNames:["A"],inputTypes:[c.TextureType.unpackedReversed]};n.createPackProgramInfoLoader=(f,l)=>Object.assign(Object.assign({},h),{get:()=>((o,t)=>{const e=(0,u.getGlsl)(o.session.backend.glContext.version),r=t.dims,i=r.length,d=t.dims.length,g=(0,p.getCoordsDataType)(d),m=(0,s.getChannels)("rc",d),b=(_=d,v=m,w=r[r.length-2],S=r[r.length-1],_===0||_===1?"":` - int r = ${v[_-2]}; - int c = ${v[_-1]}; - int rp1 = ${v[_-2]} + 1; - int cp1 = ${v[_-1]} + 1; - bool rEdge = rp1 >= ${S}; - bool cEdge = cp1 >= ${w}; - `);var _,v,w,S;let A;A=i===0?[1,1]:i===1?[r[0],1]:[r[d-1],r[d-2]];const O=function(N,B,L){if(N===0)return"false";if(N===1)return`rc > 
${B[0]}`;let F="";for(let H=N-2;H<N;H++)F+=`${L[H]} >= ${B[H-N+2]}`,H<N-1&&(F+="||");return F}(d,A,m),x=function(N,B){const L=N.length;if(L===0)return"getA(), 0, 0, 0";if(L===1)return`getA(rc), - rc + 1 >= ${N[0]} ? 0. : getA(rc + 1), - 0, 0`;let F="";if(L>2)for(let H=0;H<L-2;++H)F+=`${B[H]},`;return`getA(${F}r, c), - rEdge ? 0. : getA(${F}rp1, c), - cEdge ? 0. : getA(${F}r, cp1), - rEdge || cEdge ? 0. : getA(${F}rp1, cp1)`}(r,m),I=` - void main() { - ${g} rc = getOutputCoords(); - - if(${O}) { - ${e.output} = vec4(0); - } else { - ${b} - - ${e.output} = vec4(${x}); - } - } - `;return Object.assign(Object.assign({},h),{hasMain:!0,output:{dims:t.dims,type:t.type,textureType:c.TextureType.packed},shaderSource:I})})(f,l)})},2827:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.unpackFromChannel=n.getChannels=n.getVecChannels=void 0;const u=a(9390);function c(p,s){return(0,u.getGlChannels)(s).map(h=>`${p}.${h}`)}n.getVecChannels=c,n.getChannels=function(p,s){return s===1?[p]:c(p,s)},n.unpackFromChannel=function(){return` - float getChannel(vec4 frag, int dim) { - int modCoord = imod(dim, 2); - return modCoord == 0 ? frag.r : frag.g; - } - - float getChannel(vec4 frag, vec2 innerDims) { - vec2 modCoord = mod(innerDims, 2.); - return modCoord.x == 0. ? - (modCoord.y == 0. ? frag.r : frag.g) : - (modCoord.y == 0. ? frag.b : frag.a); - } - `}},2870:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parsePadAttributesV11=n.padV11=n.parsePadAttributesV2=n.padV2=void 0;const u=a(246),c=a(2517),p=a(5060),s=a(2039),h={name:"Pad",inputNames:["A"],inputTypes:[s.TextureType.unpacked]};n.padV2=(g,m,b)=>(o(m),[g.run(Object.assign(Object.assign({},h),{cacheHint:b.cacheKey,get:()=>l(g,m[0],b)}),m)]),n.parsePadAttributesV2=g=>{const m=g.attributes.getString("mode","constant"),b=g.attributes.getFloat("value",0),_=g.attributes.getInts("pads");return(0,u.createAttributeWithCacheKey)({mode:m,value:b,pads:_})},n.padV11=(g,m,b)=>{t(m);const _=f(g,m,b);return(0,n.padV2)(g,[m[0]],_)},n.parsePadAttributesV11=g=>g.attributes.getString("mode","constant");const f=(g,m,b)=>{if(!g.session.isInitializer(m[1].dataId)||m.length>=3&&!g.session.isInitializer(m[2].dataId))throw new Error("dynamic pad attributes are not allowed");const _=Array.from(m[1].integerData),v=m.length>=3?m[2].floatData[0]:0;return(0,u.createAttributeWithCacheKey)({mode:b,pads:_,value:v})},l=(g,m,b)=>{const _=c.ShapeUtil.padShape(m.dims.slice(),b.pads),v=_.length,w=` - ${e(g,m,b)} - float process(int[${v}] indices) { - return padA(indices); - }`;return{name:"Pad",inputNames:["A"],inputTypes:[s.TextureType.unpacked],output:{dims:_,type:m.type,textureType:s.TextureType.unpacked},shaderSource:w}},o=g=>{if(!g||g.length!==1)throw new Error("Pad requires 1 input");if(g[0].type!=="float32"&&g[0].type!=="float64")throw new Error("Invalid input type.")},t=g=>{if(!g||g.length!==2&&g.length!==3)throw new Error("Pad requires 2 or 3 inputs");if(g[1].type!=="int32")throw new Error("Invalid input type.");if(g.length>=3&&g[2].type==="string")throw new Error("Invalid input type.")},e=(g,m,b)=>{const _=(0,p.getGlsl)(g.session.backend.glContext.version),[v,w]=g.calculateTextureWidthAndHeight(m.dims,s.TextureType.unpacked),S=c.ShapeUtil.computeStrides(m.dims);switch(b.mode){case"constant":return r(_,m.dims,S,v,w,b.pads,b.value);case"reflect":return i(_,m.dims,S,v,w,b.pads);case"edge":return d(_,m.dims,S,v,w,b.pads);default:throw new Error("Invalid mode")}},r=(g,m,b,_,v,w,S)=>{const A=m.length;let O="";for(let x=A-1;x>=0;--x)O+=` - k = m[${x}] - ${w[x]}; 
- if (k < 0) return constant; - if (k >= ${m[x]}) return constant; - offset += k * ${b[x]}; - `;return` - float padA(int m[${A}]) { - const float constant = float(${S}); - int offset = 0; - int k = 0; - ${O} - vec2 coords = offsetToCoords(offset, ${_}, ${v}); - float value = getColorAsFloat(${g.texture2D}(A, coords)); - return value; - } - `},i=(g,m,b,_,v,w)=>{const S=m.length;let A="";for(let O=S-1;O>=0;--O)A+=` - k = m[${O}] - ${w[O]}; - if (k < 0) { k = -k; } - { - const int _2n_1 = ${2*(m[O]-1)}; - k = int( mod( float(k), float(_2n_1) ) ) ; - if(k >= ${m[O]}) { k = _2n_1 - k; } - } - offset += k * ${b[O]}; - `;return` - float padA(int m[${S}]) { - int offset = 0; - int k = 0; - ${A} - vec2 coords = offsetToCoords(offset, ${_}, ${v}); - float value = getColorAsFloat(${g.texture2D}(A, coords)); - return value; - } - `},d=(g,m,b,_,v,w)=>{const S=m.length;let A="";for(let O=S-1;O>=0;--O)A+=` - k = m[${O}] - ${w[O]}; - if (k < 0) k = 0; - if (k >= ${m[O]}) k = ${m[O]-1}; - offset += k * ${b[O]}; - `;return` - float padA(int m[${S}]) { - int offset = 0; - int k = 0; - ${A} - vec2 coords = offsetToCoords(offset, ${_}, ${v}); - float value = getColorAsFloat(${g.texture2D}(A, coords)); - return value; - } - `}},2143:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.globalMaxPool=n.parseMaxPoolAttributes=n.maxPool=n.parseGlobalAveragePoolAttributes=n.globalAveragePool=n.parseAveragePoolAttributes=n.averagePool=void 0;const u=a(246),c=a(2517),p=a(2039);n.averagePool=(d,g,m)=>{t(g);const b={name:"AveragePool",inputNames:["X"],inputTypes:[p.TextureType.unpacked],cacheHint:m.cacheKey};return[d.run(Object.assign(Object.assign({},b),{get:()=>s(g,b,!1,m)}),g)]},n.parseAveragePoolAttributes=d=>{const g=d.attributes.getString("auto_pad","NOTSET"),m=d.attributes.getInt("ceil_mode",0),b=d.attributes.getInt("count_include_pad",0)!==0,_=d.attributes.getInts("kernel_shape"),v=d.attributes.getInts("strides",[]),w=d.attributes.getInts("pads",[]);if(m!==0)throw new Error("using ceil() in shape computation is not yet supported for AveragePool");return(0,u.createAttributeWithCacheKey)({autoPad:g,ceilMode:m,countIncludePad:b,kernelShape:_,strides:v,pads:w})};const s=(d,g,m,b)=>{const[_,v]=f(d,b,m),w=c.ShapeUtil.size(_.kernelShape);let S="";_.countIncludePad?S+=`value /= float(${w});`:S+=`value /= float(${w} - pad);`;const A=` - ${e(d[0].dims,_,"value += _X(x);",S,"0.0")} - `;return Object.assign(Object.assign({},g),{output:{dims:v,type:d[0].type,textureType:p.TextureType.unpacked},shaderSource:A})};n.globalAveragePool=(d,g,m)=>{t(g);const b={name:"GlobalAveragePool",inputNames:["X"],inputTypes:[p.TextureType.unpacked],cacheHint:`${m.countIncludePad}`};return[d.run(Object.assign(Object.assign({},b),{get:()=>s(g,b,!0,m)}),g)]},n.parseGlobalAveragePoolAttributes=d=>{const g=d.attributes.getInt("count_include_pad",0)!==0;return(0,u.createAttributeWithCacheKey)({autoPad:"",ceilMode:0,countIncludePad:g,kernelShape:[],strides:[],pads:[]})},n.maxPool=(d,g,m)=>{t(g);const b={name:"MaxPool",inputNames:["X"],inputTypes:[p.TextureType.unpacked],cacheHint:m.cacheKey};return[d.run(Object.assign(Object.assign({},b),{get:()=>h(g,b,!1,m)}),g)]},n.parseMaxPoolAttributes=d=>{const g=d.attributes.getString("auto_pad","NOTSET"),m=d.attributes.getInt("ceil_mode",0),b=d.attributes.getInts("kernel_shape"),_=d.attributes.getInts("strides",[]),v=d.attributes.getInts("pads",[]),w=d.attributes.getInt("storage_order",0),S=d.attributes.getInts("dilations",[]);if(w!==0)throw new Error("column major storage order is not yet supported 
for MaxPool");if(m!==0)throw new Error("using ceil() in shape computation is not yet supported for MaxPool");return(0,u.createAttributeWithCacheKey)({autoPad:g,ceilMode:m,countIncludePad:!1,kernelShape:b,strides:_,pads:v,storageOrder:w,dilations:S})};const h=(d,g,m,b)=>{const[_,v]=f(d,b,m),w=` - ${e(d[0].dims,_,` - value = max(_X(x), value); - `,"","-1e5")} - `;return Object.assign(Object.assign({},g),{output:{dims:v,type:d[0].type,textureType:p.TextureType.unpacked},shaderSource:w})},f=(d,g,m)=>{const b=d[0].dims.slice(),_=Object.hasOwnProperty.call(g,"dilations"),v=g.kernelShape.slice(),w=g.strides.slice(),S=_?g.dilations.slice():[],A=g.pads.slice();c.PoolConvUtil.adjustPoolAttributes(m,b,v,w,S,A);const O=c.PoolConvUtil.computePoolOutputShape(m,b,w,S,v,A,g.autoPad),x=Object.assign({},g);return _?Object.assign(x,{kernelShape:v,strides:w,pads:A,dilations:S,cacheKey:g.cacheKey}):Object.assign(x,{kernelShape:v,strides:w,pads:A,cacheKey:g.cacheKey}),[x,O]},l={autoPad:"",ceilMode:0,countIncludePad:!1,kernelShape:[],strides:[],pads:[],storageOrder:0,dilations:[],cacheKey:""},o={name:"GlobalMaxPool",inputNames:["X"],inputTypes:[p.TextureType.unpacked]};n.globalMaxPool=(d,g)=>(t(g),[d.run(Object.assign(Object.assign({},o),{get:()=>h(g,o,!0,l)}),g)]);const t=d=>{if(!d||d.length!==1)throw new Error("Pool ops requires 1 input.");if(d[0].type!=="float32"&&d[0].type!=="float64")throw new Error("Invalid input type.")},e=(d,g,m,b,_)=>{const v=d.length;if(g.kernelShape.length<=2){const w=g.kernelShape[g.kernelShape.length-1],S=g.strides[g.strides.length-1],A=g.pads[g.pads.length/2-1],O=g.pads[g.pads.length-1],x=d[v-1];let I="",N="",B="";if(I=A+O!==0?` - for (int i = 0; i < ${w}; i++) { - x[${v} - 1] = indices[${v} - 1] * ${S} - ${A} + i; - if (x[${v} - 1] < 0 || x[${v} - 1] >= ${x}) { - pad++; - continue; - } - ${m} - }`:` - for (int i = 0; i < ${w}; i++) { - x[${v} - 1] = indices[${v} - 1] * ${S} - ${A} + i; - ${m} - }`,g.kernelShape.length===2){const L=g.kernelShape[g.kernelShape.length-2],F=g.strides[g.strides.length-2],H=g.pads[g.pads.length/2-2],D=g.pads[g.pads.length-2],j=d[v-2];N=H+D!==0?` - for (int j = 0; j < ${L}; j++) { - x[${v} - 2] = indices[${v} - 2] * ${F} - ${H} + j; - if (x[${v} - 2] < 0 || x[${v} - 2] >= ${j}) { - pad+= ${w}; - continue; - } - `:` - for (int j = 0; j < ${L}; j++) { - x[${v} - 2] = indices[${v} - 2] * ${F} - ${H} + j; - `,B=` - } - `}return` - float process(int indices[${v}]) { - int x[${v}]; - copyVec(indices, x); - - float value = ${_}; - int pad = 0; - ${N} - ${I} - ${B} - ${b} - return value; - } - `}{const w=c.ShapeUtil.size(g.kernelShape),S=c.ShapeUtil.computeStrides(g.kernelShape),A=S.length,O=g.pads.length,x=i(A),I=r(d,"inputDims"),N=r(g.pads,"pads"),B=r(S,"kernelStrides"),L=r(g.strides,"strides");let F="";return F=g.pads.reduce((H,D)=>H+D)?` - if (x[j] >= inputDims[j] || x[j] < 0) { - pad++; - isPad = true; - break; - } - } - if (!isPad) { - ${m} - }`:` - } - ${m} - `,` - ${x} - float process(int indices[${v}]) { - int x[${v}]; - copyVec(indices, x); - int offset[${A}]; - int pads[${O}]; - int inputDims[${v}]; - int kernelStrides[${A}]; - int strides[${A}]; - ${N} - ${I} - ${L} - ${B} - - float value = ${_}; - int pad = 0; - bool isPad = false; - for (int i = 0; i < ${w}; i++) { - offsetToIndices(i, kernelStrides, offset); - isPad = false; - for (int j = ${v} - ${A}; j < ${v}; j++) { - x[j] = indices[j] * strides[j - ${v} + ${A}] - + offset[j - ${v} + ${A}] - pads[j - 2]; - ${F} - } - ${b} - - return value; - } - `}},r=(d,g)=>{let m="";for(let 
b=0;b<d.length;b++)m+=` - ${g}[${b}] = ${d[b]}; - `;return m},i=d=>` - void offsetToIndices(int offset, int[${d}] strides, out int[${d}] indices) { - if (${d} == 0) { - return; - } - for (int i = 0; i < ${d} - 1; ++i) { - indices[i] = offset / strides[i]; - offset -= indices[i] * strides[i]; - } - indices[${d} - 1] = offset; - }`},4939:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.reduceLogSumSquare=n.reduceLogSum=n.reduceProd=n.reduceMin=n.reduceMax=n.reduceMean=n.reduceSum=n.parseReduceAttributes=void 0;const u=a(246),c=a(782),p=a(2517),s=a(2039),h=(o,t,e,r,i)=>{l(t);const d={name:r,inputNames:["A"],inputTypes:[s.TextureType.unpacked]};return[o.run(Object.assign(Object.assign({},d),{cacheHint:e.cacheKey,get:()=>f(o,t,e,r,i,d)}),t)]};n.parseReduceAttributes=o=>{const t=o.attributes.getInts("axes",[]),e=o.attributes.getInt("keepdims",1)===1;return(0,u.createAttributeWithCacheKey)({axes:t,keepDims:e})};const f=(o,t,e,r,i,d)=>{const g=[],m=t[0].dims.length||1,b=[],_=p.ShapeUtil.normalizeAxes(e.axes,t[0].dims.length),v=i(t,_);let w=v[1];for(let A=0;A<t[0].dims.length;A++)_.indexOf(A)>=0||_.length===0?(e.keepDims&&g.push(1),w=` - for(int j${A} = 0; j${A} < ${t[0].dims[A]}; j${A}++) { - inputIdx[${A}] = j${A}; - ${w} - }`):(b.push(`inputIdx[${A}] = outputIdx[${g.length}];`),g.push(t[0].dims[A]));const S=` - float process(int outputIdx[${g.length||1}]) { - float value; // final result - int inputIdx[${m}]; // addressing input data - ${b.join(` -`)} - ${v[0]} // init ops for reduce max/min - ${w} - ${v[2]} // final computation for reduce mean - return value; - }`;return Object.assign(Object.assign({},d),{output:{dims:g,type:t[0].type,textureType:s.TextureType.unpacked},shaderSource:S})},l=o=>{if(!o||o.length!==1)throw new Error("Reduce op requires 1 input.");if(c.NUMBER_TYPES.indexOf(o[0].type)===-1)throw new Error("Invalid input type.")};n.reduceSum=(o,t,e)=>h(o,t,e,"ReduceSum",()=>["value = 0.0;","value += _A(inputIdx);",""]),n.reduceMean=(o,t,e)=>h(o,t,e,"ReduceMean",(r,i)=>{let d=1;for(let g=0;g<r[0].dims.length;g++)(i.indexOf(g)>=0||i.length===0)&&(d*=r[0].dims[g]);return["value = 0.0;","value += _A(inputIdx);",`value /= ${d}.;`]}),n.reduceMax=(o,t,e)=>h(o,t,e,"ReduceMax",(r,i)=>{const d=[];for(let g=0;g<r[0].dims.length;g++)(i.indexOf(g)>=0||i.length===0)&&d.push(`inputIdx[${g}] = 0;`);return[`${d.join(` -`)} -value = _A(inputIdx);`,"value = max(value, _A(inputIdx));",""]}),n.reduceMin=(o,t,e)=>h(o,t,e,"ReduceMin",(r,i)=>{const d=[];for(let g=0;g<r[0].dims.length;g++)(i.indexOf(g)>=0||i.length===0)&&d.push(`inputIdx[${g}] = 0;`);return[`${d.join(` -`)} -value = _A(inputIdx);`,"value = min(value, _A(inputIdx));",""]}),n.reduceProd=(o,t,e)=>h(o,t,e,"ReduceProd",()=>["value = 1.0;","value *= _A(inputIdx);",""]),n.reduceLogSum=(o,t,e)=>h(o,t,e,"ReduceLogSum",()=>["value = 0.0;","value += _A(inputIdx);","value = log(value);"]),n.reduceLogSumSquare=(o,t,e)=>h(o,t,e,"ReduceLogSumSquare",()=>["float t; value = 0.0;","t = _A(inputIdx); value += t * t;",""])},7019:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.isReshapeCheap=n.processDims3D=n.createPackedReshape3DProgramInfoLoader=void 0;const u=a(2517),c=a(5060),p=a(2039),s=a(2827);n.createPackedReshape3DProgramInfoLoader=(h,f,l)=>{const o=(t=>({name:"Reshape (packed)",inputTypes:[p.TextureType.packed],inputNames:["A"],cacheHint:`${t}`}))(l);return Object.assign(Object.assign({},o),{get:()=>((t,e,r,i)=>{const d=e.dims,g=i;let m="";for(let v=0;v<4;v++){let w="";switch(v){case 0:w="outputCoords = rc;";break;case 
1:w="outputCoords = ivec3(rc.x, rc.y+1, rc.z);";break;case 2:w="outputCoords = ivec3(rc.x, rc.y, rc.z+1);";break;case 3:w="outputCoords = ivec3(rc.x, rc.y+1, rc.z+1);";break;default:throw new Error}m+=` - ${w} - ${v>0?"if(outputCoords.y < rows && outputCoords.z < cols){":""} - int flattenedIndex = getFlattenedIndex(outputCoords); - - ivec3 inputRC = inputCoordsFromReshapedOutCoords(flattenedIndex); - vec2 innerDims = vec2(float(inputRC.y),float(inputRC.z)); - - result[${v}] = getChannel(getA(inputRC.x, inputRC.y, inputRC.z), innerDims); - - ${v>0?"}":""} - `}const b=(0,c.getGlsl)(t.session.backend.glContext.version),_=` - ${function(v){const w=u.ShapeUtil.computeStrides(v),S=["b","r","c"],A="index";return` - ivec3 inputCoordsFromReshapedOutCoords(int index) { - ${w.map((O,x)=>`int ${S[x]} = ${A} / ${O}; ${x===w.length-1?`int ${S[x+1]} = ${A} - ${S[x]} * ${O}`:`index -= ${S[x]} * ${O}`};`).join("")} - return ivec3(b, r, c); - } - `}(d)} - ${function(v){const w=u.ShapeUtil.computeStrides(v);return` - int getFlattenedIndex(ivec3 coords) { - // reverse y, z order - return coords.x * ${w[0]} + coords.z * ${w[1]} + coords.y; - } -`}(g)} - ${(0,s.unpackFromChannel)()} - - void main() { - ivec3 rc = getOutputCoords(); - - vec4 result = vec4(0.0); - - ivec3 outputCoords; - int rows = ${g[2]}; - int cols = ${g[1]}; - - ${m} - ${b.output} = result; - } - `;return Object.assign(Object.assign({},r),{output:{dims:g,type:e.type,textureType:p.TextureType.packed},shaderSource:_,hasMain:!0})})(h,f,o,l)})},n.processDims3D=function(h){if(h.length===0)return[1,1,1];let f=1;for(let l=0;l<h.length-2;++l)f*=h[l];return[f,h.length>1?h[h.length-2]:1,h[h.length-1]]},n.isReshapeCheap=function(h,f){let l=!1;return l=h.length===0||f.length===0||(h.length<2||f.length<2?h[h.length-1]===f[f.length-1]:h[h.length-1]===f[f.length-1]&&h[h.length-2]===f[f.length-2]),l}},718:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.reshape=void 0;const u=a(2517);n.reshape=(c,p)=>{const s=u.ShapeUtil.calculateReshapedDims(p[0].dims,p[1].integerData);return c.session.pack?[c.reshapePacked(p[0],s)]:[c.reshapeUnpacked(p[0],s)]}},2268:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseResizeAttributesV11=n.parseResizeAttributesV10=n.resize=void 0;const u=a(5060),c=a(2039),p=a(9390),s=a(2827),h=a(9793),f={name:"Resize",inputNames:["A"],inputTypes:[c.TextureType.packed]};n.resize=(r,i,d)=>((0,h.validateInputs)(i,d),[r.run(Object.assign(Object.assign({},f),{cacheHint:d.cacheKey,get:()=>l(r,i,d)}),i)]),n.parseResizeAttributesV10=r=>(0,h.parseUpsampleAttributes)(r,10),n.parseResizeAttributesV11=r=>(0,h.parseUpsampleAttributes)(r,11);const l=(r,i,d)=>{const g=(0,u.getGlsl)(r.session.backend.glContext.version),[m,b]=o(i,d);if(m.every(F=>F===1)&&d.coordinateTransformMode!=="tf_crop_and_resize")return Object.assign(Object.assign({},f),{output:{dims:b,type:i[0].type,textureType:c.TextureType.packed},hasMain:!0,shaderSource:`void main() { - vec4 v = ${g.texture2D}(X, TexCoords); - ${g.output} = v; - }`});const _=b.length;if(_<2)throw new Error(`output dimension should be at least 2, but got ${_}`);const v=b[_-2],w=b[_-1],S=i[0].dims;if(_!==S.length)throw new Error(`output dimension should match input ${S.length}, but got ${_}`);const A=S[_-2],O=S[_-1],x=m[_-2],I=m[_-1];let N="";if(d.mode!=="linear")throw new Error(`resize (packed) does not support mode: '${d.mode}'`);switch(d.coordinateTransformMode){case"asymmetric":N=` - vec4 getSourceFracIndex(ivec4 coords) { - return vec4(coords) / scaleWHWH; - } - 
`;break;case"half_pixel":N=` - vec4 getSourceFracIndex(ivec4 coords) { - return (vec4(coords) + 0.5) / scaleWHWH - 0.5; - } - `;break;case"pytorch_half_pixel":N=` - vec4 getSourceFracIndex(ivec4 coords) { - vec4 fcoords = vec4(coords); - return vec4( - ${w}.0 > 1.0 ? (fcoords.x + 0.5) / scaleWHWH.x - 0.5 : 0.0, - ${v}.0 > 1.0 ? (fcoords.y + 0.5) / scaleWHWH.y - 0.5 : 0.0, - ${w}.0 > 1.0 ? (fcoords.z + 0.5) / scaleWHWH.z - 0.5 : 0.0, - ${v}.0 > 1.0 ? (fcoords.w + 0.5) / scaleWHWH.w - 0.5 : 0.0 - ); - } - `;break;case"align_corners":N=` - vec4 getSourceFracIndex(ivec4 coords) { - vec4 resized = vec4(${w}.0 - 1.0, ${v}.0 - 1.0, ${w}.0 - 1.0, - ${v}.0 - 1.0); - vec4 original = vec4(${O}.0 - 1.0, ${A}.0 - 1.0, ${O}.0 - 1.0, - ${A}.0 - 1.0); - vec4 new_scale = original / resized; - return vec4(coords) * new_scale; - } - `;break;default:throw new Error(`resize (packed) does not support coordinateTransformMode: '${d.coordinateTransformMode}'`)}const B=(0,p.getCoordsDataType)(_),L=` - const vec2 inputWH = vec2(${A}.0, ${O}.0); - const vec4 scaleWHWH = vec4(float(${x}), float(${I}), float(${x}), float(${I})); - ${(0,s.unpackFromChannel)()} - ${N} - float getAValue(int x10, int r, int c, int d) { - return getChannel(getA(x10, r, c, d), vec2(c, d)); - } - void main() { - ${B} rc = getOutputCoords(); - - int batch = rc[0]; - int depth = rc[1]; - - // retrieve the 4 coordinates that is used in the 4 packed output values. - ivec4 coords = ivec4(rc.wz, rc.w + 1, rc.z + 1); - - // calculate the source index in fraction - vec4 sourceFrac = getSourceFracIndex(coords); - - // get the lower and upper bound of the 4 values that will be packed into one texel. - ivec4 x00 = ivec4(max(sourceFrac.xy, vec2(0.0)), min(inputWH - 1.0, ceil(sourceFrac.xy))); - ivec4 x01 = ivec4(max(sourceFrac.xw, vec2(0.0)), min(inputWH - 1.0, ceil(sourceFrac.xw))); - ivec4 x10 = ivec4(max(sourceFrac.zy, vec2(0.0)), min(inputWH - 1.0, ceil(sourceFrac.zy))); - ivec4 x11 = ivec4(max(sourceFrac.zw, vec2(0.0)), min(inputWH - 1.0, ceil(sourceFrac.zw))); - - bool hasNextRow = rc.w < ${v-1}; - bool hasNextCol = rc.z < ${w-1}; - - // pack x00, x01, x10, x11's top-left corner into one vec4 structure - vec4 topLeft = vec4( - getAValue(batch, depth, x00.x, x00.y), - hasNextCol ? getAValue(batch, depth, x01.x, x01.y) : 0.0, - hasNextRow ? getAValue(batch, depth, x10.x, x10.y) : 0.0, - (hasNextRow && hasNextCol) ? getAValue(batch, depth, x11.x, x11.y) : 0.0); - - // pack x00, x01, x10, x11's top-right corner into one vec4 structure - vec4 topRight = vec4( - getAValue(batch, depth, x00.x, x00.w), - hasNextCol ? getAValue(batch, depth, x01.x, x01.w) : 0.0, - hasNextRow ? getAValue(batch, depth, x10.x, x10.w) : 0.0, - (hasNextRow && hasNextCol) ? getAValue(batch, depth, x11.x, x11.w) : 0.0); - - // pack x00, x01, x10, x11's bottom-left corner into one vec4 structure - vec4 bottomLeft = vec4( - getAValue(batch, depth, x00.z, x00.y), - hasNextCol ? getAValue(batch, depth, x01.z, x01.y) : 0.0, - hasNextRow ? getAValue(batch, depth, x10.z, x10.y) : 0.0, - (hasNextRow && hasNextCol) ? getAValue(batch, depth, x11.z, x11.y) : 0.0); - - // pack x00, x01, x10, x11's bottom-right corner into one vec4 structure - vec4 bottomRight = vec4( - getAValue(batch, depth, x00.z, x00.w), - hasNextCol ? getAValue(batch, depth, x01.z, x01.w) : 0.0, - hasNextRow ? getAValue(batch, depth, x10.z, x10.w) : 0.0, - (hasNextRow && hasNextCol) ? 
getAValue(batch, depth, x11.z, x11.w) : 0.0); - - // calculate the interpolation fraction on u and v direction - vec4 frac = vec4(sourceFrac) - floor(sourceFrac); - vec4 clampFrac = clamp(frac, vec4(0.0), vec4(1.0)); - - vec4 top = mix(topLeft, topRight, clampFrac.ywyw); - vec4 bottom = mix(bottomLeft, bottomRight, clampFrac.ywyw); - vec4 newValue = mix(top, bottom, clampFrac.xxzz); - - ${g.output} = vec4(newValue); - } - `;return Object.assign(Object.assign({},f),{output:{dims:b,type:i[0].type,textureType:c.TextureType.packed},hasMain:!0,shaderSource:L})},o=(r,i)=>{const d=r[0].dims;let g,m=i.scales;if(m.length===0){const _=r[i.scalesInputIdx];if(_&&_.size!==0){if(r[i.sizesInputIdx])throw new Error("Only one of scales or sizes must be provided as input.");m=t(_,i.mode,i.isResize)}else{const v=r[i.sizesInputIdx];if(!v||v.size===0)throw new Error("Either scales or sizes MUST be provided as input.");g=Array.from(v.integerData),m=e(g,d,i.mode,i.isResize)}}else if(r[i.sizesInputIdx])throw new Error("Only one of scales or sizes must be provided as input.");const b=g||d.map((_,v)=>Math.floor(_*m[v]));return[m,b]},t=(r,i,d)=>{const g=Array.from(r.floatData);return(0,h.scalesValidation)(g,i,d),g},e=(r,i,d,g)=>{const m=i.length,b=new Array(m);for(let _=0,v=m;_<v;_++)if(i[_]===0){if(r[_]!==0)throw new Error("Input dim is zero but required output dim is non-zero.");b[_]=1}else b[_]=r[_]/i[_];return(0,h.scalesValidation)(b,d,g),b}},8117:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.shape=void 0;const u=a(9162);n.shape=(p,s)=>(c(s),[new u.Tensor([s[0].dims.length],"int32",void 0,void 0,new Int32Array(s[0].dims))]);const c=p=>{if(!p||p.length!==1)throw new Error("Shape requires 1 input.")}},2278:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.sliceV10=n.parseSliceAttributes=n.slice=void 0;const u=a(246),c=a(782),p=a(2517),s=a(2039),h={name:"Slice",inputNames:["A"],inputTypes:[s.TextureType.unpacked]};n.slice=(e,r,i)=>(l(r),[e.run(Object.assign(Object.assign({},h),{cacheHint:i.cacheKey,get:()=>f(e,r[0],i)}),r)]),n.parseSliceAttributes=e=>{const r=e.attributes.getInts("starts"),i=e.attributes.getInts("ends"),d=e.attributes.getInts("axes",[]);return(0,u.createAttributeWithCacheKey)({starts:r,ends:i,axes:d})};const f=(e,r,i)=>{const d=i.axes.length===0?r.dims.slice(0).map((S,A)=>A):i.axes,g=p.ShapeUtil.normalizeAxes(d,r.dims.length),m=i.starts.map((S,A)=>S>r.dims[g[A]]-1?r.dims[g[A]]:p.ShapeUtil.normalizeAxis(S,r.dims[g[A]])),b=i.ends.map((S,A)=>S>r.dims[g[A]]-1?r.dims[g[A]]:p.ShapeUtil.normalizeAxis(S,r.dims[g[A]])),_=r.dims.slice(),v=[];for(let S=0;S<g.length;S++)_[g[S]]=b[S]-m[S],m[S]>0&&v.push(`outputIdx[${g[S]}] += ${m[S]};`);const w=` - float process(int outputIdx[${_.length}]) { - ${v.join(` - `)} - return _A(outputIdx); - }`;return Object.assign(Object.assign({},h),{output:{dims:_,type:r.type,textureType:s.TextureType.unpacked},shaderSource:w})},l=e=>{if(!e||e.length!==1)throw new Error("Slice requires 1 input.");if(c.NUMBER_TYPES.indexOf(e[0].type)===-1)throw new Error("Invalid input type.")};n.sliceV10=(e,r)=>{t(r);const i=o(e,r);return[e.run(Object.assign(Object.assign({},h),{cacheHint:i.cacheKey,get:()=>f(e,r[0],i)}),[r[0]])]};const o=(e,r)=>{if(!e.session.isInitializer(r[1].dataId)||!e.session.isInitializer(r[2].dataId)||r.length>=4&&!e.session.isInitializer(r[3].dataId)||r.length>=5&&!e.session.isInitializer(r[4].dataId))throw new Error("dynamic slice attributes are not allowed");if(r.length>=5&&r[4].integerData.some(m=>m!==1))throw new Error("currently non-1 steps 
is not supported for Slice");const i=Array.from(r[1].integerData),d=Array.from(r[2].integerData),g=r.length>=4?Array.from(r[3].integerData):[];return{starts:i,ends:d,axes:g,cacheKey:`${g};${i};${d}`}},t=e=>{if(!e||e.length<3||e.length>5)throw new Error("Invalid input number.");if(e[1].type!=="int32"||e[1].dims.length!==1)throw new Error("Invalid input type.");if(e[2].type!=="int32"||e[2].dims.length!==1)throw new Error("Invalid input type.");if(e.length>=4&&(e[3].type!=="int32"||e[3].dims.length!==1))throw new Error("Invalid input type.");if(e.length>=5&&(e[4].type!=="int32"||e[4].dims.length!==1))throw new Error("Invalid input type.")}},5524:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.softmaxV13=n.parseSoftmaxAttributesV13=n.parseSoftmaxAttributes=n.softmax=void 0;const u=a(246),c=a(2517),p=a(5060),s=a(2039),h=a(3738),f={name:"SoftmaxComputeMax",inputNames:["A"],inputTypes:[s.TextureType.unpacked]},l={name:"SoftmaxComputeScale",inputNames:["A","Max"],inputTypes:[s.TextureType.unpacked,s.TextureType.unpacked]},o={name:"SoftMax",inputNames:["A","Max","Norm"],inputTypes:[s.TextureType.unpacked,s.TextureType.unpacked,s.TextureType.unpacked]};n.softmax=(g,m,b)=>{d(m);const _=m[0].dims.slice(),v=c.ShapeUtil.normalizeAxis(b.axis,_.length),w=c.ShapeUtil.sizeToDimension(_,v),S=c.ShapeUtil.sizeFromDimension(_,v);return t(g,m,b,w,S)},n.parseSoftmaxAttributes=g=>(0,u.createAttributeWithCacheKey)({axis:g.attributes.getInt("axis",1)}),n.parseSoftmaxAttributesV13=g=>(0,u.createAttributeWithCacheKey)({axis:g.attributes.getInt("axis",-1)}),n.softmaxV13=(g,m,b)=>{d(m);const _=m[0].dims.slice(),v=c.ShapeUtil.normalizeAxis(b.axis,_.length),w=_.length,S=v!==w-1,A=[];let O,x=[],I=[];S&&(x=Array.from({length:w}).map((F,H)=>H),x[v]=w-1,x[w-1]=v,x.map(F=>A.push(_[F])),O=(0,u.createAttributeWithCacheKey)({perm:x}),I=(0,h.transpose)(g,m,O));const N=S?c.ShapeUtil.sizeToDimension(A,w-1):c.ShapeUtil.sizeToDimension(_,w-1),B=S?c.ShapeUtil.sizeFromDimension(A,w-1):c.ShapeUtil.sizeFromDimension(_,w-1),L=t(g,S?I:m,b,N,B);return S?(0,h.transpose)(g,L,O):L};const t=(g,m,b,_,v)=>{const w=e(g,m[0],_,v,[_]),S=g.run(Object.assign(Object.assign({},f),{cacheHint:b.cacheKey,get:()=>w}),m),A=r(g,m[0],_,v,w.output.dims,[_]),O=g.run(Object.assign(Object.assign({},l),{cacheHint:b.cacheKey,get:()=>A}),[m[0],S]),x=i(g,m[0],_,v,w.output.dims,A.output.dims);return[g.run(Object.assign(Object.assign({},o),{cacheHint:b.cacheKey,get:()=>x}),[m[0],S,O])]},e=(g,m,b,_,v)=>{const[w,S]=g.calculateTextureWidthAndHeight(m.dims,s.TextureType.unpacked),A=v.length;if(b<1||_<1)throw new Error("Logical row count N and feature count D must be greater than or equal to 1");if(v.length!==1)throw new Error("Dimensionality of the output should be 1");if(v[0]!==b)throw new Error("Shape of the output should be equal to logical row count");const O=(0,p.getGlsl)(g.session.backend.glContext.version),x=` - float process(int[${A}] indices) { - int logical_row_start_offset = indices[0] * ${_}; - - float max = getColorAsFloat(${O.texture2D}(A, offsetToCoords(logical_row_start_offset, ${w}, - ${S} ))); - for(int i=1; i<${_}; ++i) - { - float current = getColorAsFloat(${O.texture2D}(A, offsetToCoords(logical_row_start_offset + i, - ${w}, ${S}))); - if(current > max) - max = current; - } - - return max; - }`;return Object.assign(Object.assign({},f),{output:{dims:v,type:m.type,textureType:s.TextureType.unpacked},shaderSource:x})},r=(g,m,b,_,v,w)=>{const[S,A]=g.calculateTextureWidthAndHeight(m.dims,s.TextureType.unpacked),O=w.length;if(b<1||_<1)throw new 
Error("Logical row count N and feature count D must be greater than or equal to 1");if(w.length!==1)throw new Error("Dimensionality of the output should be 1");if(w[0]!==b)throw new Error("Shape of the output should be equal to logical row count");if(v.length!==1)throw new Error("Dimensionality of the intermediate results should be 1");if(v[0]!==b)throw new Error("Shape of the intermediate results should be equal to logical row count");const x=` - float process(int[${O}] indices) { - int logical_row_start_offset = indices[0] * ${_}; - - float norm_factor = 0.0; - float max = _Max(indices); - for(int i=0; i<${_}; ++i) - { - norm_factor += exp(getColorAsFloat(${(0,p.getGlsl)(g.session.backend.glContext.version).texture2D}(A, offsetToCoords(logical_row_start_offset + i, - ${S}, ${A}))) - max); - } - - return norm_factor; - }`;return Object.assign(Object.assign({},l),{output:{dims:w,type:m.type,textureType:s.TextureType.unpacked},shaderSource:x})},i=(g,m,b,_,v,w)=>{const[S,A]=g.calculateTextureWidthAndHeight(m.dims,s.TextureType.unpacked),O=m.dims.length;if(b<1||_<1)throw new Error("Logical row count N and feature count D must be greater than or equal to 1");if(v.length!==1||w.length!==1)throw new Error("Dimensionality of the intermediate results should be 1");if(v[0]!==b||w[0]!==b)throw new Error("Shape of the intermediate results should be equal to logical row count");const x=` - float process(int[${O}] indices) { - - // get offset of current logical tensor index from the 2-D texture coordinates (TexCoords) - int offset = coordsToOffset(TexCoords, ${S}, ${A}); - - //determine the logical row for this index - int logical_row_index[1]; - logical_row_index[0] = offset / ${_}; - - float norm_factor = _Norm(logical_row_index); - - // avoid possible division by 0 - // if norm_facor is 0, all elements are zero - // if so, return 0 - if(norm_factor == 0.0) - return 0.0; - - return exp(_A(indices) - _Max(logical_row_index)) / norm_factor; - }`;return Object.assign(Object.assign({},o),{output:{dims:m.dims,type:m.type,textureType:s.TextureType.unpacked},shaderSource:x})},d=g=>{if(!g||g.length!==1)throw new Error("Softmax requires 1 input.");if(g[0].type!=="float32"&&g[0].type!=="float64")throw new Error("Invalid input type")}},5975:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseSplitAttributes=n.split=void 0;const u=a(246),c=a(2517),p=a(2039),s={name:"Split",inputNames:["A"],inputTypes:[p.TextureType.unpacked]};n.split=(o,t,e)=>{l(t);const r=c.ShapeUtil.normalizeAxis(e.axis,t[0].dims.length),i=h(o,t,r,e),d=[];for(let g=0;g<i;++g)d.push(o.run(Object.assign(Object.assign({},s),{cacheHint:`${e.cacheKey};${g}`,get:()=>f(o,t[0],e,r,g)}),t));return d},n.parseSplitAttributes=o=>{const t=o.attributes.getInt("axis",0),e=o.attributes.getInts("split",[]),r=o.outputs.length;return(0,u.createAttributeWithCacheKey)({axis:t,split:e,numOutputs:r})};const h=(o,t,e,r)=>{const[,i]=c.SplitUtil.splitShape(t[0].dims,e,r.split,r.numOutputs);return i.length},f=(o,t,e,r,i)=>{const[d,g]=c.SplitUtil.splitShape(t.dims,r,e.split,e.numOutputs),m=g[i],b=d[i],_=` - float process(int indices[${b.length}]) { - indices[${r}] += ${m}; - return _A(indices); - } - `;return Object.assign(Object.assign({},s),{cacheHint:`${e.cacheKey}:${i}`,output:{dims:b,type:t.type,textureType:p.TextureType.unpacked},shaderSource:_})},l=o=>{if(!o||o.length!==1)throw new Error("Split requires one 
input.");if(o[0].type!=="int8"&&o[0].type!=="uint8"&&o[0].type!=="int16"&&o[0].type!=="uint16"&&o[0].type!=="int32"&&o[0].type!=="uint32"&&o[0].type!=="float32"&&o[0].type!=="float64"&&o[0].type!=="bool")throw new Error("Invalid input type.")}},3933:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseSqueezeAttributes=n.squeezeV13=n.squeeze=void 0;const u=a(2517);n.squeeze=(s,h,f)=>{c(h);const l=u.ShapeUtil.squeezeShape(h[0].dims,f);return[s.reshapeUnpacked(h[0],l)]},n.squeezeV13=(s,h)=>(p(h),(0,n.squeeze)(s,[h[0]],Array.from(h[1].integerData))),n.parseSqueezeAttributes=s=>s.attributes.getInts("axes");const c=s=>{if(!s||s.length!==1)throw new Error("Squeeze requires 1 input.");if(s[0].type==="string")throw new Error("invalid input tensor types.")},p=s=>{if(!s||s.length!==2)throw new Error("Squeeze requires 2 inputs.");if(s[1].type!=="int32")throw new Error("Invalid input type.")}},6558:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.sum=void 0;const u=a(5060),c=a(2039);n.sum=(h,f)=>{s(f);const l={name:"Sum",inputNames:f.map((o,t)=>`X${t}`),inputTypes:new Array(f.length).fill(c.TextureType.unpacked)};return[h.run(Object.assign(Object.assign({},l),{get:()=>p(h,f,l)}),f)]};const p=(h,f,l)=>{const o=(0,u.getGlsl)(h.session.backend.glContext.version),t=f[0].dims.slice(),e=` - void main() { - vec4 result = ${f.map((r,i)=>`${o.texture2D}(X${i},TexCoords)`).join(" + ")}; - ${o.output} = result; - } - `;return Object.assign(Object.assign({},l),{output:{dims:t,type:f[0].type,textureType:c.TextureType.unpacked},hasMain:!0,shaderSource:e})},s=h=>{if(!h||h.length===0)throw new Error("Sum requires inputs.");const f=h[0].dims.length;for(let l=1;l<h.length;l++){if(f!==h[l].dims.length)throw new Error("Input shapes are mismatched.");for(let o=0;o<f;o++)if(h[0].dims[o]!==h[l].dims[o])throw new Error("Input shapes are not matched.")}if(h[0].type!=="float32"&&h[0].type!=="float64")throw new Error("Invalid input type.");for(let l=1;l<h.length;l++)if(h[0].type!==h[l].type)throw new Error("Input types are not matched.")}},5723:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.tile=void 0;const u=a(782),c=a(2039);n.tile=(h,f)=>{s(f);const l={name:"Tile",inputNames:["A"],inputTypes:[c.TextureType.unpacked]};return[h.run(Object.assign(Object.assign({},l),{get:()=>p(h,f,l)}),f)]};const p=(h,f,l)=>{const o=f[0].dims.slice(),t=new Array(o.length),e=[];for(let d=0;d<o.length;d++)t[d]=o[d]*f[1].numberData[d],e.push(`inputIdx[${d}] = int(mod(float(outputIdx[${d}]), ${o[d]}.));`);const r=t.length,i=` - float process(int outputIdx[${r}]) { - int inputIdx[${r}]; - ${e.join(` -`)} - return _A(inputIdx); - } - `;return Object.assign(Object.assign({},l),{output:{dims:t,type:f[0].type,textureType:c.TextureType.unpacked},shaderSource:i})},s=h=>{if(!h||h.length!==2)throw new Error("Tile requires 2 input.");if(h[1].dims.length!==1)throw new Error("The second input shape must 1 dimension.");if(h[1].dims[0]!==h[0].dims.length)throw new Error("Invalid input shape.");if(u.NUMBER_TYPES.indexOf(h[0].type)===-1)throw new Error("Invalid input type.");if(h[1].type!=="int32"&&h[1].type!=="int16")throw new Error("Invalid repeat type.")}},3738:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseTransposeAttributes=n.transpose=void 0;const 
u=a(246),c=a(2517),p=a(2039),s={name:"Transpose",inputNames:["A"],inputTypes:[p.TextureType.unpacked]};n.transpose=(e,r,i)=>(t(r),[e.run(Object.assign(Object.assign({},s),{cacheHint:i.cacheKey,get:()=>h(e,r[0],i.perm)}),r)]),n.parseTransposeAttributes=e=>(0,u.createAttributeWithCacheKey)({perm:e.attributes.getInts("perm",[])});const h=(e,r,i)=>{const d=r.dims;i=f(d,i);const g=l(d,i),m=d.length,b=` - ${o("perm",i,m)} - float process(int indices[${m}]) { - int a[${m}]; - perm(a, indices); - return _A(a); - }`;return Object.assign(Object.assign({},s),{output:{dims:g,type:r.type,textureType:p.TextureType.unpacked},shaderSource:b})},f=(e,r)=>(r&&r.length!==e.length&&(r=[...e.keys()].reverse()),r),l=(e,r)=>(r=f(e,r),c.ShapeUtil.sortBasedOnPerm(e,r)),o=(e,r,i)=>{const d=[];d.push(`void ${e}(out int a[${i}], int src[${i}]) {`);for(let g=0;g<i;++g)d.push(` a[${r[g]}]=src[${g}];`);return d.push(" }"),d.join(` -`)},t=e=>{if(!e||e.length!==1)throw new Error("Transpose requires 1 input.");if(e[0].type!=="float32"&&e[0].type!=="float64")throw new Error("input should be float tensor")}},8710:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.encodeAsUint8=void 0;const u=a(5060),c=a(2039);n.encodeAsUint8=(p,s)=>{const h=s.shape,f=(0,u.getGlsl)(p.session.backend.glContext.version),l=` - const float FLOAT_MAX = 1.70141184e38; - const float FLOAT_MIN = 1.17549435e-38; - - bool isNaN(float val) { - return (val < 1.0 || 0.0 < val || val == 0.0) ? false : true; - } - - highp vec4 encodeAsUint8(highp float v) { - if (isNaN(v)) { - return vec4(255, 255, 255, 255); - } - - highp float av = abs(v); - - if(av < FLOAT_MIN) { - return vec4(0.0, 0.0, 0.0, 0.0); - } else if(v > FLOAT_MAX) { - return vec4(0.0, 0.0, 128.0, 127.0) / 255.0; - } else if(v < -FLOAT_MAX) { - return vec4(0.0, 0.0, 128.0, 255.0) / 255.0; - } - - highp vec4 c = vec4(0,0,0,0); - - highp float e = floor(log2(av)); - highp float m = exp2(fract(log2(av))) - 1.0; - - c[2] = floor(128.0 * m); - m -= c[2] / 128.0; - c[1] = floor(32768.0 * m); - m -= c[1] / 32768.0; - c[0] = floor(8388608.0 * m); - - highp float ebias = e + 127.0; - c[3] = floor(ebias / 2.0); - ebias -= c[3] * 2.0; - c[2] += floor(ebias) * 128.0; - - c[3] += 128.0 * step(0.0, -v); - - return c / 255.0; - } - - void main() { - float value = ${f.texture2D}(X,TexCoords).r; - ${f.output} = encodeAsUint8(value); - }`,o={name:"Uint8Encode",inputTypes:[c.TextureType.unpacked],inputNames:["X"],output:{dims:h,type:s.tensor.type,textureType:c.TextureType.downloadUint8AsFloat},shaderSource:l,hasMain:!0};return p.executeProgram(o,[s.tensor])}},4909:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.tanh=n.tan=n.sqrt=n.sin=n.sigmoid=n.relu=n.not=n.neg=n.log=n.parseLeakyReluAttributes=n.leakyRelu=n.identity=n.floor=n.exp=n.parseEluAttributes=n.elu=n.cos=n.ceil=n.clipV11=n.parseClipAttributes=n.clip=n.atan=n.asin=n.acos=n.abs=n.glslTanh=n.glslTan=n.glslSqrt=n.glslSigmoid=n.glslRelu=n.glslSin=n.glslNot=n.glslNeg=n.glslLog=n.glslLeakyRelu=n.glslIdentity=n.glslClip=n.glslFloor=n.glslExp=n.glslElu=n.glslCos=n.glslCeil=n.glslAtan=n.glslAsin=n.glslAcos=n.glslAbs=void 0;const u=a(246),c=a(2517),p=a(8520),s=a(5060),h=a(2039);function f(){return L("abs")}function l(){return L("acos")}function o(){return L("asin")}function t(){return L("atan")}function e(){return L("ceil")}function r(){return L("cos")}function i(D){const j="elu";return{body:` - const float alpha = float(${D}); - - float ${j}_(float a) { - return a >= 0.0 ? 
a: (exp(a) - 1.0) * alpha; - } - vec4 ${j}_(vec4 v) { - return vec4(${j}_(v.x), ${j}_(v.y), ${j}_(v.z), ${j}_(v.w)); - } - `,name:j,type:p.FunctionType.ValueBased}}function d(){return L("exp")}function g(){return L("floor")}function m(D,j){const Z="clip";return{body:` - const float min = float(${D}); - const float max = float(${j}); - - float ${Z}_(float a) { - return clamp(a, min, max); - } - vec4 ${Z}_(vec4 v) { - return clamp(v, min, max); - } - `,name:Z,type:p.FunctionType.ValueBased}}function b(){const D="indentity";return{body:` - float ${D}_(float a) { - return a; - } - vec4 ${D}_(vec4 v) { - return v; - } - `,name:D,type:p.FunctionType.ValueBased}}function _(D){const j="leakyRelu";return{body:` - const float alpha = float(${D}); - - float ${j}_(float a) { - return a < 0.0 ? a * alpha : a; - } - vec4 ${j}_(vec4 v) { - return vec4(${j}_(v.x), ${j}_(v.y), ${j}_(v.z), ${j}_(v.w)); - } - `,name:j,type:p.FunctionType.ValueBased}}function v(){return L("log")}function w(){const D="neg";return{body:` - float ${D}_(float a) { - return -a; - } - vec4 ${D}_(vec4 v) { - return -v; - } - `,name:D,type:p.FunctionType.ValueBased}}function S(){const D="not";return{body:` - float ${D}_(float a) { - return float( ! bool(a) ); - } - bool ${D}_(bool a) { - return !a; - } - vec4 ${D}_(vec4 v) { - return vec4(!bool(v.x), !bool(v.y), !bool(v.z), !bool(v.w)); - } - bvec4 ${D}_(bvec4 v) { - return bvec4(!v.x, !v.y, !v.z, !v.w); - } - `,name:D,type:p.FunctionType.ValueBased}}function A(){return L("sin")}function O(){const D="relu";return{body:` - float ${D}_(float a) { - return max( a, 0.0 ); - } - vec4 ${D}_(vec4 v) { - return max( v, 0.0 ); - } - `,name:D,type:p.FunctionType.ValueBased}}function x(){const D="sigmoid";return{body:` - float ${D}_(float a) { - return 1.0 / (1.0 + exp(-a)); - } - vec4 ${D}_(vec4 v) { - return 1.0 / (1.0 + exp(-v)); - } - `,name:D,type:p.FunctionType.ValueBased}}function I(){return L("sqrt")}function N(){return L("tan")}function B(){const D="tanh";return{body:` - float ${D}_(float a) { - a = clamp(a, -10., 10.); - a = exp(2.*a); - return (a - 1.) / (a + 1.); - } - vec4 ${D}_(vec4 v) { - v = clamp(v, -10., 10.); - v = exp(2.*v); - return (v - 1.) 
/ (v + 1.); - } - `,name:D,type:p.FunctionType.ValueBased}}function L(D){return{body:` - float ${D}_(float a) { - return ${D}(a); - } - vec4 ${D}_(vec4 v) { - return ${D}(v); - } - `,name:D,type:p.FunctionType.ValueBased}}n.glslAbs=f,n.glslAcos=l,n.glslAsin=o,n.glslAtan=t,n.glslCeil=e,n.glslCos=r,n.glslElu=i,n.glslExp=d,n.glslFloor=g,n.glslClip=m,n.glslIdentity=b,n.glslLeakyRelu=_,n.glslLog=v,n.glslNeg=w,n.glslNot=S,n.glslSin=A,n.glslRelu=O,n.glslSigmoid=x,n.glslSqrt=I,n.glslTan=N,n.glslTanh=B;const F=(D,j,Z,X)=>{const J=D.session.pack?h.TextureType.packed:h.TextureType.unpacked,ee={name:Z.name,inputTypes:[J],inputNames:["A"],cacheHint:X};return Object.assign(Object.assign({},ee),{get:()=>((ue,Ae,ve,oe)=>{const _e=ue.session.pack?h.TextureType.packed:h.TextureType.unpacked,be=(0,s.getGlsl)(ue.session.backend.glContext.version);return Object.assign(Object.assign({},Ae),{output:{dims:ve.dims,type:ve.type,textureType:_e},shaderSource:` - ${oe.body} - void main() { - vec4 v = ${be.texture2D}(A, TexCoords); - v = ${oe.name}_(v); - ${be.output} = v; - } - `,hasMain:!0})})(D,ee,j,Z)})};n.abs=(D,j)=>[D.run(F(D,j[0],f()),j)],n.acos=(D,j)=>[D.run(F(D,j[0],l()),j)],n.asin=(D,j)=>[D.run(F(D,j[0],o()),j)],n.atan=(D,j)=>[D.run(F(D,j[0],t()),j)],n.clip=(D,j,Z)=>[D.run(F(D,j[0],m(Z.min,Z.max),Z.cacheKey),j)],n.parseClipAttributes=D=>(0,u.createAttributeWithCacheKey)({min:D.attributes.getFloat("min",c.MIN_CLIP),max:D.attributes.getFloat("max",c.MAX_CLIP)}),n.clipV11=(D,j)=>{const Z=H(D,j);return(0,n.clip)(D,[j[0]],Z)};const H=(D,j)=>{if(j.length>=3&&(!D.session.isInitializer(j[1].dataId)||!D.session.isInitializer(j[2].dataId)))throw new Error("dynamic clip attributes are not allowed");const Z=j.length>=3?j[1].numberData[0]:c.MIN_CLIP,X=j.length>=3?j[2].numberData[0]:c.MAX_CLIP;return(0,u.createAttributeWithCacheKey)({min:Z,max:X})};n.ceil=(D,j)=>[D.run(F(D,j[0],e()),j)],n.cos=(D,j)=>[D.run(F(D,j[0],r()),j)],n.elu=(D,j,Z)=>[D.run(F(D,j[0],i(Z.alpha),Z.cacheKey),j)],n.parseEluAttributes=D=>(0,u.createAttributeWithCacheKey)({alpha:D.attributes.getFloat("alpha",1)}),n.exp=(D,j)=>[D.run(F(D,j[0],d()),j)],n.floor=(D,j)=>[D.run(F(D,j[0],g()),j)],n.identity=(D,j)=>[D.run(F(D,j[0],b()),j)],n.leakyRelu=(D,j,Z)=>[D.run(F(D,j[0],_(Z.alpha),Z.cacheKey),j)],n.parseLeakyReluAttributes=D=>(0,u.createAttributeWithCacheKey)({alpha:D.attributes.getFloat("alpha",.01)}),n.log=(D,j)=>[D.run(F(D,j[0],v()),j)],n.neg=(D,j)=>[D.run(F(D,j[0],w()),j)],n.not=(D,j)=>[D.run(F(D,j[0],S()),j)],n.relu=(D,j)=>[D.run(F(D,j[0],O()),j)],n.sigmoid=(D,j)=>[D.run(F(D,j[0],x()),j)],n.sin=(D,j)=>[D.run(F(D,j[0],A()),j)],n.sqrt=(D,j)=>[D.run(F(D,j[0],I()),j)],n.tan=(D,j)=>[D.run(F(D,j[0],N()),j)],n.tanh=(D,j)=>[D.run(F(D,j[0],B()),j)]},5611:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createUnpackProgramInfoLoader=n.createUnpackProgramInfo=void 0;const u=a(5060),c=a(2039),p=a(9390),s=a(2827),h={name:"unpack",inputNames:["A"],inputTypes:[c.TextureType.packed]};n.createUnpackProgramInfo=(f,l)=>{const o=l.dims.length,t=(0,s.getChannels)("rc",o),e=t.slice(-2),r=(0,p.getCoordsDataType)(o),i=(0,s.unpackFromChannel)(),d=l.dims.length===0?"":function(b,_){if(b===1)return"rc";let v="";for(let w=0;w<b;w++)v+=_[w],w<b-1&&(v+=",");return v}(o,t),g=o<=1?"rc":`vec2(${e.join(",")})`,m=` - ${i} - void main() { - ${r} rc = getOutputCoords(); - - // Sample the texture with the coords to get the rgba channel value. 
- vec4 packedInput = getA(${d}); - - ${(0,u.getGlsl)(f.session.backend.glContext.version).output} = vec4(getChannel(packedInput, ${g}), 0, 0, 0); - } - `;return Object.assign(Object.assign({},h),{hasMain:!0,output:{dims:l.dims,type:l.type,textureType:c.TextureType.unpacked},shaderSource:m})},n.createUnpackProgramInfoLoader=(f,l)=>Object.assign(Object.assign({},h),{get:()=>(0,n.createUnpackProgramInfo)(f,l)})},8428:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.parseUnsqueezeAttributes=n.unsqueezeV13=n.unsqueeze=void 0;const u=a(2517);n.unsqueeze=(s,h,f)=>{c(h);const l=u.ShapeUtil.unsqueezeShape(h[0].dims,f);return[s.reshapeUnpacked(h[0],l)]},n.unsqueezeV13=(s,h)=>(p(h),(0,n.unsqueeze)(s,[h[0]],Array.from(h[1].integerData))),n.parseUnsqueezeAttributes=s=>s.attributes.getInts("axes");const c=s=>{if(!s||s.length!==1)throw new Error("Unsqueeze requires 1 input.");if(s[0].type==="string")throw new Error("invalid input tensor types.")},p=s=>{if(!s||s.length!==2)throw new Error("Unsqueeze requires 2 inputs.");if(s[1].type!=="int32")throw new Error("Invalid input type.")}},9793:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.scalesValidation=n.validateInputs=n.parseUpsampleAttributes=n.parseUpsampleAttributesV9=n.parseUpsampleAttributesV7=n.upsample=void 0;const u=a(246),c=a(5060),p=a(2039),s={name:"Upsample",inputNames:["X"],inputTypes:[p.TextureType.unpacked]};n.upsample=(f,l,o)=>((0,n.validateInputs)(l,o),[f.run(Object.assign(Object.assign({},s),{cacheHint:o.cacheKey,get:()=>h(f,l,o)}),l)]),n.parseUpsampleAttributesV7=f=>(0,n.parseUpsampleAttributes)(f,7),n.parseUpsampleAttributesV9=f=>(0,n.parseUpsampleAttributes)(f,9),n.parseUpsampleAttributes=(f,l)=>{const o=l>=10,t=f.attributes.getString("mode","nearest");if(t!=="nearest"&&t!=="linear"&&(l<11||t!=="cubic"))throw new Error(`unrecognized mode: ${t}`);let e=[];l<9&&(e=f.attributes.getFloats("scales"),(0,n.scalesValidation)(e,t,o));const r=f.attributes.getFloat("extrapolation_value",0),i=l>10?f.attributes.getString("coordinate_transformation_mode","half_pixel"):"asymmetric";if(["asymmetric","pytorch_half_pixel","tf_half_pixel_for_nn","align_corners","tf_crop_and_resize","half_pixel"].indexOf(i)===-1)throw new Error(`coordinate_transform_mode '${i}' is not supported`);const d=i==="tf_crop_and_resize",g=d,m=t==="nearest"&&l>=11?f.attributes.getString("nearest_mode","round_prefer_floor"):"";if(["round_prefer_floor","round_prefer_ceil","floor","ceil",""].indexOf(m)===-1)throw new Error(`nearest_mode '${m}' is not supported`);const b=f.attributes.getFloat("cubic_coeff_a",-.75),_=f.attributes.getInt("exclude_outside",0)!==0;if(_&&t!=="cubic")throw new Error("exclude_outside can be set to 1 only when mode is CUBIC.");const v=l<11||t==="nearest"&&i==="asymmetric"&&m==="floor";let w=0,S=0,A=0;return l>10?f.inputs.length>2?(w=1,S=2,A=3):(S=1,A=2):l===9&&(S=1),(0,u.createAttributeWithCacheKey)({opset:l,isResize:o,mode:t,scales:e,extrapolationValue:r,coordinateTransformMode:i,useExtrapolation:g,needRoiInput:d,nearestMode:m,cubicCoefficientA:b,excludeOutside:_,useNearest2xOptimization:v,roiInputIdx:w,scalesInputIdx:S,sizesInputIdx:A})};const h=(f,l,o)=>{const t=(0,c.getGlsl)(f.session.backend.glContext.version),[e,r]=f.calculateTextureWidthAndHeight(l[0].dims,p.TextureType.unpacked),i=l[0].dims.map((A,O)=>Math.floor(A*o.scales[O])),[d,g]=f.calculateTextureWidthAndHeight(i,p.TextureType.unpacked),m=i.length,b=new Array(m),_=new Array(m);let v=` - int output_pitches[${m}]; - int input_pitches[${m}]; - `;for(let 
A=m-1;A>=0;A--)b[A]=A===m-1?1:b[A+1]*i[A+1],_[A]=A===m-1?1:_[A+1]*l[0].dims[A+1],v+=` - output_pitches[${A}] = ${b[A]}; - input_pitches[${A}] = ${_[A]}; - `;const w=` - float getInputFloat(int index) { - vec2 coords = offsetToCoords(index, ${e}, ${r}); - float value = getColorAsFloat(${t.texture2D}(X, coords)); - return value; - } - `,S=o.mode==="nearest"?` - ${w} - float process(int indices[${m}]) { - int input_index = 0; - int output_index = coordsToOffset(TexCoords, ${d}, ${g}); - - ${v} - - int d, m; - for (int dim = 0; dim < ${m}; ++dim) { - d = output_index / output_pitches[dim]; - m = output_index - d * output_pitches[dim]; - output_index = m; - - if (scales[dim] != 1 && d > 0) { - int d2 = d / scales[dim]; - m = d - d2 * scales[dim]; - d = d2; - } - input_index += input_pitches[dim] * d; - } - - return getInputFloat(input_index); - }`:m===4?` - ${w} - float process(int indices[4]) { - int input_index = 0; - int output_index = coordsToOffset(TexCoords, ${d}, ${g}); - - ${v} - - int m; - int index_of_dim0, index_of_dim1, index_of_dim2, index_of_dim3; - index_of_dim0 = output_index / output_pitches[0]; - m = output_index - index_of_dim0 * output_pitches[0]; - index_of_dim1 = m / output_pitches[1]; - m = m - index_of_dim1 * output_pitches[1]; - index_of_dim2 = m / output_pitches[2]; - m = m - index_of_dim2 * output_pitches[2]; - index_of_dim3 = m; - - int index_of_input_dim2, index_of_input_dim3, x_offset, y_offset; - index_of_input_dim2 = index_of_dim2 / scales[2]; - y_offset = index_of_dim2 - index_of_input_dim2 * scales[2]; - index_of_input_dim3 = index_of_dim3 / scales[3]; - x_offset = index_of_dim3 - index_of_input_dim3 * scales[3]; - - input_index = index_of_dim0 * input_pitches[0] + - index_of_dim1 * input_pitches[1] + - index_of_input_dim2 * input_pitches[2] + - index_of_input_dim3; - - float x00 = getInputFloat(input_index); - float x10, x01, x11; - - bool end_of_dim2 = false; - if (index_of_input_dim2 == (${l[0].dims[2]} - 1)) { - // It's the end in dimension 2 - x01 = x00; - end_of_dim2 = true; - } else { - x01 = getInputFloat(input_index + input_pitches[2]); - } - - if (index_of_input_dim3 == (input_pitches[2] - 1)) { - // It's the end in dimension 3 - x10 = x00; - x11 = x01; - } - else { - x10 = getInputFloat(input_index + 1); - x11 = end_of_dim2 ? 
x10 : getInputFloat(input_index + input_pitches[2] + 1); - } - - float y0 = x00 + float(y_offset) * (x01 - x00) / float(scales[2]); - float y1 = x10 + float(y_offset) * (x11 - x10) / float(scales[2]); - return y0 + float(x_offset) * (y1 - y0) / float(scales[3]); - }`:` - ${w} - float process(int indices[2]) { - int input_index = 0; - int output_index = coordsToOffset(TexCoords, ${d}, ${g}); - - ${v} - - int m; - int index_of_dim0, index_of_dim1; - index_of_dim0 = output_index / output_pitches[0]; - m = output_index - index_of_dim0 * output_pitches[0]; - index_of_dim1 = m; - - int index_of_input_dim0, index_of_input_dim1, x_offset, y_offset; - index_of_input_dim0 = index_of_dim0 / scales[0]; - y_offset = index_of_dim0 - index_of_input_dim0 * scales[0]; - index_of_input_dim1 = index_of_dim1 / scales[1]; - x_offset = index_of_dim1 - index_of_input_dim1 * scales[1]; - - input_index = index_of_input_dim0 * input_pitches[0] + index_of_input_dim1; - - float x00 = getInputFloat(input_index); - float x10, x01, x11; - - bool end_of_dim0 = false; - if (index_of_input_dim0 == (${l[0].dims[0]} - 1)) { - // It's the end in dimension 0 - x01 = x00; - end_of_dim0 = true; - } else { - x01 = getInputFloat(input_index + input_pitches[0]); - } - - if (index_of_input_dim1 == (input_pitches[0] - 1)) { - // It's the end in dimension 1 - x10 = x00; - x11 = x01; - } - else { - x10 = getInputFloat(input_index + 1); - x11 = end_of_dim0 ? x10 : getInputFloat(input_index + input_pitches[0] + 1); - } - - float y0 = x00 + float(y_offset) * (x01 - x00) / float(scales[0]); - float y1 = x10 + float(y_offset) * (x11 - x10) / float(scales[0]); - return y0 + float(x_offset) * (y1 - y0) / float(scales[1]); - }`;return Object.assign(Object.assign({},s),{output:{dims:i,type:l[0].type,textureType:p.TextureType.unpacked},shaderSource:S,variables:[{name:"scales",type:"int",arrayLength:o.scales.length,data:o.scales.map(A=>Math.ceil(A))}]})};n.validateInputs=(f,l)=>{if(!f||l.opset<9&&f.length!==1||l.opset>=9&&l.opset<11&&f.length!==2||l.opset>=11&&f.length<2)throw new Error("invalid inputs.");if(l.scales.length>0&&f[0].dims.length!==l.scales.length)throw new Error("Invalid input shape.");if(f[0].type==="string")throw new Error("Invalid input tensor types.")},n.scalesValidation=(f,l,o)=>{if(o){for(const t of f)if(t<=0)throw new Error("Scale value should be greater than 0.")}else for(const t of f)if(t<1)throw new Error("Scale value should be greater than or equal to 1.");if(!(l!=="linear"&&l!=="cubic"||f.length===2||f.length===4&&f[0]===1&&f[1]===1))throw new Error(`'Linear' mode and 'Cubic' mode only support 2-D inputs ('Bilinear', 'Bicubic') or 4-D inputs with the corresponding outermost 2 scale values being 1 in the ${o?"Resize":"Upsample"} opeartor.`)}},1958:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.ProgramManager=void 0;const u=a(1670),c=a(6231),p=a(8879),s=a(5060);n.ProgramManager=class{constructor(h,f,l){this.profiler=h,this.glContext=f,this.textureLayoutStrategy=l,this.repo=new Map,this.attributesBound=!1}getArtifact(h){return this.repo.get(h)}setArtifact(h,f){this.repo.set(h,f)}run(h,f,l){var o;this.profiler.event("op",`ProgramManager.run ${(o=h.programInfo.name)!==null&&o!==void 0?o:"unknown kernel"}`,()=>{var t;const e=this.glContext.gl,r=h.program;e.useProgram(r);try{this.bindOutput(l),this.attributesBound||this.bindAttributes(h.attribLocations),this.bindUniforms(h.uniformLocations,(t=h.programInfo.variables)!==null&&t!==void 0?t:[],f)}catch(i){throw 
c.Logger.error("ProgramManager",h.programInfo.shaderSource),i}this.profiler.event("backend","GlContext.draw()",()=>{this.glContext.draw()})},this.glContext)}dispose(){this.vertexShader&&this.glContext.deleteShader(this.vertexShader),this.repo.forEach(h=>this.glContext.deleteProgram(h.program))}build(h,f,l){return this.profiler.event("backend","ProgramManager.build",()=>{const o=new p.GlslPreprocessor(this.glContext,h,f,l),t=o.preprocess(),e=this.compile(t);return{programInfo:h,program:e,uniformLocations:this.getUniformLocations(e,o.context.programInfo.inputNames,o.context.programInfo.variables),attribLocations:this.getAttribLocations(e)}})}compile(h){if(!this.vertexShader){c.Logger.verbose("ProrgramManager","Compiling and caching Vertex shader for the first time");const o=(0,s.getVertexShaderSource)(this.glContext.version);this.vertexShader=this.glContext.compileShader(o,this.glContext.gl.VERTEX_SHADER)}u.env.debug&&c.Logger.verbose("ProrgramManager",`FragShader: -${h} -`);const f=this.glContext.compileShader(h,this.glContext.gl.FRAGMENT_SHADER),l=this.glContext.createProgram(this.vertexShader,f);return this.glContext.deleteShader(f),l}bindOutput(h){const f=h.width,l=h.height;c.Logger.verbose("ProrgramManager",`Binding output texture to Framebuffer: w/h=${f}/${l}, shape=${h.shape}, type=${h.tensor.type}`),this.glContext.attachFramebuffer(h.texture,f,l)}bindAttributes(h){const f=h.position,l=h.textureCoord;this.glContext.setVertexAttributes(f,l),this.attributesBound=!0}bindUniforms(h,f,l){var o;const t=this.glContext.gl;let e=0;for(const{name:r,type:i,location:d,arrayLength:g}of h){const m=(o=f.find(b=>b.name===r))===null||o===void 0?void 0:o.data;if(i!=="sampler2D"&&!m)throw new Error(`variable '${r}' does not have data defined in program info`);switch(i){case"sampler2D":this.bindTexture(l[e],d,e),e++;break;case"float":g?t.uniform1fv(d,m):t.uniform1f(d,m);break;case"int":g?t.uniform1iv(d,m):t.uniform1i(d,m);break;default:throw new Error(`Uniform not implemented: ${i}`)}}}bindTexture(h,f,l){this.glContext.bindTextureToUniform(h.texture,l,f)}getAttribLocations(h){return{position:this.getAttribLocation(h,"position"),textureCoord:this.getAttribLocation(h,"textureCoord")}}getUniformLocations(h,f,l){const o=[];if(f)for(const t of f)o.push({name:t,type:"sampler2D",location:this.getUniformLocation(h,t)});if(l)for(const t of l)o.push(Object.assign(Object.assign({},t),{location:this.getUniformLocation(h,t.name)}));return o}getUniformLocation(h,f){const l=this.glContext.gl.getUniformLocation(h,f);if(l===null)throw new Error(`Uniform ${f} not found.`);return l}getAttribLocation(h,f){return this.glContext.gl.getAttribLocation(h,f)}}},6416:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.WebGLSessionHandler=void 0;const u=a(6231),c=a(1047),p=a(8316),s=a(1640),h=a(1958),f=a(7859),l=a(5702);n.WebGLSessionHandler=class{constructor(o,t){this.backend=o,this.context=t,this.layoutStrategy=new f.PreferLogicalStrategy(o.glContext.maxTextureSize),this.programManager=new h.ProgramManager(this.context.profiler,o.glContext,this.layoutStrategy),this.textureManager=new l.TextureManager(o.glContext,this.layoutStrategy,this.context.profiler,{reuseTextures:o.textureCacheMode==="full"}),this.packedTextureDataCache=new Map,this.unpackedTextureDataCache=new Map,this.pack=o.pack,this.pack2unpackMap=new Map,this.unpack2packMap=new Map}createInferenceHandler(){return new p.WebGLInferenceHandler(this)}onGraphInitialized(o){const 
t=o.getValues().filter(e=>e.from===-1&&e.tensor).map(e=>e.tensor.dataId);this.initializers=new Set(t)}isInitializer(o){return!!this.initializers&&this.initializers.has(o)}addInitializer(o){this.initializers.add(o)}getTextureData(o,t){return t?this.packedTextureDataCache.get(o):this.unpackedTextureDataCache.get(o)}setTextureData(o,t,e=!1){u.Logger.verbose("WebGLSessionHandler","Storing Texture data in cache"),e?this.packedTextureDataCache.set(o,t):this.unpackedTextureDataCache.set(o,t)}dispose(){this.programManager.dispose(),this.textureManager.clearActiveTextures(),this.packedTextureDataCache.forEach(o=>this.textureManager.releaseTexture(o,!0)),this.packedTextureDataCache=new Map,this.unpackedTextureDataCache.forEach(o=>this.textureManager.releaseTexture(o,!0)),this.unpackedTextureDataCache=new Map}resolve(o,t,e){const r=(0,c.resolveOperator)(o,t,s.WEBGL_OP_RESOLVE_RULES);return{impl:r.opImpl,context:r.opInit?r.opInit(o,e):o}}}},7769:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.Uint8DataEncoder=n.RGBAFloatDataEncoder=n.RedFloat32DataEncoder=void 0;const u=a(6231);n.RedFloat32DataEncoder=class{constructor(c,p=1){if(p===1)this.internalFormat=c.R32F,this.format=c.RED,this.textureType=c.FLOAT,this.channelSize=p;else{if(p!==4)throw new Error(`Invalid number of channels: ${p}`);this.internalFormat=c.RGBA32F,this.format=c.RGBA,this.textureType=c.FLOAT,this.channelSize=p}}encode(c,p){let s,h;return c.constructor!==Float32Array&&(u.Logger.warning("Encoder","data was not of type Float32; creating new Float32Array"),h=new Float32Array(c)),p*this.channelSize>c.length?(u.Logger.warning("Encoder","Source data too small. Allocating larger array"),h=c,s=this.allocate(p*this.channelSize),h.forEach((f,l)=>s[l]=f)):(h=c,s=h),s}allocate(c){return new Float32Array(4*c)}decode(c,p){return this.channelSize===1?c.filter((s,h)=>h%4==0).subarray(0,p):c.subarray(0,p)}},n.RGBAFloatDataEncoder=class{constructor(c,p=1,s){if(p!==1&&p!==4)throw new Error(`Invalid number of channels: ${p}`);this.internalFormat=c.RGBA,this.format=c.RGBA,this.channelSize=p,this.textureType=s||c.FLOAT}encode(c,p){let s=c;return this.channelSize===1&&(u.Logger.verbose("Encoder","Exploding into a larger array"),s=this.allocate(p),c.forEach((h,f)=>s[4*f]=h)),s}allocate(c){return new Float32Array(4*c)}decode(c,p){return this.channelSize===1?c.filter((s,h)=>h%4==0).subarray(0,p):c.subarray(0,p)}},n.Uint8DataEncoder=class{constructor(c,p=1){if(this.channelSize=4,p===1)this.internalFormat=c.ALPHA,this.format=c.ALPHA,this.textureType=c.UNSIGNED_BYTE,this.channelSize=p;else{if(p!==4)throw new Error(`Invalid number of channels: ${p}`);this.internalFormat=c.RGBA,this.format=c.RGBA,this.textureType=c.UNSIGNED_BYTE,this.channelSize=p}}encode(c,p){return new Uint8Array(c.buffer,c.byteOffset,c.byteLength)}allocate(c){return new Uint8Array(c*this.channelSize)}decode(c,p){if(c instanceof Uint8Array)return c.subarray(0,p);throw new Error(`Invalid array type: ${c.constructor}`)}}},7859:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.getBatchDim=n.sizeToSquarishShape=n.getRowsCols=n.sizeFromShape=n.isInt=n.parseAxisParam=n.squeezeShape=n.PreferLogicalStrategy=n.AlwaysKeepOriginalSizeStrategy=void 0;const u=a(6231),c=a(2517);function p(o,t){const e=[],r=[],i=t!=null&&Array.isArray(t)&&t.length===0,d=t==null||i?null:s(t,o).sort();let g=0;for(let m=0;m<o.length;++m){if(d!=null){if(d[g]===m&&o[m]!==1)throw new Error(`Can't squeeze axis ${m} since its dim '${o[m]}' is not 
1`);(d[g]==null||d[g]>m)&&o[m]===1&&(e.push(o[m]),r.push(m)),d[g]<=m&&g++}o[m]!==1&&(e.push(o[m]),r.push(m))}return{newShape:e,keptDims:r}}function s(o,t){const e=t.length;return o=o==null?t.map((r,i)=>i):[].concat(o),(0,c.assert)(o.every(r=>r>=-e&&r<e),()=>`All values in axis param must be in range [-${e}, ${e}) but got axis ${o}`),(0,c.assert)(o.every(h),()=>`All values in axis param must be integers but got axis ${o}`),o.map(r=>r<0?e+r:r)}function h(o){return o%1==0}function f(o){if(o.length===0)return 1;let t=o[0];for(let e=1;e<o.length;e++)t*=o[e];return t}function l(o){const t=Math.ceil(Math.sqrt(o));return[t,Math.ceil(o/t)]}n.AlwaysKeepOriginalSizeStrategy=class{constructor(o){this.maxTextureSize=o}computeTextureWH(o,t){if(o.length===0)return[1,1];const e=this.maxTextureSize;if(t&&t.breakAxis!==void 0){const d=t.breakAxis>=o.length?1:o.slice(t.breakAxis).reduce((m,b)=>m*b),g=t.breakAxis<=0?1:o.slice(0,t.breakAxis).reduce((m,b)=>m*b);if(!(d>e||g>e))return[d,g];u.Logger.verbose("TextureLayout",`Given width/height preferences were unattainable: shape:${o}, breakAxis:${t.breakAxis}`)}const r=o.reduce((d,g)=>d*g);let i=Math.floor(Math.sqrt(r));for(;i<e&&i<r&&r%i!=0;i++);if(i>=e||r%i!=0)throw new Error(`The given dimensions are outside this GPU's boundaries: ${o}`);return[i,r/i]}},n.PreferLogicalStrategy=class{constructor(o){this.maxTextureSize=o}computeTextureWH(o,t){const e=this.computeTexture(o,t);return t&&t.isPacked&&(e[0]/=2,e[1]/=2),t&&t.reverseWH?[e[1],e[0]]:e}computeTexture(o,t){const e=t&&t.isPacked;if(o.length===0)return e?[2,2]:[1,1];let r=this.maxTextureSize;if(t&&t.breakAxis!==void 0){const g=t.breakAxis>=o.length?1:o.slice(t.breakAxis).reduce((b,_)=>b*_),m=t.breakAxis<=0?1:o.slice(0,t.breakAxis).reduce((b,_)=>b*_);if(!(g>r||m>r))return[g,m];u.Logger.verbose("TextureLayout",`Given width/height preferences were unattainable: shape:${o}, breakAxis:${t.breakAxis}`)}let i=o.slice(0);e&&(r*=2,i=i.map((g,m)=>m>=i.length-2?i[m]%2==0?i[m]:i[m]+1:i[m]),i.length===1&&(i=[2,i[0]])),i.length!==2&&(i=p(i).newShape);const d=f(i);return i.length<=1&&d<=r?[1,d]:i.length===2&&i[0]<=r&&i[1]<=r?i:i.length===3&&i[0]*i[1]<=r&&i[2]<=r?[i[0]*i[1],i[2]]:i.length===3&&i[0]<=r&&i[1]*i[2]<=r?[i[0],i[1]*i[2]]:i.length===4&&i[0]*i[1]*i[2]<=r&&i[3]<=r?[i[0]*i[1]*i[2],i[3]]:i.length===4&&i[0]<=r&&i[1]*i[2]*i[3]<=r?[i[0],i[1]*i[2]*i[3]]:e?l(d/4).map(g=>2*g):l(d)}},n.squeezeShape=p,n.parseAxisParam=s,n.isInt=h,n.sizeFromShape=f,n.getRowsCols=function(o){if(o.length===0)throw Error("Cannot get rows and columns of an empty shape array.");return[o.length>1?o[o.length-2]:1,o[o.length-1]]},n.sizeToSquarishShape=l,n.getBatchDim=function(o,t=2){return f(o.slice(0,o.length-t))}},4057:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createTextureLayoutFromShape=n.calculateTextureWidthAndHeight=n.createTextureLayoutFromTextureType=void 0;const u=a(2517),c=a(2039);n.createTextureLayoutFromTextureType=(p,s,h)=>{const f=h===c.TextureType.unpacked||h===c.TextureType.unpackedReversed?1:4,l=h===c.TextureType.packed,o=h===c.TextureType.unpackedReversed||h===c.TextureType.packed,t=h===c.TextureType.packedLastDimension?s.length-1:void 0,e=h===c.TextureType.packedLastDimension?s.map((r,i)=>i===s.length-1?4*r:r):void 0;return(0,n.createTextureLayoutFromShape)(p,s,f,e,{isPacked:l,reverseWH:o,breakAxis:t})},n.calculateTextureWidthAndHeight=(p,s,h)=>{const f=(0,n.createTextureLayoutFromTextureType)(p,s,h);return[f.width,f.height]},n.createTextureLayoutFromShape=(p,s,h=1,f,l)=>{const 
o=!(!l||!l.isPacked),[t,e]=p.computeTextureWH(o&&f||s,l),r=s.length;let i=s.slice(0);if(r===0&&(i=[1]),h===1)f=s;else if(o){if(h!==4)throw new Error("a packed texture must be 4-channel");f=s,r>0&&(i[r-1]=Math.ceil(i[r-1]/2)),r>1&&(i[r-2]=Math.ceil(i[r-2]/2))}else if(!f)throw new Error("Unpacked shape is needed when using channels > 1");return{width:t,height:e,channels:h,isPacked:o,shape:i,strides:u.ShapeUtil.computeStrides(i),unpackedShape:f,reversedWH:l&&l.reverseWH}}},5702:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.TextureManager=void 0;const u=a(6231);n.TextureManager=class{constructor(c,p,s,h){this.glContext=c,this.layoutStrategy=p,this.profiler=s,this.config=h,this.pendingRead=new Map,h.reuseTextures&&(this.inUseTextures=new Map,this.idleTextures=new Map,this.textureLookup=new Map)}createTextureFromLayout(c,p,s,h){const f=this.toEncoderType(c),l=this.glContext.getEncoder(f,p.channels||1,h);if(p.isPacked&&h===1)throw new Error("not implemented");const o=p.width,t=p.height;let e,r;if(this.config.reuseTextures){e=`${o}x${t}_${l.format}_${l.internalFormat}_${l.textureType}`,r=this.inUseTextures.get(e),r||(r=[],this.inUseTextures.set(e,r));const d=this.idleTextures.get(e);if(d&&d.length>0){const g=d.pop();return r.push(g),h===1&&this.glContext.updateTexture(g,o,t,l,this.toTextureData(c,s)),g}}u.Logger.verbose("TextureManager",`Creating new texture of size ${p.width}x${p.height}`);const i=this.glContext.allocateTexture(o,t,l,this.toTextureData(c,s));return this.config.reuseTextures&&(r.push(i),this.textureLookup.set(i,e)),i}readTexture(c,p,s){return s||(s=1),this.profiler.event("backend","TextureManager.readTexture",()=>{const h=c.shape.reduce((l,o)=>l*o)*s,f=this.glContext.readTexture(c.texture,c.width,c.height,h,this.toEncoderType(p),s);return this.toTensorData(p,f)})}async readTextureAsync(c,p,s){const h=c.tensor.dataId;if(s||(s=1),this.pendingRead.has(h)){const f=this.pendingRead.get(h);return new Promise(l=>f==null?void 0:f.push(l))}return this.profiler.event("backend","TextureManager.readTextureAsync",async()=>{this.pendingRead.set(h,[]);const f=c.shape.reduce((e,r)=>e*r)*s;await this.glContext.createAndWaitForFence();const l=this.glContext.readTexture(c.texture,c.width,c.height,f,this.toEncoderType(p),s),o=this.toTensorData(p,l),t=this.pendingRead.get(h);return this.pendingRead.delete(h),t==null||t.forEach(e=>e(o)),o})}readUint8TextureAsFloat(c){return this.profiler.event("backend","TextureManager.readUint8TextureAsFloat",()=>{const p=c.shape.reduce((h,f)=>h*f),s=this.glContext.readTexture(c.texture,c.width,c.height,4*p,"byte",4);return new Float32Array(s.buffer,s.byteOffset,p)})}releaseTexture(c,p){let s;if(this.config.reuseTextures&&(s=this.textureLookup.get(c.texture),s)){p&&this.textureLookup.delete(s);const h=this.inUseTextures.get(s);if(h){const f=h.indexOf(c.texture);if(f!==-1){h.splice(f,1);let l=this.idleTextures.get(s);l||(l=[],this.idleTextures.set(s,l)),l.push(c.texture)}}}s&&!p||(u.Logger.verbose("TextureManager",`Deleting texture of size ${c.width}x${c.height}`),this.glContext.deleteTexture(c.texture))}toTensorData(c,p){switch(c){case"int16":return p instanceof Int16Array?p:Int16Array.from(p);case"int32":return p instanceof Int32Array?p:Int32Array.from(p);case"int8":return p instanceof Int8Array?p:Int8Array.from(p);case"uint16":return p instanceof Uint16Array?p:Uint16Array.from(p);case"uint32":return p instanceof Uint32Array?p:Uint32Array.from(p);case"uint8":case"bool":return p instanceof Uint8Array?p:Uint8Array.from(p);case"float32":return p 
instanceof Float32Array?p:Float32Array.from(p);case"float64":return p instanceof Float64Array?p:Float64Array.from(p);default:throw new Error(`TensorData type ${c} is not supported`)}}toTextureData(c,p){if(p)return p instanceof Float32Array?p:new Float32Array(p)}toEncoderType(c){return"float"}clearActiveTextures(){this.glContext.clearActiveTextures()}}},2039:(y,n)=>{var a;Object.defineProperty(n,"__esModule",{value:!0}),n.TextureType=void 0,(a=n.TextureType||(n.TextureType={}))[a.unpacked=0]="unpacked",a[a.unpackedReversed=1]="unpackedReversed",a[a.packed=2]="packed",a[a.downloadUint8AsFloat=3]="downloadUint8AsFloat",a[a.packedLastDimension=4]="packedLastDimension"},9390:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.getGlChannels=n.getCoordsDataType=n.getSqueezedParams=n.squeezeInputShape=n.generateShaderFuncNameFromInputSamplerNameAtOutCoords=n.generateShaderFuncNameFromInputSamplerName=n.repeatedTry=n.getPackedShape=void 0;const u=a(2517);n.getPackedShape=function(c){const p=c.length;return c.slice(0,p-1).concat(c[p-1]/4)},n.repeatedTry=async function(c,p=h=>0,s){return new Promise((h,f)=>{let l=0;const o=()=>{if(c())return void h();l++;const t=p(l);s!=null&&l>=s?f():setTimeout(o,t)};o()})},n.generateShaderFuncNameFromInputSamplerName=function(c){return(0,u.assert)(c!==void 0&&c.length!==0,()=>"empty string found for sampler name"),"get"+c.charAt(0).toUpperCase()+c.slice(1)},n.generateShaderFuncNameFromInputSamplerNameAtOutCoords=function(c){return(0,u.assert)(c!==void 0&&c.length!==0,()=>"empty string found for sampler name"),"get"+c.charAt(0).toUpperCase()+c.slice(1)+"AtOutCoords"},n.squeezeInputShape=function(c,p){let s=JSON.parse(JSON.stringify(c));return s=p,s},n.getSqueezedParams=function(c,p){return p.map(s=>c[s]).join(", ")},n.getCoordsDataType=function(c){if(c<=1)return"int";if(c===2)return"ivec2";if(c===3)return"ivec3";if(c===4)return"ivec4";if(c===5)return"ivec5";if(c===6)return"ivec6";throw Error(`GPU for rank ${c} is not yet supported`)},n.getGlChannels=function(c=6){return["x","y","z","w","u","v"].slice(0,c)}},7305:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.createNewWebGLContext=n.createWebGLContext=void 0;const u=a(6231),c=a(1713),p={};function s(h){const f=function(){if(typeof document>"u"){if(typeof OffscreenCanvas>"u")throw new TypeError("failed to create canvas: OffscreenCanvas is not supported");return new OffscreenCanvas(1,1)}const t=document.createElement("canvas");return t.width=1,t.height=1,t}();let l;const o={alpha:!1,depth:!1,antialias:!1,stencil:!1,preserveDrawingBuffer:!1,premultipliedAlpha:!1,failIfMajorPerformanceCaveat:!1};if((!h||h==="webgl2")&&(l=f.getContext("webgl2",o),l))try{return new c.WebGLContext(l,2)}catch(t){u.Logger.warning("GlContextFactory",`failed to create WebGLContext using contextId 'webgl2'. Error: ${t}`)}if((!h||h==="webgl")&&(l=f.getContext("webgl",o)||f.getContext("experimental-webgl",o),l))try{return new c.WebGLContext(l,1)}catch(t){u.Logger.warning("GlContextFactory",`failed to create WebGLContext using contextId 'webgl' or 'experimental-webgl'. 
Error: ${t}`)}throw new Error("WebGL is not supported")}n.createWebGLContext=function h(f){let l;f&&f!=="webgl2"||!("webgl2"in p)?f&&f!=="webgl"||!("webgl"in p)||(l=p.webgl):l=p.webgl2,l=l||s(f),f=f||l.version===1?"webgl":"webgl2";const o=l.gl;return p[f]=l,o.isContextLost()?(delete p[f],h(f)):(o.disable(o.DEPTH_TEST),o.disable(o.STENCIL_TEST),o.disable(o.BLEND),o.disable(o.DITHER),o.disable(o.POLYGON_OFFSET_FILL),o.disable(o.SAMPLE_COVERAGE),o.enable(o.SCISSOR_TEST),o.enable(o.CULL_FACE),o.cullFace(o.BACK),l)},n.createNewWebGLContext=s},1713:function(y,n,a){var u=this&&this.__createBinding||(Object.create?function(o,t,e,r){r===void 0&&(r=e);var i=Object.getOwnPropertyDescriptor(t,e);i&&!("get"in i?!t.__esModule:i.writable||i.configurable)||(i={enumerable:!0,get:function(){return t[e]}}),Object.defineProperty(o,r,i)}:function(o,t,e,r){r===void 0&&(r=e),o[r]=t[e]}),c=this&&this.__setModuleDefault||(Object.create?function(o,t){Object.defineProperty(o,"default",{enumerable:!0,value:t})}:function(o,t){o.default=t}),p=this&&this.__importStar||function(o){if(o&&o.__esModule)return o;var t={};if(o!=null)for(var e in o)e!=="default"&&Object.prototype.hasOwnProperty.call(o,e)&&u(t,o,e);return c(t,o),t};Object.defineProperty(n,"__esModule",{value:!0}),n.WebGLContext=n.linearSearchLastTrue=void 0;const s=a(1670),h=p(a(7769)),f=a(9390);function l(o){let t=0;for(;t<o.length&&o[t]();++t);return t-1}n.linearSearchLastTrue=l,n.WebGLContext=class{constructor(o,t){this.frameBufferBound=!1,this.itemsToPoll=[],this.gl=o,this.version=t,this.getExtensions(),this.vertexbuffer=this.createVertexbuffer(),this.framebuffer=this.createFramebuffer(),this.queryVitalParameters()}allocateTexture(o,t,e,r){const i=this.gl,d=i.createTexture();i.bindTexture(i.TEXTURE_2D,d),i.texParameteri(i.TEXTURE_2D,i.TEXTURE_MIN_FILTER,i.NEAREST),i.texParameteri(i.TEXTURE_2D,i.TEXTURE_MAG_FILTER,i.NEAREST),i.texParameteri(i.TEXTURE_2D,i.TEXTURE_WRAP_S,i.CLAMP_TO_EDGE),i.texParameteri(i.TEXTURE_2D,i.TEXTURE_WRAP_T,i.CLAMP_TO_EDGE);const g=r?e.encode(r,o*t):null;return i.texImage2D(i.TEXTURE_2D,0,e.internalFormat,o,t,0,e.format,e.textureType,g),this.checkError(),d}updateTexture(o,t,e,r,i){const d=this.gl;d.bindTexture(d.TEXTURE_2D,o);const g=r.encode(i,t*e);d.texSubImage2D(d.TEXTURE_2D,0,0,0,t,e,r.format,r.textureType,g),this.checkError()}attachFramebuffer(o,t,e){const r=this.gl;r.bindTexture(r.TEXTURE_2D,o),r.bindFramebuffer(r.FRAMEBUFFER,this.framebuffer),r.framebufferTexture2D(r.FRAMEBUFFER,r.COLOR_ATTACHMENT0,r.TEXTURE_2D,o,0),this.checkError(),r.viewport(0,0,t,e),r.scissor(0,0,t,e)}readTexture(o,t,e,r,i,d){const g=this.gl;d||(d=1),this.frameBufferBound||this.attachFramebuffer(o,t,e);const m=this.getEncoder(i,d),b=m.allocate(t*e);return g.bindTexture(g.TEXTURE_2D,o),g.framebufferTexture2D(g.FRAMEBUFFER,g.COLOR_ATTACHMENT0,g.TEXTURE_2D,o,0),g.readPixels(0,0,t,e,g.RGBA,m.textureType,b),this.checkError(),m.decode(b,r)}isFramebufferReady(){return!0}getActiveTexture(){const o=this.gl;return"TEXTURE"+(o.getParameter(this.gl.ACTIVE_TEXTURE)-o.TEXTURE0)}getTextureBinding(){return this.gl.getParameter(this.gl.TEXTURE_BINDING_2D)}getFramebufferBinding(){return this.gl.getParameter(this.gl.FRAMEBUFFER_BINDING)}setVertexAttributes(o,t){const e=this.gl;e.vertexAttribPointer(o,3,e.FLOAT,!1,20,0),e.enableVertexAttribArray(o),t!==-1&&(e.vertexAttribPointer(t,2,e.FLOAT,!1,20,12),e.enableVertexAttribArray(t)),this.checkError()}createProgram(o,t){const e=this.gl,r=e.createProgram();return 
e.attachShader(r,o),e.attachShader(r,t),e.linkProgram(r),r}compileShader(o,t){const e=this.gl,r=e.createShader(t);if(!r)throw new Error(`createShader() returned null with type ${t}`);if(e.shaderSource(r,o),e.compileShader(r),e.getShaderParameter(r,e.COMPILE_STATUS)===!1)throw new Error(`Failed to compile shader: ${e.getShaderInfoLog(r)} -Shader source: -${o}`);return r}deleteShader(o){this.gl.deleteShader(o)}bindTextureToUniform(o,t,e){const r=this.gl;r.activeTexture(r.TEXTURE0+t),this.checkError(),r.bindTexture(r.TEXTURE_2D,o),this.checkError(),r.uniform1i(e,t),this.checkError()}draw(){this.gl.drawArrays(this.gl.TRIANGLE_STRIP,0,4),this.checkError()}checkError(){if(s.env.debug){const o=this.gl,t=o.getError();let e="";switch(t){case o.NO_ERROR:return;case o.INVALID_ENUM:e="INVALID_ENUM";break;case o.INVALID_VALUE:e="INVALID_VALUE";break;case o.INVALID_OPERATION:e="INVALID_OPERATION";break;case o.INVALID_FRAMEBUFFER_OPERATION:e="INVALID_FRAMEBUFFER_OPERATION";break;case o.OUT_OF_MEMORY:e="OUT_OF_MEMORY";break;case o.CONTEXT_LOST_WEBGL:e="CONTEXT_LOST_WEBGL";break;default:e=`Unknown WebGL Error: ${t.toString(16)}`}throw new Error(e)}}deleteTexture(o){this.gl.deleteTexture(o)}deleteProgram(o){this.gl.deleteProgram(o)}getEncoder(o,t,e=0){if(this.version===2)return new h.RedFloat32DataEncoder(this.gl,t);switch(o){case"float":return e===1||this.isRenderFloat32Supported?new h.RGBAFloatDataEncoder(this.gl,t):new h.RGBAFloatDataEncoder(this.gl,t,this.textureHalfFloatExtension.HALF_FLOAT_OES);case"int":throw new Error("not implemented");case"byte":return new h.Uint8DataEncoder(this.gl,t);default:throw new Error(`Invalid dataType: ${o}`)}}clearActiveTextures(){const o=this.gl;for(let t=0;t<this.maxTextureImageUnits;++t)o.activeTexture(o.TEXTURE0+t),o.bindTexture(o.TEXTURE_2D,null)}dispose(){if(this.disposed)return;const o=this.gl;o.bindFramebuffer(o.FRAMEBUFFER,null),o.deleteFramebuffer(this.framebuffer),o.bindBuffer(o.ARRAY_BUFFER,null),o.deleteBuffer(this.vertexbuffer),o.bindBuffer(o.ELEMENT_ARRAY_BUFFER,null),o.finish(),this.disposed=!0}createDefaultGeometry(){return new Float32Array([-1,1,0,0,1,-1,-1,0,0,0,1,1,0,1,1,1,-1,0,1,0])}createVertexbuffer(){const o=this.gl,t=o.createBuffer();if(!t)throw new Error("createBuffer() returned null");const e=this.createDefaultGeometry();return o.bindBuffer(o.ARRAY_BUFFER,t),o.bufferData(o.ARRAY_BUFFER,e,o.STATIC_DRAW),this.checkError(),t}createFramebuffer(){const o=this.gl.createFramebuffer();if(!o)throw new Error("createFramebuffer returned null");return o}queryVitalParameters(){const o=this.gl;if(this.isFloatTextureAttachableToFrameBuffer=this.checkFloatTextureAttachableToFrameBuffer(),this.isRenderFloat32Supported=this.checkRenderFloat32(),this.isFloat32DownloadSupported=this.checkFloat32Download(),this.version===1&&!this.textureHalfFloatExtension&&!this.isRenderFloat32Supported)throw new Error("both float32 and float16 TextureType are not 
supported");this.isBlendSupported=!this.isRenderFloat32Supported||this.checkFloat32Blend(),this.maxTextureSize=o.getParameter(o.MAX_TEXTURE_SIZE),this.maxTextureImageUnits=o.getParameter(o.MAX_TEXTURE_IMAGE_UNITS),this.version}getExtensions(){this.version===2?(this.colorBufferFloatExtension=this.gl.getExtension("EXT_color_buffer_float"),this.disjointTimerQueryWebgl2Extension=this.gl.getExtension("EXT_disjoint_timer_query_webgl2")):(this.textureFloatExtension=this.gl.getExtension("OES_texture_float"),this.textureHalfFloatExtension=this.gl.getExtension("OES_texture_half_float"))}checkFloatTextureAttachableToFrameBuffer(){const o=this.gl,t=o.createTexture();o.bindTexture(o.TEXTURE_2D,t);const e=this.version===2?o.RGBA32F:o.RGBA;o.texImage2D(o.TEXTURE_2D,0,e,1,1,0,o.RGBA,o.FLOAT,null);const r=o.createFramebuffer();o.bindFramebuffer(o.FRAMEBUFFER,r),o.framebufferTexture2D(o.FRAMEBUFFER,o.COLOR_ATTACHMENT0,o.TEXTURE_2D,t,0);const i=o.checkFramebufferStatus(o.FRAMEBUFFER)===o.FRAMEBUFFER_COMPLETE;return o.bindTexture(o.TEXTURE_2D,null),o.bindFramebuffer(o.FRAMEBUFFER,null),o.deleteTexture(t),o.deleteFramebuffer(r),i}checkRenderFloat32(){if(this.version===2){if(!this.colorBufferFloatExtension)return!1}else if(!this.textureFloatExtension)return!1;return this.isFloatTextureAttachableToFrameBuffer}checkFloat32Download(){if(this.version===2){if(!this.colorBufferFloatExtension)return!1}else if(!this.textureFloatExtension||!this.gl.getExtension("WEBGL_color_buffer_float"))return!1;return this.isFloatTextureAttachableToFrameBuffer}checkFloat32Blend(){const o=this.gl;let t,e,r,i,d;try{t=o.createTexture(),e=o.createFramebuffer(),o.bindTexture(o.TEXTURE_2D,t);const g=this.version===2?o.RGBA32F:o.RGBA;return o.texImage2D(o.TEXTURE_2D,0,g,1,1,0,o.RGBA,o.FLOAT,null),o.bindFramebuffer(o.FRAMEBUFFER,e),o.framebufferTexture2D(o.FRAMEBUFFER,o.COLOR_ATTACHMENT0,o.TEXTURE_2D,t,0),o.enable(o.BLEND),r=o.createShader(o.VERTEX_SHADER),!!r&&(o.shaderSource(r,"void main(){}"),o.compileShader(r),i=o.createShader(o.FRAGMENT_SHADER),!!i&&(o.shaderSource(i,"precision highp float;void main(){gl_FragColor=vec4(0.5);}"),o.compileShader(i),d=o.createProgram(),!!d&&(o.attachShader(d,r),o.attachShader(d,i),o.linkProgram(d),o.useProgram(d),o.drawArrays(o.POINTS,0,1),o.getError()===o.NO_ERROR)))}finally{o.disable(o.BLEND),d&&o.deleteProgram(d),r&&o.deleteShader(r),i&&o.deleteShader(i),e&&(o.bindFramebuffer(o.FRAMEBUFFER,null),o.deleteFramebuffer(e)),t&&(o.bindTexture(o.TEXTURE_2D,null),o.deleteTexture(t))}}beginTimer(){if(this.version===2&&this.disjointTimerQueryWebgl2Extension){const o=this.gl,t=this.disjointTimerQueryWebgl2Extension,e=o.createQuery();return o.beginQuery(t.TIME_ELAPSED_EXT,e),e}throw new Error("WebGL1 profiling currently not supported.")}endTimer(){if(this.version!==2||!this.disjointTimerQueryWebgl2Extension)throw new Error("WebGL1 profiling currently not supported");{const o=this.gl,t=this.disjointTimerQueryWebgl2Extension;o.endQuery(t.TIME_ELAPSED_EXT)}}isTimerResultAvailable(o){let t=!1,e=!1;if(this.version!==2||!this.disjointTimerQueryWebgl2Extension)throw new Error("WebGL1 profiling currently not supported");{const r=this.gl,i=this.disjointTimerQueryWebgl2Extension;t=r.getQueryParameter(o,r.QUERY_RESULT_AVAILABLE),e=r.getParameter(i.GPU_DISJOINT_EXT)}return t&&!e}getTimerResult(o){let t=0;if(this.version!==2)throw new Error("WebGL1 profiling currently not supported");{const e=this.gl;t=e.getQueryParameter(o,e.QUERY_RESULT),e.deleteQuery(o)}return t/1e6}async waitForQueryAndGetTime(o){return 
await(0,f.repeatedTry)(()=>this.isTimerResultAvailable(o)),this.getTimerResult(o)}async createAndWaitForFence(){const o=this.createFence(this.gl);return this.pollFence(o)}createFence(o){let t;const e=o,r=e.fenceSync(e.SYNC_GPU_COMMANDS_COMPLETE,0);return o.flush(),t=r===null?()=>!0:()=>{const i=e.clientWaitSync(r,0,0);return i===e.ALREADY_SIGNALED||i===e.CONDITION_SATISFIED},{query:r,isFencePassed:t}}async pollFence(o){return new Promise(t=>{this.addItemToPoll(()=>o.isFencePassed(),()=>t())})}pollItems(){const o=l(this.itemsToPoll.map(t=>t.isDoneFn));for(let t=0;t<=o;++t){const{resolveFn:e}=this.itemsToPoll[t];e()}this.itemsToPoll=this.itemsToPoll.slice(o+1)}async addItemToPoll(o,t){this.itemsToPoll.push({isDoneFn:o,resolveFn:t}),this.itemsToPoll.length>1||await(0,f.repeatedTry)(()=>(this.pollItems(),this.itemsToPoll.length===0))}}},1036:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.ExecutionPlan=void 0;const u=a(6231);class c{constructor(s,h){this.op=s,this.node=h}}n.ExecutionPlan=class{constructor(p,s,h){this.graph=p,this.profiler=h,this.initialize(s)}initialize(p){this.profiler.event("session","ExecutionPlan.initialize",()=>{const s=this.graph.getNodes();if(s.length!==p.length)throw new Error("The size of nodes and OPs do not match.");this._ops=p.map((h,f)=>new c(h,s[f])),this.reset(),this._starter=[],this._ops.forEach((h,f)=>{let l=!0;for(const o of h.node.inputs)if(!this._values[o]&&this.graph.getInputIndices().indexOf(o)===-1){l=!1;break}l&&this._starter.push(f)})})}reset(){this._values=this.graph.getValues().map(p=>p.tensor)}async execute(p,s){return this.profiler.event("session","ExecutionPlan.execute",async()=>{this.reset();const h=p.createInferenceHandler(),f=this.graph.getInputIndices();if(s.length!==f.length)throw new Error(`number of input tensors don't match the number of inputs to the model: actual: ${s.length} expected: ${f.length}`);s.forEach((i,d)=>{const g=f[d];this._values[g]=i});const l=this._starter.slice(0),o=this.graph.getValues(),t=this.graph.getNodes();let e=0;for(;e<l.length;){const i=l[e++],d=this._ops[i],g=d.node.inputs.map(v=>this._values[v]);if(g.indexOf(void 0)!==-1)throw new Error(`unresolved input detected: op: ${d.node}`);const m=g;u.Logger.verbose("ExecPlan",`Runing op:${d.node.name} (${m.map((v,w)=>`'${d.node.inputs[w]}': ${v.type}[${v.dims.join(",")}]`).join(", ")})`);const b=await this.profiler.event("node",d.node.name,async()=>d.op.impl(h,m,d.op.context));if(b.length!==d.node.outputs.length)throw new Error("the size of output does not match model definition.");b.forEach((v,w)=>{const S=d.node.outputs[w];if(this._values[S])throw new Error(`output [${S}] already has value: op:${d.node.name}`);this._values[S]=v});const _=new Set;b.forEach((v,w)=>{const S=d.node.outputs[w];for(const A of o[S].to){const O=t[A];let x=!0;for(const I of O.inputs)if(!this._values[I]){x=!1;break}x&&_.add(A)}}),l.push(..._)}const r=[];for(let i=0;i<this.graph.getOutputIndices().length;i++){const d=this.graph.getOutputIndices()[i],g=this._values[d];if(g===void 0)throw new Error(`required output [${d}] does not have value`);d===0?await g.getData():g.data,r.push(g)}return u.Logger.verbose("ExecPlan","disposing of inferenceHandler"),h.dispose(),r})}}},7070:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.Graph=void 0;const u=a(1446),c=a(7778),p=a(9395),s=a(9162),h=a(2517);var f=p.onnxruntime.experimental.fbs;n.Graph={from:(e,r)=>new t(e,r)};class l{constructor(r){this._from=void 0,this._to=[],this.tensor=void 0,this.type=void 
0,r&&(this.type=h.ProtoUtil.tensorValueTypeFromProto(r.type.tensorType))}get from(){return this._from}get to(){return this._to}}class o{constructor(r,i){r instanceof u.onnx.NodeProto?(this.name=r.name,this.opType=r.opType,this.attributes=new c.Attribute(r.attribute)):r instanceof f.Node&&(this.name=i??r.name(),this.opType=r.opType(),this.attributes=new c.Attribute(h.ProtoUtil.tensorAttributesFromORTFormat(r))),this.inputs=[],this.outputs=[],this.executeNode=!0}}class t{constructor(r,i){if(!r)throw new TypeError("graph is empty");this.buildGraph(r),this.transformGraph(i),this.checkIsAcyclic()}getInputIndices(){return this._allInputIndices}getInputNames(){return this._allInputNames}getOutputIndices(){return this._allOutputIndices}getOutputNames(){return this._allOutputNames}getValues(){return this._allData}getNodes(){return this._nodes}buildGraph(r){if(r instanceof u.onnx.GraphProto)this.buildGraphFromOnnxFormat(r);else{if(!(r instanceof f.Graph))throw new TypeError("Graph type is not supported.");this.buildGraphFromOrtFormat(r)}}buildGraphFromOnnxFormat(r){const i=new Map;this._allData=[],this._allInputIndices=[],this._allInputNames=[],this._allOutputIndices=[],this._allOutputNames=[],this._nodes=[];const d=new Map;if(!r.input)throw new Error("missing information in graph: input");const g=[];for(const m of r.input){if(i.has(m.name))throw new Error(`duplicated input name: ${m.name}`);const b=this._allData.push(new l(m))-1;i.set(m.name,b),g.push(m.name)}if(!r.initializer)throw new Error("missing information in graph: initializer");for(const m of r.initializer){let b=i.get(m.name);if(b===void 0){const _=new l;_.type={shape:{dims:h.ProtoUtil.tensorDimsFromProto(m.dims)},tensorType:h.ProtoUtil.tensorDataTypeFromProto(m.dataType)},b=this._allData.push(_)-1,i.set(m.name,b)}this._allData[b]._from=-1,this._allData[b].tensor=s.Tensor.fromProto(m)}for(let m=0;m<this._allData.length;m++)this._allData[m].tensor||(this._allInputIndices.push(m),this._allInputNames.push(g[m]));if(!r.output)throw new Error("missing information in graph: output");for(const m of r.output){if(i.has(m.name))throw new Error(`duplicated output name: ${m.name}`);const b=this._allData.push(new l(m))-1;i.set(m.name,b),this._allOutputIndices.push(b),this._allOutputNames.push(m.name)}if(!r.node)throw new Error("missing information in graph: node");for(const m of r.node){if(!m.name)for(let _=0;;_++){const v=`unnamed_${m.opType}_${_}`;if(!d.has(v)){m.name=v;break}}if(d.has(m.name))throw new Error(`duplicated node name: ${m.name}`);const b=this._nodes.push(new o(m))-1;d.set(m.name,b)}for(let m=0;m<this._nodes.length;m++){const b=this._nodes[m],_=r.node[m];if(!_.output)throw new Error(`missing output for node: ${_.name}`);for(const v of _.output){let w=i.get(v);if(w===void 0&&(w=this._allData.push(new l)-1,i.set(v,w)),b.outputs.push(w),this._allData[w]._from!==void 0)throw new Error(`multiple nodes output to one data value: ${w}`);if(this._allData[w]._from=m,_.opType==="Constant"){if(!_.attribute||_.attribute.length!==1||!_.attribute[0].t)throw new Error("missing attributes or missing tensor value in attributes for this Constant operator");if(!_.output||_.output.length!==1)throw new Error("missing output or incorrect number of outputs for this Constant operator");b.outputs.pop(),b.executeNode=!1,this._allData[w]._from=-1,this._allData[w].tensor=s.Tensor.fromProto(_.attribute[0].t)}}}for(let m=0;m<this._nodes.length;m++){const b=this._nodes[m],_=r.node[m];if(!_.input)throw new Error(`missing input for node: ${_.name}`);for(const v of 
_.input){const w=i.get(v);if(w===void 0){if(v===""&&_.input.length===3&&_.opType==="Resize")continue;throw new Error(`unrecognized input '${v}' for node: ${_.name}`)}b.inputs.push(w),this._allData[w]._to.push(m)}}return!0}buildGraphFromOrtFormat(r){var i,d,g;const m=new Map;this._allData=[],this._allInputIndices=[],this._allInputNames=[],this._allOutputIndices=[],this._allOutputNames=[],this._nodes=[];const b=new Map,_=[];for(let v=0;v<r.inputsLength();v++){const w=r.inputs(v);if(m.has(w))throw new Error(`duplicated input name: ${w}`);for(let S=0;S<r.nodeArgsLength();S++)if(((i=r.nodeArgs(S))===null||i===void 0?void 0:i.name())===w){const A=new l;if(((g=(d=r.nodeArgs(S))===null||d===void 0?void 0:d.type())===null||g===void 0?void 0:g.valueType())!==f.TypeInfoValue.tensor_type)throw new Error("Unexpected value type for the nodeArg.");const O=r.nodeArgs(S).type().value(new f.TensorTypeAndShape),x=h.ProtoUtil.tensorDataTypeFromProto(O.elemType()),I=O.shape(),N=[];for(let L=0;L<I.dimLength();L++)N.push(h.LongUtil.longToNumber(I.dim(L).value().dimValue()));A.type={shape:{dims:N},tensorType:x};const B=this._allData.push(A)-1;m.set(w,B),_.push(w)}}for(let v=0;v<r.initializersLength();v++){const w=r.initializers(v);let S=m.get(w.name());if(S===void 0){const A=new l,O=h.ProtoUtil.tensorDimsFromORTFormat(w),x=h.ProtoUtil.tensorDataTypeFromProto(w.dataType());A.type={shape:{dims:O},tensorType:x},S=this._allData.push(A)-1,m.set(w.name(),S)}this._allData[S]._from=-1,this._allData[S].tensor=s.Tensor.fromOrtTensor(w)}for(let v=0;v<this._allData.length;v++)this._allData[v].tensor||(this._allInputIndices.push(v),this._allInputNames.push(_[v]));for(let v=0;v<r.outputsLength();v++){const w=r.outputs(v);if(m.has(w))throw new Error(`duplicated output name: ${w}`);const S=this._allData.push(new l)-1;m.set(w,S),this._allOutputIndices.push(S),this._allOutputNames.push(w)}if(!r.nodes)throw new Error("missing information in graph: node");for(let v=0;v<r.nodesLength();v++){const w=r.nodes(v);let S=w.name();if(!S)for(let O=0;S=`unnamed_${w.opType()}_${O}`,b.has(S);O++);if(b.has(S))throw new Error(`duplicated node name: ${S}`);const A=this._nodes.push(new o(w,S))-1;b.set(S,A)}for(let v=0;v<this._nodes.length;v++){const w=this._nodes[v],S=r.nodes(v);if(S==null)throw new Error(`No node exists at index ${v}`);if((S==null?void 0:S.outputsLength())===0)throw new Error(`missing output for node: ${S.name}`);for(let A=0;A<(S==null?void 0:S.outputsLength());A++){const O=S==null?void 0:S.outputs(A);let x=m.get(O);if(x===void 0&&(x=this._allData.push(new l)-1,m.set(O,x)),w.outputs.push(x),this._allData[x]._from!==void 0)throw new Error(`multiple nodes output to one data value: ${x}`);if(this._allData[x]._from=v,S.opType()==="Constant"){if(S.attributesLength()!==1||!S.attributes(0).t())throw new Error("missing attributes or missing tensor value in attributes for this Constant operator");if(S.outputsLength()!==1)throw new Error("missing output or incorrect number of outputs for this Constant operator");w.outputs.pop(),w.executeNode=!1,this._allData[x]._from=-1,this._allData[x].tensor=s.Tensor.fromOrtTensor(S.attributes(0).t())}}}for(let v=0;v<this._nodes.length;v++){const w=this._nodes[v],S=r.nodes(v);if(S.inputsLength()===0)throw new Error(`missing input for node: ${S.name}`);for(let A=0;A<S.inputsLength();A++){const O=S.inputs(A),x=m.get(O);if(x===void 0)throw new Error(`unrecognized input '${O}' for node: ${S.name()}`);w.inputs.push(x),this._allData[x]._to.push(v)}}}checkIsAcyclic(){const r=new 
Set;this._allInputIndices.forEach(g=>{this._allData[g]._to.forEach(m=>{r.add(m)})});const i=Array.from(r),d=new Array(this._nodes.length).fill("white");for(;i.length>0;){const g=i.pop();d[g]==="gray"?d[g]="black":(i.push(g),d[g]="gray",this._nodes[g].outputs.forEach(m=>{const b=this._allData[m];if(b.tensor!==void 0)throw new Error("node outputs should not be initialized");if(b._from!==g)throw new Error("from property of the Value object doesn't match index of Node being processed");b._to.forEach(_=>{if(d[_]==="gray")throw new Error("model graph is cyclic");d[_]==="white"&&i.push(_)})}))}}transformGraph(r){this.removeAllIdentityNodes(),this.removeAllDropoutNodes(),this.fuseConvActivationNodes(),r&&r.transformGraph(this),this.finalizeGraph()}finalizeGraph(){let r=0;for(let i=0;i<this._nodes.length;i++)this._nodes[i].executeNode?r>0&&(this._nodes[i].inputs.forEach(d=>{const g=this._allData[d]._to.indexOf(i+r);g!==-1&&(this._allData[d]._to[g]=i)}),this._nodes[i].outputs.forEach(d=>{this._allData[d]._from&&this._allData[d]._from===i+r&&(this._allData[d]._from=i)})):(r++,this._nodes[i].outputs.forEach(d=>{this._allData[d]._from=-2}),this._nodes.splice(i,1),i--);r=0;for(let i=0;i<this._allData.length;i++)if(this._allData[i].from!==-2||this._allOutputIndices.indexOf(i+r)!==-1){if(r>0){let d=-1;this._allData[i].from!==void 0&&this._allData[i].from!==-1?(d=this._nodes[this._allData[i].from].outputs.indexOf(i+r),d!==-1&&(this._nodes[this._allData[i].from].outputs[d]=i)):(d=this._allInputIndices.indexOf(i+r),d!==-1&&(this._allInputIndices[d]=i)),this._allData[i].to.forEach(g=>{d=this._nodes[g].inputs.indexOf(i+r),d!==-1&&(this._nodes[g].inputs[d]=i)}),this._allData[i].to.length===0&&(d=this._allOutputIndices.indexOf(i+r),d!==-1&&(this._allOutputIndices[d]=i))}}else r++,this._allData.splice(i,1),i--}deleteNode(r){const i=this._nodes[r];if(i.outputs.length>1){for(let v=1;v<i.outputs.length;v++)if(this._allData[i.outputs[v]].to.length>0)throw new Error("Node deletion with more than one output connected to other nodes is not supported. ")}i.executeNode=!1;const d=i.inputs[0],g=i.outputs[0],m=this._allData[g].to,b=this._allData[d].to.indexOf(r);if(b===-1)throw new Error("The Value object doesn't have the current Node in it's 'to' property ");this._allData[d].to.splice(b,1),this._allData[g]._to=[];const _=this._allOutputIndices.indexOf(g);if(_!==-1&&(this._allOutputIndices[_]=d),m&&m.length>0)for(const v of m){const w=this._nodes[v].inputs.indexOf(g);if(w===-1)throw new Error("The Node object doesn't have the output Value in it's 'inputs' property ");this._nodes[v].inputs[w]=d,this._allData[d].to.push(v)}}removeAllDropoutNodes(){let r=0;for(const i of this._nodes){if(i.opType==="Dropout"){if(i.inputs.length!==1)throw new Error("Dropout nodes should only contain one input. 
");if(i.outputs.length!==1&&i.outputs.length!==2)throw new Error("Dropout nodes should contain either 1 or 2 output(s)");if(i.outputs.length===2&&this._allData[i.outputs[1]]._to.length!==0)throw new Error("Dropout nodes's second output should not be referenced by other nodes");this.deleteNode(r)}r++}}removeAllIdentityNodes(){let r=0;for(const i of this._nodes)i.opType==="Identity"&&this.deleteNode(r),r++}isActivation(r){switch(r.opType){case"Relu":case"Sigmoid":case"Clip":return!0;default:return!1}}fuseConvActivationNodes(){for(const r of this._nodes)if(r.opType==="Conv"){const i=this._allData[r.outputs[0]]._to;if(i.length===1&&this.isActivation(this._nodes[i[0]])){const d=this._nodes[i[0]];if(d.opType==="Clip")if(d.inputs.length===1)try{r.attributes.set("activation_params","floats",[d.attributes.getFloat("min"),d.attributes.getFloat("max")])}catch{r.attributes.set("activation_params","floats",[h.MIN_CLIP,h.MAX_CLIP])}else{if(!(d.inputs.length>=3&&this._allData[d.inputs[1]].tensor!==void 0&&this._allData[d.inputs[2]].tensor!==void 0))continue;r.attributes.set("activation_params","floats",[this._allData[d.inputs[1]].tensor.floatData[0],this._allData[d.inputs[2]].tensor.floatData[0]])}r.attributes.set("activation","string",d.opType),this.deleteNode(i[0])}}}}},6231:(y,n)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.now=n.Profiler=n.Logger=void 0;const a={verbose:1e3,info:2e3,warning:4e3,error:5e3,fatal:6e3},u={none:new class{log(o,t,e){}},console:new class{log(o,t,e){console.log(`${this.color(o)} ${e?"\x1B[35m"+e+"\x1B[0m ":""}${t}`)}color(o){switch(o){case"verbose":return"\x1B[34;40mv\x1B[0m";case"info":return"\x1B[32mi\x1B[0m";case"warning":return"\x1B[30;43mw\x1B[0m";case"error":return"\x1B[31;40me\x1B[0m";case"fatal":return"\x1B[101mf\x1B[0m";default:throw new Error(`unsupported severity: ${o}`)}}}},c={provider:"console",minimalSeverity:"warning",logDateTime:!0,logSourceLocation:!1};let p={"":c};function s(o,t,e,r){if(t===void 0)return i=o,{verbose:s.verbose.bind(null,i),info:s.info.bind(null,i),warning:s.warning.bind(null,i),error:s.error.bind(null,i),fatal:s.fatal.bind(null,i)};if(e===void 0)h(o,t);else if(typeof e=="number"&&r===void 0)h(o,t);else if(typeof e=="string"&&r===void 0)h(o,e,0,t);else{if(typeof e!="string"||typeof r!="number")throw new TypeError("input is valid");h(o,e,0,t)}var i}function h(o,t,e,r){const i=p[r||""]||p[""];a[o]<a[i.minimalSeverity]||(i.logDateTime&&(t=`${new Date().toISOString()}|${t}`),i.logSourceLocation,u[i.provider].log(o,t,r))}(function(o){function t(r){p={},e("",r||{})}function e(r,i){if(r==="*")t(i);else{const d=p[r]||c;p[r]={provider:i.provider||d.provider,minimalSeverity:i.minimalSeverity||d.minimalSeverity,logDateTime:i.logDateTime===void 0?d.logDateTime:i.logDateTime,logSourceLocation:i.logSourceLocation===void 0?d.logSourceLocation:i.logSourceLocation}}}o.verbose=function(r,i){o("verbose",r,i)},o.info=function(r,i){o("info",r,i)},o.warning=function(r,i){o("warning",r,i)},o.error=function(r,i){o("error",r,i)},o.fatal=function(r,i){o("fatal",r,i)},o.reset=t,o.set=e,o.setWithEnv=function(r){const i={};r.logLevel&&(i.minimalSeverity=r.logLevel),e("",i)}})(s||(s={})),n.Logger=s;class f{constructor(t,e,r,i,d,g){this.category=t,this.name=e,this.startTime=r,this.endCallback=i,this.timer=d,this.ctx=g}end(){return this.endCallback(this)}async checkTimer(){if(this.ctx===void 0||this.timer===void 0)throw new Error("No webgl timer found");return this.ctx.endTimer(),this.ctx.waitForQueryAndGetTime(this.timer)}}class 
l{constructor(t,e,r,i){this.category=t,this.name=e,this.startTime=r,this.endTime=i}}n.Profiler=class{static create(o){return o===void 0?new this:new this(o.maxNumberEvents,o.flushBatchSize,o.flushIntervalInMilliseconds)}constructor(o,t,e){this._started=!1,this._flushPointer=0,this._started=!1,this._maxNumberEvents=o===void 0?1e4:o,this._flushBatchSize=t===void 0?10:t,this._flushIntervalInMilliseconds=e===void 0?5e3:e}start(){this._started=!0,this._timingEvents=[],this._flushTime=(0,n.now)(),this._flushPointer=0}stop(){for(this._started=!1;this._flushPointer<this._timingEvents.length;this._flushPointer++)this.logOneEvent(this._timingEvents[this._flushPointer])}event(o,t,e,r){const i=this._started?this.begin(o,t,r):void 0;let d=!1;const g=e();if(g&&typeof g.then=="function")return d=!0,new Promise((m,b)=>{g.then(async _=>{i&&await i.end(),m(_)},async _=>{i&&await i.end(),b(_)})});if(!d&&i){const m=i.end();if(m&&typeof m.then=="function")return new Promise((b,_)=>{m.then(()=>{b(g)},v=>{_(v)})})}return g}begin(o,t,e){if(!this._started)throw new Error("profiler is not started yet");if(e===void 0){const r=(0,n.now)();return this.flush(r),new f(o,t,r,i=>this.endSync(i))}{const r=e.beginTimer();return new f(o,t,0,async i=>this.end(i),r,e)}}async end(o){const t=await o.checkTimer();this._timingEvents.length<this._maxNumberEvents&&(this._timingEvents.push(new l(o.category,o.name,o.startTime,t)),this.flush(t))}endSync(o){const t=(0,n.now)();this._timingEvents.length<this._maxNumberEvents&&(this._timingEvents.push(new l(o.category,o.name,o.startTime,t)),this.flush(t))}logOneEvent(o){n.Logger.verbose(`Profiler.${o.category}`,`${(o.endTime-o.startTime).toFixed(2)}ms on event '${o.name}' at ${o.endTime.toFixed(2)}`)}flush(o){if(this._timingEvents.length-this._flushPointer>=this._flushBatchSize||o-this._flushTime>=this._flushIntervalInMilliseconds){for(const t=this._flushPointer;this._flushPointer<t+this._flushBatchSize&&this._flushPointer<this._timingEvents.length;this._flushPointer++)this.logOneEvent(this._timingEvents[this._flushPointer]);this._flushTime=(0,n.now)()}}get started(){return this._started}},n.now=typeof performance<"u"&&performance.now?()=>performance.now():Date.now},2644:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.Model=void 0;const u=a(5686),c=a(1446),p=a(7070),s=a(9395),h=a(2517);var f=s.onnxruntime.experimental.fbs;n.Model=class{constructor(){}load(l,o,t){if(!t)try{return void this.loadFromOnnxFormat(l,o)}catch(e){if(t!==void 0)throw e}this.loadFromOrtFormat(l,o)}loadFromOnnxFormat(l,o){const t=c.onnx.ModelProto.decode(l);if(h.LongUtil.longToNumber(t.irVersion)<3)throw new Error("only support ONNX model with IR_VERSION>=3");this._opsets=t.opsetImport.map(e=>({domain:e.domain,version:h.LongUtil.longToNumber(e.version)})),this._graph=p.Graph.from(t.graph,o)}loadFromOrtFormat(l,o){const t=new u.flatbuffers.ByteBuffer(l),e=f.InferenceSession.getRootAsInferenceSession(t).model();if(h.LongUtil.longToNumber(e.irVersion())<3)throw new Error("only support ONNX model with IR_VERSION>=3");this._opsets=[];for(let r=0;r<e.opsetImportLength();r++){const i=e.opsetImport(r);this._opsets.push({domain:i==null?void 0:i.domain(),version:h.LongUtil.longToNumber(i.version())})}this._graph=p.Graph.from(e.graph(),o)}get graph(){return this._graph}get opsets(){return this._opsets}}},782:(y,n)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.FLOAT_TYPES=n.INT_TYPES=n.NUMBER_TYPES=void 
0,n.NUMBER_TYPES=["float32","float64","int32","int16","int8","uint16","uint32","uint8"],n.INT_TYPES=["int32","int16","int8","uint16","uint32","uint8"],n.FLOAT_TYPES=["float32","float64"]},1047:(y,n)=>{function a(u,c){if(c.endsWith("+")){const p=Number.parseInt(c.substring(0,c.length-1),10);return!isNaN(p)&&p<=u}if(c.split("-").length===2){const p=c.split("-"),s=Number.parseInt(p[0],10),h=Number.parseInt(p[1],10);return!isNaN(s)&&!isNaN(h)&&s<=u&&u<=h}return Number.parseInt(c,10)===u}Object.defineProperty(n,"__esModule",{value:!0}),n.resolveOperator=void 0,n.resolveOperator=function(u,c,p){for(const s of p){const h=s[0],f=s[1],l=s[2],o=s[3],t=s[4];if(u.opType===h){for(const e of c)if((e.domain===f||e.domain==="ai.onnx"&&f==="")&&a(e.version,l))return{opImpl:o,opInit:t}}}throw new TypeError(`cannot resolve operator '${u.opType}' with opsets: ${c.map(s=>`${s.domain||"ai.onnx"} v${s.version}`).join(", ")}`)}},9395:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.onnxruntime=void 0;const u=a(5686);var c,p;c=n.onnxruntime||(n.onnxruntime={}),function(s){(function(h){h[h.UNDEFINED=0]="UNDEFINED",h[h.FLOAT=1]="FLOAT",h[h.INT=2]="INT",h[h.STRING=3]="STRING",h[h.TENSOR=4]="TENSOR",h[h.GRAPH=5]="GRAPH",h[h.FLOATS=6]="FLOATS",h[h.INTS=7]="INTS",h[h.STRINGS=8]="STRINGS",h[h.TENSORS=9]="TENSORS",h[h.GRAPHS=10]="GRAPHS",h[h.SPARSE_TENSOR=11]="SPARSE_TENSOR",h[h.SPARSE_TENSORS=12]="SPARSE_TENSORS"})(s.AttributeType||(s.AttributeType={}))}((p=c.experimental||(c.experimental={})).fbs||(p.fbs={})),function(s){(function(h){(function(f){(function(l){l[l.UNKNOWN=0]="UNKNOWN",l[l.VALUE=1]="VALUE",l[l.PARAM=2]="PARAM"})(f.DimensionValueType||(f.DimensionValueType={}))})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){(function(l){l[l.UNDEFINED=0]="UNDEFINED",l[l.FLOAT=1]="FLOAT",l[l.UINT8=2]="UINT8",l[l.INT8=3]="INT8",l[l.UINT16=4]="UINT16",l[l.INT16=5]="INT16",l[l.INT32=6]="INT32",l[l.INT64=7]="INT64",l[l.STRING=8]="STRING",l[l.BOOL=9]="BOOL",l[l.FLOAT16=10]="FLOAT16",l[l.DOUBLE=11]="DOUBLE",l[l.UINT32=12]="UINT32",l[l.UINT64=13]="UINT64",l[l.COMPLEX64=14]="COMPLEX64",l[l.COMPLEX128=15]="COMPLEX128",l[l.BFLOAT16=16]="BFLOAT16"})(f.TensorDataType||(f.TensorDataType={}))})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){(function(l){l[l.Primitive=0]="Primitive",l[l.Fused=1]="Fused"})(f.NodeType||(f.NodeType={}))})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){(function(l){l[l.NONE=0]="NONE",l[l.tensor_type=1]="tensor_type",l[l.sequence_type=2]="sequence_type",l[l.map_type=3]="map_type"})(f.TypeInfoValue||(f.TypeInfoValue={}))})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsShape(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsShape(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}dim(t,e){let r=this.bb.__offset(this.bb_pos,4);return r?(e||new s.experimental.fbs.Dimension).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}dimLength(){let t=this.bb.__offset(this.bb_pos,4);return 
t?this.bb.__vector_len(this.bb_pos+t):0}static startShape(t){t.startObject(1)}static addDim(t,e){t.addFieldOffset(0,e,0)}static createDimVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startDimVector(t,e){t.startVector(4,e,4)}static endShape(t){return t.endObject()}static createShape(t,e){return l.startShape(t),l.addDim(t,e),l.endShape(t)}}f.Shape=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsDimension(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsDimension(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}value(t){let e=this.bb.__offset(this.bb_pos,4);return e?(t||new s.experimental.fbs.DimensionValue).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}denotation(t){let e=this.bb.__offset(this.bb_pos,6);return e?this.bb.__string(this.bb_pos+e,t):null}static startDimension(t){t.startObject(2)}static addValue(t,e){t.addFieldOffset(0,e,0)}static addDenotation(t,e){t.addFieldOffset(1,e,0)}static endDimension(t){return t.endObject()}static createDimension(t,e,r){return l.startDimension(t),l.addValue(t,e),l.addDenotation(t,r),l.endDimension(t)}}f.Dimension=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsDimensionValue(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsDimensionValue(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}dimType(){let t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt8(this.bb_pos+t):s.experimental.fbs.DimensionValueType.UNKNOWN}dimValue(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.readInt64(this.bb_pos+t):this.bb.createLong(0,0)}dimParam(t){let e=this.bb.__offset(this.bb_pos,8);return e?this.bb.__string(this.bb_pos+e,t):null}static startDimensionValue(t){t.startObject(3)}static addDimType(t,e){t.addFieldInt8(0,e,s.experimental.fbs.DimensionValueType.UNKNOWN)}static addDimValue(t,e){t.addFieldInt64(1,e,t.createLong(0,0))}static addDimParam(t,e){t.addFieldOffset(2,e,0)}static endDimensionValue(t){return t.endObject()}static createDimensionValue(t,e,r,i){return l.startDimensionValue(t),l.addDimType(t,e),l.addDimValue(t,r),l.addDimParam(t,i),l.endDimensionValue(t)}}f.DimensionValue=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsTensorTypeAndShape(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsTensorTypeAndShape(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}elemType(){let t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt32(this.bb_pos+t):s.experimental.fbs.TensorDataType.UNDEFINED}shape(t){let e=this.bb.__offset(this.bb_pos,6);return e?(t||new 
s.experimental.fbs.Shape).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}static startTensorTypeAndShape(t){t.startObject(2)}static addElemType(t,e){t.addFieldInt32(0,e,s.experimental.fbs.TensorDataType.UNDEFINED)}static addShape(t,e){t.addFieldOffset(1,e,0)}static endTensorTypeAndShape(t){return t.endObject()}static createTensorTypeAndShape(t,e,r){return l.startTensorTypeAndShape(t),l.addElemType(t,e),l.addShape(t,r),l.endTensorTypeAndShape(t)}}f.TensorTypeAndShape=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsMapType(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsMapType(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}keyType(){let t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt32(this.bb_pos+t):s.experimental.fbs.TensorDataType.UNDEFINED}valueType(t){let e=this.bb.__offset(this.bb_pos,6);return e?(t||new s.experimental.fbs.TypeInfo).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}static startMapType(t){t.startObject(2)}static addKeyType(t,e){t.addFieldInt32(0,e,s.experimental.fbs.TensorDataType.UNDEFINED)}static addValueType(t,e){t.addFieldOffset(1,e,0)}static endMapType(t){return t.endObject()}static createMapType(t,e,r){return l.startMapType(t),l.addKeyType(t,e),l.addValueType(t,r),l.endMapType(t)}}f.MapType=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsSequenceType(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsSequenceType(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}elemType(t){let e=this.bb.__offset(this.bb_pos,4);return e?(t||new s.experimental.fbs.TypeInfo).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}static startSequenceType(t){t.startObject(1)}static addElemType(t,e){t.addFieldOffset(0,e,0)}static endSequenceType(t){return t.endObject()}static createSequenceType(t,e){return l.startSequenceType(t),l.addElemType(t,e),l.endSequenceType(t)}}f.SequenceType=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(h.fbs||(h.fbs={})).EdgeEnd=class{constructor(){this.bb=null,this.bb_pos=0}__init(f,l){return this.bb_pos=f,this.bb=l,this}nodeIndex(){return this.bb.readUint32(this.bb_pos)}srcArgIndex(){return this.bb.readInt32(this.bb_pos+4)}dstArgIndex(){return this.bb.readInt32(this.bb_pos+8)}static createEdgeEnd(f,l,o,t){return f.prep(4,12),f.writeInt32(t),f.writeInt32(o),f.writeInt32(l),f.offset()}}})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsNodeEdge(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsNodeEdge(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}nodeIndex(){let 
t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readUint32(this.bb_pos+t):0}inputEdges(t,e){let r=this.bb.__offset(this.bb_pos,6);return r?(e||new s.experimental.fbs.EdgeEnd).__init(this.bb.__vector(this.bb_pos+r)+12*t,this.bb):null}inputEdgesLength(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.__vector_len(this.bb_pos+t):0}outputEdges(t,e){let r=this.bb.__offset(this.bb_pos,8);return r?(e||new s.experimental.fbs.EdgeEnd).__init(this.bb.__vector(this.bb_pos+r)+12*t,this.bb):null}outputEdgesLength(){let t=this.bb.__offset(this.bb_pos,8);return t?this.bb.__vector_len(this.bb_pos+t):0}static startNodeEdge(t){t.startObject(3)}static addNodeIndex(t,e){t.addFieldInt32(0,e,0)}static addInputEdges(t,e){t.addFieldOffset(1,e,0)}static startInputEdgesVector(t,e){t.startVector(12,e,4)}static addOutputEdges(t,e){t.addFieldOffset(2,e,0)}static startOutputEdgesVector(t,e){t.startVector(12,e,4)}static endNodeEdge(t){return t.endObject()}static createNodeEdge(t,e,r,i){return l.startNodeEdge(t),l.addNodeIndex(t,e),l.addInputEdges(t,r),l.addOutputEdges(t,i),l.endNodeEdge(t)}}f.NodeEdge=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsNode(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsNode(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}name(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}docString(t){let e=this.bb.__offset(this.bb_pos,6);return e?this.bb.__string(this.bb_pos+e,t):null}domain(t){let e=this.bb.__offset(this.bb_pos,8);return e?this.bb.__string(this.bb_pos+e,t):null}sinceVersion(){let t=this.bb.__offset(this.bb_pos,10);return t?this.bb.readInt32(this.bb_pos+t):0}index(){let t=this.bb.__offset(this.bb_pos,12);return t?this.bb.readUint32(this.bb_pos+t):0}opType(t){let e=this.bb.__offset(this.bb_pos,14);return e?this.bb.__string(this.bb_pos+e,t):null}type(){let t=this.bb.__offset(this.bb_pos,16);return t?this.bb.readInt32(this.bb_pos+t):s.experimental.fbs.NodeType.Primitive}executionProviderType(t){let e=this.bb.__offset(this.bb_pos,18);return e?this.bb.__string(this.bb_pos+e,t):null}inputs(t,e){let r=this.bb.__offset(this.bb_pos,20);return r?this.bb.__string(this.bb.__vector(this.bb_pos+r)+4*t,e):null}inputsLength(){let t=this.bb.__offset(this.bb_pos,20);return t?this.bb.__vector_len(this.bb_pos+t):0}outputs(t,e){let r=this.bb.__offset(this.bb_pos,22);return r?this.bb.__string(this.bb.__vector(this.bb_pos+r)+4*t,e):null}outputsLength(){let t=this.bb.__offset(this.bb_pos,22);return t?this.bb.__vector_len(this.bb_pos+t):0}attributes(t,e){let r=this.bb.__offset(this.bb_pos,24);return r?(e||new s.experimental.fbs.Attribute).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}attributesLength(){let t=this.bb.__offset(this.bb_pos,24);return t?this.bb.__vector_len(this.bb_pos+t):0}inputArgCounts(t){let e=this.bb.__offset(this.bb_pos,26);return e?this.bb.readInt32(this.bb.__vector(this.bb_pos+e)+4*t):0}inputArgCountsLength(){let t=this.bb.__offset(this.bb_pos,26);return t?this.bb.__vector_len(this.bb_pos+t):0}inputArgCountsArray(){let t=this.bb.__offset(this.bb_pos,26);return t?new 
Int32Array(this.bb.bytes().buffer,this.bb.bytes().byteOffset+this.bb.__vector(this.bb_pos+t),this.bb.__vector_len(this.bb_pos+t)):null}implicitInputs(t,e){let r=this.bb.__offset(this.bb_pos,28);return r?this.bb.__string(this.bb.__vector(this.bb_pos+r)+4*t,e):null}implicitInputsLength(){let t=this.bb.__offset(this.bb_pos,28);return t?this.bb.__vector_len(this.bb_pos+t):0}static startNode(t){t.startObject(13)}static addName(t,e){t.addFieldOffset(0,e,0)}static addDocString(t,e){t.addFieldOffset(1,e,0)}static addDomain(t,e){t.addFieldOffset(2,e,0)}static addSinceVersion(t,e){t.addFieldInt32(3,e,0)}static addIndex(t,e){t.addFieldInt32(4,e,0)}static addOpType(t,e){t.addFieldOffset(5,e,0)}static addType(t,e){t.addFieldInt32(6,e,s.experimental.fbs.NodeType.Primitive)}static addExecutionProviderType(t,e){t.addFieldOffset(7,e,0)}static addInputs(t,e){t.addFieldOffset(8,e,0)}static createInputsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startInputsVector(t,e){t.startVector(4,e,4)}static addOutputs(t,e){t.addFieldOffset(9,e,0)}static createOutputsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startOutputsVector(t,e){t.startVector(4,e,4)}static addAttributes(t,e){t.addFieldOffset(10,e,0)}static createAttributesVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startAttributesVector(t,e){t.startVector(4,e,4)}static addInputArgCounts(t,e){t.addFieldOffset(11,e,0)}static createInputArgCountsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addInt32(e[r]);return t.endVector()}static startInputArgCountsVector(t,e){t.startVector(4,e,4)}static addImplicitInputs(t,e){t.addFieldOffset(12,e,0)}static createImplicitInputsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startImplicitInputsVector(t,e){t.startVector(4,e,4)}static endNode(t){return t.endObject()}static createNode(t,e,r,i,d,g,m,b,_,v,w,S,A,O){return l.startNode(t),l.addName(t,e),l.addDocString(t,r),l.addDomain(t,i),l.addSinceVersion(t,d),l.addIndex(t,g),l.addOpType(t,m),l.addType(t,b),l.addExecutionProviderType(t,_),l.addInputs(t,v),l.addOutputs(t,w),l.addAttributes(t,S),l.addInputArgCounts(t,A),l.addImplicitInputs(t,O),l.endNode(t)}}f.Node=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsValueInfo(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsValueInfo(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}name(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}docString(t){let e=this.bb.__offset(this.bb_pos,6);return e?this.bb.__string(this.bb_pos+e,t):null}type(t){let e=this.bb.__offset(this.bb_pos,8);return e?(t||new s.experimental.fbs.TypeInfo).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}static startValueInfo(t){t.startObject(3)}static addName(t,e){t.addFieldOffset(0,e,0)}static addDocString(t,e){t.addFieldOffset(1,e,0)}static addType(t,e){t.addFieldOffset(2,e,0)}static endValueInfo(t){return t.endObject()}static createValueInfo(t,e,r,i){return 
l.startValueInfo(t),l.addName(t,e),l.addDocString(t,r),l.addType(t,i),l.endValueInfo(t)}}f.ValueInfo=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsTypeInfo(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsTypeInfo(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}denotation(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}valueType(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.readUint8(this.bb_pos+t):s.experimental.fbs.TypeInfoValue.NONE}value(t){let e=this.bb.__offset(this.bb_pos,8);return e?this.bb.__union(t,this.bb_pos+e):null}static startTypeInfo(t){t.startObject(3)}static addDenotation(t,e){t.addFieldOffset(0,e,0)}static addValueType(t,e){t.addFieldInt8(1,e,s.experimental.fbs.TypeInfoValue.NONE)}static addValue(t,e){t.addFieldOffset(2,e,0)}static endTypeInfo(t){return t.endObject()}static createTypeInfo(t,e,r,i){return l.startTypeInfo(t),l.addDenotation(t,e),l.addValueType(t,r),l.addValue(t,i),l.endTypeInfo(t)}}f.TypeInfo=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsOperatorSetId(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsOperatorSetId(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}domain(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}version(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.readInt64(this.bb_pos+t):this.bb.createLong(0,0)}static startOperatorSetId(t){t.startObject(2)}static addDomain(t,e){t.addFieldOffset(0,e,0)}static addVersion(t,e){t.addFieldInt64(1,e,t.createLong(0,0))}static endOperatorSetId(t){return t.endObject()}static createOperatorSetId(t,e,r){return l.startOperatorSetId(t),l.addDomain(t,e),l.addVersion(t,r),l.endOperatorSetId(t)}}f.OperatorSetId=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsTensor(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsTensor(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}name(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}docString(t){let e=this.bb.__offset(this.bb_pos,6);return e?this.bb.__string(this.bb_pos+e,t):null}dims(t){let e=this.bb.__offset(this.bb_pos,8);return e?this.bb.readInt64(this.bb.__vector(this.bb_pos+e)+8*t):this.bb.createLong(0,0)}dimsLength(){let t=this.bb.__offset(this.bb_pos,8);return t?this.bb.__vector_len(this.bb_pos+t):0}dataType(){let t=this.bb.__offset(this.bb_pos,10);return t?this.bb.readInt32(this.bb_pos+t):s.experimental.fbs.TensorDataType.UNDEFINED}rawData(t){let e=this.bb.__offset(this.bb_pos,12);return 
e?this.bb.readUint8(this.bb.__vector(this.bb_pos+e)+t):0}rawDataLength(){let t=this.bb.__offset(this.bb_pos,12);return t?this.bb.__vector_len(this.bb_pos+t):0}rawDataArray(){let t=this.bb.__offset(this.bb_pos,12);return t?new Uint8Array(this.bb.bytes().buffer,this.bb.bytes().byteOffset+this.bb.__vector(this.bb_pos+t),this.bb.__vector_len(this.bb_pos+t)):null}stringData(t,e){let r=this.bb.__offset(this.bb_pos,14);return r?this.bb.__string(this.bb.__vector(this.bb_pos+r)+4*t,e):null}stringDataLength(){let t=this.bb.__offset(this.bb_pos,14);return t?this.bb.__vector_len(this.bb_pos+t):0}static startTensor(t){t.startObject(6)}static addName(t,e){t.addFieldOffset(0,e,0)}static addDocString(t,e){t.addFieldOffset(1,e,0)}static addDims(t,e){t.addFieldOffset(2,e,0)}static createDimsVector(t,e){t.startVector(8,e.length,8);for(let r=e.length-1;r>=0;r--)t.addInt64(e[r]);return t.endVector()}static startDimsVector(t,e){t.startVector(8,e,8)}static addDataType(t,e){t.addFieldInt32(3,e,s.experimental.fbs.TensorDataType.UNDEFINED)}static addRawData(t,e){t.addFieldOffset(4,e,0)}static createRawDataVector(t,e){t.startVector(1,e.length,1);for(let r=e.length-1;r>=0;r--)t.addInt8(e[r]);return t.endVector()}static startRawDataVector(t,e){t.startVector(1,e,1)}static addStringData(t,e){t.addFieldOffset(5,e,0)}static createStringDataVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startStringDataVector(t,e){t.startVector(4,e,4)}static endTensor(t){return t.endObject()}static createTensor(t,e,r,i,d,g,m){return l.startTensor(t),l.addName(t,e),l.addDocString(t,r),l.addDims(t,i),l.addDataType(t,d),l.addRawData(t,g),l.addStringData(t,m),l.endTensor(t)}}f.Tensor=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsSparseTensor(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsSparseTensor(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}values(t){let e=this.bb.__offset(this.bb_pos,4);return e?(t||new s.experimental.fbs.Tensor).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}indices(t){let e=this.bb.__offset(this.bb_pos,6);return e?(t||new s.experimental.fbs.Tensor).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}dims(t){let e=this.bb.__offset(this.bb_pos,8);return e?this.bb.readInt64(this.bb.__vector(this.bb_pos+e)+8*t):this.bb.createLong(0,0)}dimsLength(){let t=this.bb.__offset(this.bb_pos,8);return t?this.bb.__vector_len(this.bb_pos+t):0}static startSparseTensor(t){t.startObject(3)}static addValues(t,e){t.addFieldOffset(0,e,0)}static addIndices(t,e){t.addFieldOffset(1,e,0)}static addDims(t,e){t.addFieldOffset(2,e,0)}static createDimsVector(t,e){t.startVector(8,e.length,8);for(let r=e.length-1;r>=0;r--)t.addInt64(e[r]);return t.endVector()}static startDimsVector(t,e){t.startVector(8,e,8)}static endSparseTensor(t){return t.endObject()}static createSparseTensor(t,e,r,i){return l.startSparseTensor(t),l.addValues(t,e),l.addIndices(t,r),l.addDims(t,i),l.endSparseTensor(t)}}f.SparseTensor=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static 
getRootAsAttribute(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsAttribute(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}name(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}docString(t){let e=this.bb.__offset(this.bb_pos,6);return e?this.bb.__string(this.bb_pos+e,t):null}type(){let t=this.bb.__offset(this.bb_pos,8);return t?this.bb.readInt32(this.bb_pos+t):s.experimental.fbs.AttributeType.UNDEFINED}f(){let t=this.bb.__offset(this.bb_pos,10);return t?this.bb.readFloat32(this.bb_pos+t):0}i(){let t=this.bb.__offset(this.bb_pos,12);return t?this.bb.readInt64(this.bb_pos+t):this.bb.createLong(0,0)}s(t){let e=this.bb.__offset(this.bb_pos,14);return e?this.bb.__string(this.bb_pos+e,t):null}t(t){let e=this.bb.__offset(this.bb_pos,16);return e?(t||new s.experimental.fbs.Tensor).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}g(t){let e=this.bb.__offset(this.bb_pos,18);return e?(t||new s.experimental.fbs.Graph).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}floats(t){let e=this.bb.__offset(this.bb_pos,20);return e?this.bb.readFloat32(this.bb.__vector(this.bb_pos+e)+4*t):0}floatsLength(){let t=this.bb.__offset(this.bb_pos,20);return t?this.bb.__vector_len(this.bb_pos+t):0}floatsArray(){let t=this.bb.__offset(this.bb_pos,20);return t?new Float32Array(this.bb.bytes().buffer,this.bb.bytes().byteOffset+this.bb.__vector(this.bb_pos+t),this.bb.__vector_len(this.bb_pos+t)):null}ints(t){let e=this.bb.__offset(this.bb_pos,22);return e?this.bb.readInt64(this.bb.__vector(this.bb_pos+e)+8*t):this.bb.createLong(0,0)}intsLength(){let t=this.bb.__offset(this.bb_pos,22);return t?this.bb.__vector_len(this.bb_pos+t):0}strings(t,e){let r=this.bb.__offset(this.bb_pos,24);return r?this.bb.__string(this.bb.__vector(this.bb_pos+r)+4*t,e):null}stringsLength(){let t=this.bb.__offset(this.bb_pos,24);return t?this.bb.__vector_len(this.bb_pos+t):0}tensors(t,e){let r=this.bb.__offset(this.bb_pos,26);return r?(e||new s.experimental.fbs.Tensor).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}tensorsLength(){let t=this.bb.__offset(this.bb_pos,26);return t?this.bb.__vector_len(this.bb_pos+t):0}graphs(t,e){let r=this.bb.__offset(this.bb_pos,28);return r?(e||new s.experimental.fbs.Graph).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}graphsLength(){let t=this.bb.__offset(this.bb_pos,28);return t?this.bb.__vector_len(this.bb_pos+t):0}static startAttribute(t){t.startObject(13)}static addName(t,e){t.addFieldOffset(0,e,0)}static addDocString(t,e){t.addFieldOffset(1,e,0)}static addType(t,e){t.addFieldInt32(2,e,s.experimental.fbs.AttributeType.UNDEFINED)}static addF(t,e){t.addFieldFloat32(3,e,0)}static addI(t,e){t.addFieldInt64(4,e,t.createLong(0,0))}static addS(t,e){t.addFieldOffset(5,e,0)}static addT(t,e){t.addFieldOffset(6,e,0)}static addG(t,e){t.addFieldOffset(7,e,0)}static addFloats(t,e){t.addFieldOffset(8,e,0)}static createFloatsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addFloat32(e[r]);return t.endVector()}static startFloatsVector(t,e){t.startVector(4,e,4)}static addInts(t,e){t.addFieldOffset(9,e,0)}static createIntsVector(t,e){t.startVector(8,e.length,8);for(let r=e.length-1;r>=0;r--)t.addInt64(e[r]);return t.endVector()}static startIntsVector(t,e){t.startVector(8,e,8)}static addStrings(t,e){t.addFieldOffset(10,e,0)}static 
createStringsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startStringsVector(t,e){t.startVector(4,e,4)}static addTensors(t,e){t.addFieldOffset(11,e,0)}static createTensorsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startTensorsVector(t,e){t.startVector(4,e,4)}static addGraphs(t,e){t.addFieldOffset(12,e,0)}static createGraphsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startGraphsVector(t,e){t.startVector(4,e,4)}static endAttribute(t){return t.endObject()}static createAttribute(t,e,r,i,d,g,m,b,_,v,w,S,A,O){return l.startAttribute(t),l.addName(t,e),l.addDocString(t,r),l.addType(t,i),l.addF(t,d),l.addI(t,g),l.addS(t,m),l.addT(t,b),l.addG(t,_),l.addFloats(t,v),l.addInts(t,w),l.addStrings(t,S),l.addTensors(t,A),l.addGraphs(t,O),l.endAttribute(t)}}f.Attribute=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsGraph(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsGraph(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}initializers(t,e){let r=this.bb.__offset(this.bb_pos,4);return r?(e||new s.experimental.fbs.Tensor).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}initializersLength(){let t=this.bb.__offset(this.bb_pos,4);return t?this.bb.__vector_len(this.bb_pos+t):0}nodeArgs(t,e){let r=this.bb.__offset(this.bb_pos,6);return r?(e||new s.experimental.fbs.ValueInfo).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}nodeArgsLength(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.__vector_len(this.bb_pos+t):0}nodes(t,e){let r=this.bb.__offset(this.bb_pos,8);return r?(e||new s.experimental.fbs.Node).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}nodesLength(){let t=this.bb.__offset(this.bb_pos,8);return t?this.bb.__vector_len(this.bb_pos+t):0}maxNodeIndex(){let t=this.bb.__offset(this.bb_pos,10);return t?this.bb.readUint32(this.bb_pos+t):0}nodeEdges(t,e){let r=this.bb.__offset(this.bb_pos,12);return r?(e||new s.experimental.fbs.NodeEdge).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}nodeEdgesLength(){let t=this.bb.__offset(this.bb_pos,12);return t?this.bb.__vector_len(this.bb_pos+t):0}inputs(t,e){let r=this.bb.__offset(this.bb_pos,14);return r?this.bb.__string(this.bb.__vector(this.bb_pos+r)+4*t,e):null}inputsLength(){let t=this.bb.__offset(this.bb_pos,14);return t?this.bb.__vector_len(this.bb_pos+t):0}outputs(t,e){let r=this.bb.__offset(this.bb_pos,16);return r?this.bb.__string(this.bb.__vector(this.bb_pos+r)+4*t,e):null}outputsLength(){let t=this.bb.__offset(this.bb_pos,16);return t?this.bb.__vector_len(this.bb_pos+t):0}sparseInitializers(t,e){let r=this.bb.__offset(this.bb_pos,18);return r?(e||new s.experimental.fbs.SparseTensor).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}sparseInitializersLength(){let t=this.bb.__offset(this.bb_pos,18);return t?this.bb.__vector_len(this.bb_pos+t):0}static startGraph(t){t.startObject(8)}static addInitializers(t,e){t.addFieldOffset(0,e,0)}static 
createInitializersVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startInitializersVector(t,e){t.startVector(4,e,4)}static addNodeArgs(t,e){t.addFieldOffset(1,e,0)}static createNodeArgsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startNodeArgsVector(t,e){t.startVector(4,e,4)}static addNodes(t,e){t.addFieldOffset(2,e,0)}static createNodesVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startNodesVector(t,e){t.startVector(4,e,4)}static addMaxNodeIndex(t,e){t.addFieldInt32(3,e,0)}static addNodeEdges(t,e){t.addFieldOffset(4,e,0)}static createNodeEdgesVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startNodeEdgesVector(t,e){t.startVector(4,e,4)}static addInputs(t,e){t.addFieldOffset(5,e,0)}static createInputsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startInputsVector(t,e){t.startVector(4,e,4)}static addOutputs(t,e){t.addFieldOffset(6,e,0)}static createOutputsVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startOutputsVector(t,e){t.startVector(4,e,4)}static addSparseInitializers(t,e){t.addFieldOffset(7,e,0)}static createSparseInitializersVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startSparseInitializersVector(t,e){t.startVector(4,e,4)}static endGraph(t){return t.endObject()}static createGraph(t,e,r,i,d,g,m,b,_){return l.startGraph(t),l.addInitializers(t,e),l.addNodeArgs(t,r),l.addNodes(t,i),l.addMaxNodeIndex(t,d),l.addNodeEdges(t,g),l.addInputs(t,m),l.addOutputs(t,b),l.addSparseInitializers(t,_),l.endGraph(t)}}f.Graph=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsModel(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsModel(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}irVersion(){let t=this.bb.__offset(this.bb_pos,4);return t?this.bb.readInt64(this.bb_pos+t):this.bb.createLong(0,0)}opsetImport(t,e){let r=this.bb.__offset(this.bb_pos,6);return r?(e||new s.experimental.fbs.OperatorSetId).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}opsetImportLength(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.__vector_len(this.bb_pos+t):0}producerName(t){let e=this.bb.__offset(this.bb_pos,8);return e?this.bb.__string(this.bb_pos+e,t):null}producerVersion(t){let e=this.bb.__offset(this.bb_pos,10);return e?this.bb.__string(this.bb_pos+e,t):null}domain(t){let e=this.bb.__offset(this.bb_pos,12);return e?this.bb.__string(this.bb_pos+e,t):null}modelVersion(){let t=this.bb.__offset(this.bb_pos,14);return t?this.bb.readInt64(this.bb_pos+t):this.bb.createLong(0,0)}docString(t){let e=this.bb.__offset(this.bb_pos,16);return e?this.bb.__string(this.bb_pos+e,t):null}graph(t){let e=this.bb.__offset(this.bb_pos,18);return e?(t||new s.experimental.fbs.Graph).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}graphDocString(t){let 
e=this.bb.__offset(this.bb_pos,20);return e?this.bb.__string(this.bb_pos+e,t):null}static startModel(t){t.startObject(9)}static addIrVersion(t,e){t.addFieldInt64(0,e,t.createLong(0,0))}static addOpsetImport(t,e){t.addFieldOffset(1,e,0)}static createOpsetImportVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startOpsetImportVector(t,e){t.startVector(4,e,4)}static addProducerName(t,e){t.addFieldOffset(2,e,0)}static addProducerVersion(t,e){t.addFieldOffset(3,e,0)}static addDomain(t,e){t.addFieldOffset(4,e,0)}static addModelVersion(t,e){t.addFieldInt64(5,e,t.createLong(0,0))}static addDocString(t,e){t.addFieldOffset(6,e,0)}static addGraph(t,e){t.addFieldOffset(7,e,0)}static addGraphDocString(t,e){t.addFieldOffset(8,e,0)}static endModel(t){return t.endObject()}static createModel(t,e,r,i,d,g,m,b,_,v){return l.startModel(t),l.addIrVersion(t,e),l.addOpsetImport(t,r),l.addProducerName(t,i),l.addProducerVersion(t,d),l.addDomain(t,g),l.addModelVersion(t,m),l.addDocString(t,b),l.addGraph(t,_),l.addGraphDocString(t,v),l.endModel(t)}}f.Model=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsKernelCreateInfos(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsKernelCreateInfos(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}nodeIndices(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.readUint32(this.bb.__vector(this.bb_pos+e)+4*t):0}nodeIndicesLength(){let t=this.bb.__offset(this.bb_pos,4);return t?this.bb.__vector_len(this.bb_pos+t):0}nodeIndicesArray(){let t=this.bb.__offset(this.bb_pos,4);return t?new Uint32Array(this.bb.bytes().buffer,this.bb.bytes().byteOffset+this.bb.__vector(this.bb_pos+t),this.bb.__vector_len(this.bb_pos+t)):null}kernelDefHashes(t){let e=this.bb.__offset(this.bb_pos,6);return e?this.bb.readUint64(this.bb.__vector(this.bb_pos+e)+8*t):this.bb.createLong(0,0)}kernelDefHashesLength(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.__vector_len(this.bb_pos+t):0}static startKernelCreateInfos(t){t.startObject(2)}static addNodeIndices(t,e){t.addFieldOffset(0,e,0)}static createNodeIndicesVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addInt32(e[r]);return t.endVector()}static startNodeIndicesVector(t,e){t.startVector(4,e,4)}static addKernelDefHashes(t,e){t.addFieldOffset(1,e,0)}static createKernelDefHashesVector(t,e){t.startVector(8,e.length,8);for(let r=e.length-1;r>=0;r--)t.addInt64(e[r]);return t.endVector()}static startKernelDefHashesVector(t,e){t.startVector(8,e,8)}static endKernelCreateInfos(t){return t.endObject()}static createKernelCreateInfos(t,e,r){return l.startKernelCreateInfos(t),l.addNodeIndices(t,e),l.addKernelDefHashes(t,r),l.endKernelCreateInfos(t)}}f.KernelCreateInfos=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsSubGraphSessionState(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsSubGraphSessionState(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new 
l).__init(t.readInt32(t.position())+t.position(),t)}graphId(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}sessionState(t){let e=this.bb.__offset(this.bb_pos,6);return e?(t||new s.experimental.fbs.SessionState).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}static startSubGraphSessionState(t){t.startObject(2)}static addGraphId(t,e){t.addFieldOffset(0,e,0)}static addSessionState(t,e){t.addFieldOffset(1,e,0)}static endSubGraphSessionState(t){let e=t.endObject();return t.requiredField(e,4),e}static createSubGraphSessionState(t,e,r){return l.startSubGraphSessionState(t),l.addGraphId(t,e),l.addSessionState(t,r),l.endSubGraphSessionState(t)}}f.SubGraphSessionState=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsSessionState(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsSessionState(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}kernels(t){let e=this.bb.__offset(this.bb_pos,4);return e?(t||new s.experimental.fbs.KernelCreateInfos).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}subGraphSessionStates(t,e){let r=this.bb.__offset(this.bb_pos,6);return r?(e||new s.experimental.fbs.SubGraphSessionState).__init(this.bb.__indirect(this.bb.__vector(this.bb_pos+r)+4*t),this.bb):null}subGraphSessionStatesLength(){let t=this.bb.__offset(this.bb_pos,6);return t?this.bb.__vector_len(this.bb_pos+t):0}static startSessionState(t){t.startObject(2)}static addKernels(t,e){t.addFieldOffset(0,e,0)}static addSubGraphSessionStates(t,e){t.addFieldOffset(1,e,0)}static createSubGraphSessionStatesVector(t,e){t.startVector(4,e.length,4);for(let r=e.length-1;r>=0;r--)t.addOffset(e[r]);return t.endVector()}static startSubGraphSessionStatesVector(t,e){t.startVector(4,e,4)}static endSessionState(t){return t.endObject()}static createSessionState(t,e,r){return l.startSessionState(t),l.addKernels(t,e),l.addSubGraphSessionStates(t,r),l.endSessionState(t)}}f.SessionState=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={})),function(s){(function(h){(function(f){class l{constructor(){this.bb=null,this.bb_pos=0}__init(t,e){return this.bb_pos=t,this.bb=e,this}static getRootAsInferenceSession(t,e){return(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static getSizePrefixedRootAsInferenceSession(t,e){return t.setPosition(t.position()+u.flatbuffers.SIZE_PREFIX_LENGTH),(e||new l).__init(t.readInt32(t.position())+t.position(),t)}static bufferHasIdentifier(t){return t.__has_identifier("ORTM")}ortVersion(t){let e=this.bb.__offset(this.bb_pos,4);return e?this.bb.__string(this.bb_pos+e,t):null}model(t){let e=this.bb.__offset(this.bb_pos,6);return e?(t||new s.experimental.fbs.Model).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}sessionState(t){let e=this.bb.__offset(this.bb_pos,8);return e?(t||new s.experimental.fbs.SessionState).__init(this.bb.__indirect(this.bb_pos+e),this.bb):null}static startInferenceSession(t){t.startObject(3)}static addOrtVersion(t,e){t.addFieldOffset(0,e,0)}static addModel(t,e){t.addFieldOffset(1,e,0)}static addSessionState(t,e){t.addFieldOffset(2,e,0)}static endInferenceSession(t){return t.endObject()}static finishInferenceSessionBuffer(t,e){t.finish(e,"ORTM")}static 
finishSizePrefixedInferenceSessionBuffer(t,e){t.finish(e,"ORTM",!0)}static createInferenceSession(t,e,r,i){return l.startInferenceSession(t),l.addOrtVersion(t,e),l.addModel(t,r),l.addSessionState(t,i),l.endInferenceSession(t)}}f.InferenceSession=l})(h.fbs||(h.fbs={}))})(s.experimental||(s.experimental={}))}(n.onnxruntime||(n.onnxruntime={}))},7448:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.OnnxjsSessionHandler=void 0;const u=a(1670),c=a(9162);n.OnnxjsSessionHandler=class{constructor(p){this.session=p,this.inputNames=this.session.inputNames,this.outputNames=this.session.outputNames}async dispose(){}async run(p,s,h){const f=new Map;for(const t in p)if(Object.hasOwnProperty.call(p,t)){const e=p[t];f.set(t,new c.Tensor(e.dims,e.type,void 0,void 0,e.data))}const l=await this.session.run(f),o={};return l.forEach((t,e)=>{o[e]=new u.Tensor(t.type,t.data,t.dims)}),o}startProfiling(){this.session.startProfiling()}endProfiling(){this.session.endProfiling()}}},6919:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.Session=void 0;const u=a(7067),c=a(1296),p=a(7091),s=a(1036),h=a(6231),f=a(2644);n.Session=class{constructor(l={}){this._initialized=!1,this.backendHint=l.backendHint,this.profiler=h.Profiler.create(l.profiler),this.context={profiler:this.profiler,graphInputTypes:[],graphInputDims:[]}}get inputNames(){return this._model.graph.getInputNames()}get outputNames(){return this._model.graph.getOutputNames()}startProfiling(){this.profiler.start()}endProfiling(){this.profiler.stop()}async loadModel(l,o,t){await this.profiler.event("session","Session.loadModel",async()=>{const e=await(0,p.resolveBackend)(this.backendHint);if(this.sessionHandler=e.createSessionHandler(this.context),this._model=new f.Model,typeof l=="string"){const r=l.endsWith(".ort");if(typeof fetch>"u"){const i=await(0,c.promisify)(u.readFile)(l);this.initialize(i,r)}else{const i=await fetch(l),d=await i.arrayBuffer();this.initialize(new Uint8Array(d),r)}}else if(ArrayBuffer.isView(l))this.initialize(l);else{const r=new Uint8Array(l,o||0,t||l.byteLength);this.initialize(r)}})}initialize(l,o){if(this._initialized)throw new Error("already initialized");this.profiler.event("session","Session.initialize",()=>{const t=this.sessionHandler.transformGraph?this.sessionHandler:void 0;this._model.load(l,t,o),this.sessionHandler.onGraphInitialized&&this.sessionHandler.onGraphInitialized(this._model.graph),this.initializeOps(this._model.graph),this._executionPlan=new s.ExecutionPlan(this._model.graph,this._ops,this.profiler)}),this._initialized=!0}async run(l){if(!this._initialized)throw new Error("session not initialized yet");return this.profiler.event("session","Session.run",async()=>{const o=this.normalizeAndValidateInputs(l),t=await this._executionPlan.execute(this.sessionHandler,o);return this.createOutput(t)})}normalizeAndValidateInputs(l){const o=this._model.graph.getInputNames();if(Array.isArray(l)){if(l.length!==o.length)throw new Error(`incorrect input array length: expected ${o.length} but got ${l.length}`)}else{if(l.size!==o.length)throw new Error(`incorrect input map size: expected ${o.length} but got ${l.size}`);const t=new Array(l.size);let e=0;for(let r=0;r<o.length;++r){const i=l.get(o[r]);if(!i)throw new Error(`missing input tensor for: '${name}'`);t[e++]=i}l=t}if(this.context.graphInputTypes&&this.context.graphInputTypes.length!==0&&this.context.graphInputDims&&this.context.graphInputDims.length!==0)this.validateInputTensorDims(this.context.graphInputDims,l,!1);else{const 
t=this._model.graph.getInputIndices(),e=this._model.graph.getValues(),r=new Array(t.length);for(let i=0;i<t.length;++i){const d=e[t[i]];r[i]=d.type.shape.dims,this.context.graphInputTypes.push(d.type.tensorType),this.context.graphInputDims.push(l[i].dims)}this.validateInputTensorDims(r,l,!0)}return this.validateInputTensorTypes(this.context.graphInputTypes,l),l}validateInputTensorTypes(l,o){for(let t=0;t<o.length;t++){const e=l[t],r=o[t].type;if(e!==r)throw new Error(`input tensor[${t}] check failed: expected type '${e}' but got ${r}`)}}validateInputTensorDims(l,o,t){for(let e=0;e<o.length;e++){const r=l[e],i=o[e].dims;if(!this.compareTensorDims(r,i,t))throw new Error(`input tensor[${e}] check failed: expected shape '[${r.join(",")}]' but got [${i.join(",")}]`)}}compareTensorDims(l,o,t){if(l.length!==o.length)return!1;for(let e=0;e<l.length;++e)if(l[e]!==o[e]&&(!t||l[e]!==0))return!1;return!0}createOutput(l){const o=this._model.graph.getOutputNames();if(l.length!==o.length)throw new Error("expected number of outputs do not match number of generated outputs");const t=new Map;for(let e=0;e<o.length;++e)t.set(o[e],l[e]);return t}initializeOps(l){const o=l.getNodes();this._ops=new Array(o.length);for(let t=0;t<o.length;t++)this._ops[t]=this.sessionHandler.resolve(o[t],this._model.opsets,l)}}},9162:function(y,n,a){var u=this&&this.__importDefault||function(d){return d&&d.__esModule?d:{default:d}};Object.defineProperty(n,"__esModule",{value:!0}),n.Tensor=void 0;const c=a(3442),p=u(a(3720)),s=a(1446),h=a(9395),f=a(2517);var l=h.onnxruntime.experimental.fbs;class o{get data(){if(this.cache===void 0){const g=this.dataProvider(this.dataId);if(g.length!==this.size)throw new Error("Length of data provided by the Data Provider is inconsistent with the dims of this Tensor.");this.cache=g}return this.cache}get stringData(){if(this.type!=="string")throw new TypeError("data type is not string");return this.data}get integerData(){switch(this.type){case"uint8":case"int8":case"uint16":case"int16":case"int32":case"uint32":case"bool":return this.data;default:throw new TypeError("data type is not integer (uint8, int8, uint16, int16, int32, uint32, bool)")}}get floatData(){switch(this.type){case"float32":case"float64":return this.data;default:throw new TypeError("data type is not float (float32, float64)")}}get numberData(){if(this.type!=="string")return this.data;throw new TypeError("type cannot be non-number (string)")}get(g){return this.data[f.ShapeUtil.indicesToOffset(g,this.strides)]}set(g,m){this.data[f.ShapeUtil.indicesToOffset(g,this.strides)]=m}async getData(){return this.cache===void 0&&(this.cache=await this.asyncDataProvider(this.dataId)),this.cache}get strides(){return this._strides||(this._strides=f.ShapeUtil.computeStrides(this.dims)),this._strides}constructor(g,m,b,_,v,w=c.Guid.create()){this.dims=g,this.type=m,this.dataProvider=b,this.asyncDataProvider=_,this.cache=v,this.dataId=w,this.size=f.ShapeUtil.validateDimsAndCalcSize(g);const S=this.size,A=b===void 0&&_===void 0&&v===void 0;if(v!==void 0&&v.length!==S)throw new RangeError("Input dims doesn't match data length.");if(m==="string"){if(!(v===void 0||Array.isArray(v)&&v.every(O=>typeof O=="string")))throw new TypeError("cache should be a string array");A&&(this.cache=new Array(S))}else{if(v!==void 0){const O=e(m);if(!(v instanceof O))throw new TypeError(`cache should be type ${O.name}`)}if(A){const O=new ArrayBuffer(S*function(x){switch(x){case"bool":case"int8":case"uint8":return 1;case"int16":case"uint16":return 
2;case"int32":case"uint32":case"float32":return 4;case"float64":return 8;default:throw new Error(`cannot calculate sizeof() on type ${x}`)}}(m));this.cache=function(x,I){return new(e(I))(x)}(O,m)}}}static fromProto(g){if(!g)throw new Error("cannot construct Value from an empty tensor");const m=f.ProtoUtil.tensorDataTypeFromProto(g.dataType),b=f.ProtoUtil.tensorDimsFromProto(g.dims),_=new o(b,m);if(m==="string")g.stringData.forEach((v,w)=>{_.data[w]=(0,f.decodeUtf8String)(v)});else if(g.rawData&&typeof g.rawData.byteLength=="number"&&g.rawData.byteLength>0){const v=_.data,w=new DataView(g.rawData.buffer,g.rawData.byteOffset,g.rawData.byteLength),S=t(g.dataType),A=g.rawData.byteLength/S;if(g.rawData.byteLength%S!=0)throw new Error("invalid buffer length");if(v.length!==A)throw new Error("buffer length mismatch");for(let O=0;O<A;O++){const x=i(w,g.dataType,O*S);v[O]=x}}else{let v;switch(g.dataType){case s.onnx.TensorProto.DataType.FLOAT:v=g.floatData;break;case s.onnx.TensorProto.DataType.INT32:case s.onnx.TensorProto.DataType.INT16:case s.onnx.TensorProto.DataType.UINT16:case s.onnx.TensorProto.DataType.INT8:case s.onnx.TensorProto.DataType.UINT8:case s.onnx.TensorProto.DataType.BOOL:v=g.int32Data;break;case s.onnx.TensorProto.DataType.INT64:v=g.int64Data;break;case s.onnx.TensorProto.DataType.DOUBLE:v=g.doubleData;break;case s.onnx.TensorProto.DataType.UINT32:case s.onnx.TensorProto.DataType.UINT64:v=g.uint64Data;break;default:throw new Error("unspecific error")}if(v==null)throw new Error("failed to populate data from a tensorproto value");const w=_.data;if(w.length!==v.length)throw new Error("array length mismatch");for(let S=0;S<v.length;S++){const A=v[S];p.default.isLong(A)?w[S]=r(A,g.dataType):w[S]=A}}return _}static fromData(g,m,b){return new o(m,b,void 0,void 0,g)}static fromOrtTensor(g){if(!g)throw new Error("cannot construct Value from an empty tensor");const m=f.ProtoUtil.tensorDimsFromORTFormat(g),b=f.ProtoUtil.tensorDataTypeFromProto(g.dataType()),_=new o(m,b);if(b==="string")for(let v=0;v<g.stringDataLength();v++)_.data[v]=g.stringData(v);else if(g.rawDataArray()&&typeof g.rawDataLength()=="number"&&g.rawDataLength()>0){const v=_.data,w=new DataView(g.rawDataArray().buffer,g.rawDataArray().byteOffset,g.rawDataLength()),S=t(g.dataType()),A=g.rawDataLength()/S;if(g.rawDataLength()%S!=0)throw new Error("invalid buffer length");if(v.length!==A)throw new Error("buffer length mismatch");for(let O=0;O<A;O++){const x=i(w,g.dataType(),O*S);v[O]=x}}return _}}function t(d){switch(d){case s.onnx.TensorProto.DataType.UINT8:case s.onnx.TensorProto.DataType.INT8:case s.onnx.TensorProto.DataType.BOOL:return 1;case s.onnx.TensorProto.DataType.UINT16:case s.onnx.TensorProto.DataType.INT16:return 2;case s.onnx.TensorProto.DataType.FLOAT:case s.onnx.TensorProto.DataType.INT32:case s.onnx.TensorProto.DataType.UINT32:return 4;case s.onnx.TensorProto.DataType.INT64:case s.onnx.TensorProto.DataType.DOUBLE:case s.onnx.TensorProto.DataType.UINT64:return 8;default:throw new Error(`cannot calculate sizeof() on type ${s.onnx.TensorProto.DataType[d]}`)}}function e(d){switch(d){case"bool":case"uint8":return Uint8Array;case"int8":return Int8Array;case"int16":return Int16Array;case"uint16":return Uint16Array;case"int32":return Int32Array;case"uint32":return Uint32Array;case"float32":return Float32Array;case"float64":return Float64Array;default:throw new Error("unspecified error")}}function 
r(d,g){if(g===s.onnx.TensorProto.DataType.INT64||g===l.TensorDataType.INT64){if(d.greaterThanOrEqual(2147483648)||d.lessThan(-2147483648))throw new TypeError("int64 is not supported")}else{if(g!==s.onnx.TensorProto.DataType.UINT32&&g!==l.TensorDataType.UINT32&&g!==s.onnx.TensorProto.DataType.UINT64&&g!==l.TensorDataType.UINT64)throw new TypeError(`not a LONG type: ${s.onnx.TensorProto.DataType[g]}`);if(d.greaterThanOrEqual(4294967296)||d.lessThan(0))throw new TypeError("uint64 is not supported")}return d.toNumber()}function i(d,g,m){switch(g){case s.onnx.TensorProto.DataType.BOOL:case s.onnx.TensorProto.DataType.UINT8:return d.getUint8(m);case s.onnx.TensorProto.DataType.INT8:return d.getInt8(m);case s.onnx.TensorProto.DataType.UINT16:return d.getUint16(m,!0);case s.onnx.TensorProto.DataType.INT16:return d.getInt16(m,!0);case s.onnx.TensorProto.DataType.FLOAT:return d.getFloat32(m,!0);case s.onnx.TensorProto.DataType.INT32:return d.getInt32(m,!0);case s.onnx.TensorProto.DataType.UINT32:return d.getUint32(m,!0);case s.onnx.TensorProto.DataType.INT64:return r(p.default.fromBits(d.getUint32(m,!0),d.getUint32(m+4,!0),!1),g);case s.onnx.TensorProto.DataType.DOUBLE:return d.getFloat64(m,!0);case s.onnx.TensorProto.DataType.UINT64:return r(p.default.fromBits(d.getUint32(m,!0),d.getUint32(m+4,!0),!0),g);default:throw new Error(`cannot read from DataView for type ${s.onnx.TensorProto.DataType[g]}`)}}n.Tensor=o},2517:function(y,n,a){var u=this&&this.__importDefault||function(g){return g&&g.__esModule?g:{default:g}};Object.defineProperty(n,"__esModule",{value:!0}),n.decodeUtf8String=n.MAX_CLIP=n.MIN_CLIP=n.PoolConvUtil=n.ReduceUtil=n.SplitUtil=n.MathUtil=n.ShapeUtil=n.LongUtil=n.ProtoUtil=n.GemmUtil=n.arrayCopyHelper=n.BroadcastUtil=n.MatMulUtil=n.ArrayUtil=n.assert=n.checkInputsShape=void 0;const c=a(5686),p=u(a(3720)),s=a(1446),h=a(9162);n.checkInputsShape=function(g,...m){if(!g||g.length!==m.length)return!1;for(let b=0;b<g.length;b++)if(!g[b].dims||g[b].dims.length!==m[b])return!1;return!0},n.assert=function(g,m){if(!g)throw new Error(typeof m=="string"?m:m())},n.ArrayUtil=class{static arraysEqual(g,m){if(g.length!==m.length)return!1;for(let b=0;b<g.length;b++)if(g[b]!==m[b])return!1;return!0}};class f{static preprocessInputShapes(m,b){return[m.length===1?[1,m[0]]:m,b.length===1?[b[0],1]:b]}static postprocessOutputShape(m,b,_){b===1&&m.splice(m.length-2,1),_===1&&m.pop()}static calcMatMulShape(m,b){return m[1]!==b[0]?void 0:[m[0],b[1]]}}n.MatMulUtil=f;class l{static calcShape(m,b,_=!1){const v=m.length,w=b.length;if(v===0)return b;if(w===0)return m;const S=Math.max(m.length,b.length),A=new Array(S);if(_){if(v<2||w<2)return;const O=f.calcMatMulShape([m[v-2],m[v-1]],[b[w-2],b[w-1]]);if(O===void 0)return;[A[S-2],A[S-1]]=O}for(let O=_?3:1;O<=S;O++){const x=v-O<0?1:m[v-O],I=w-O<0?1:b[w-O];if(x!==I&&x>1&&I>1)return;A[S-O]=Math.max(x,I)}return A}static index(m,b){const _=new Array(b.length);return l.fillIndex(m,b,_),_}static fillIndex(m,b,_){const v=m.length-b.length;for(let w=0;w<b.length;w++)_[w]=m[v+w]%b[w]}static calc(m,b,_,v,w){const S=l.calcShape(m.dims,b.dims);if(S){if(v&&!e.areEqual(S,m.dims))return;const A=e.size(S),O=v?m:new h.Tensor(S,w||m.type);if(S.length===0)O.set([],_(m.get([]),b.get([])));else{const x=new Array(S.length),I=new Array(m.dims.length),N=new Array(b.dims.length);let B,L=0,F=0,H=!1,D=!1;m.dims.length===0&&(L=m.get([]),H=!0),b.dims.length===0&&(F=b.get([]),D=!0);for(let j=0;j<A;j++){B=j;for(let 
Z=S.length-1;Z>=0;Z--)x[Z]=B%S[Z],B=Math.floor(B/S[Z]);H||(l.fillIndex(x,m.dims,I),L=m.get(I)),D||(l.fillIndex(x,b.dims,N),F=b.get(N)),O.set(x,_(L,F))}}return O}}static isValidBroadcast(m,b){const _=m.length,v=b.length;if(_>v)return!1;for(let w=1;w<=_;w++)if(m[_-w]!==1&&m[_-w]!==b[v-w])return!1;return!0}static getBroadcastDims(m,b){const _=m.length,v=[];for(let w=0;w<_;w++){const S=_-1-w,A=m[S]||1;(b[b.length-1-w]||1)>1&&A===1&&v.unshift(S)}return v}}n.BroadcastUtil=l,n.arrayCopyHelper=function(g,m,b,_,v){if(_<0||_>=m.length)throw new Error("sourceIndex out of bounds");if(b<0||b>=g.length)throw new Error("targetIndex out of bounds");if(_+v>m.length)throw new Error("source indices to be copied are outside bounds");if(b+v>g.length)throw new Error("target array is too small to hold result");for(let w=0;w<v;w++)g[b+w]=m[_+w]},n.GemmUtil=class{static getShapeOfGemmResult(g,m,b,_,v){if(g.length!==2||b.length!==2)throw new Error("shape need to be of size 2");let w,S,A;m?(w=g[1],S=g[0]):(w=g[0],S=g[1]);let O=-1;if(_?(A=b[0],O=1):(A=b[1],O=0),b[O]!==S)throw new Error("dimension mismatch");if(w<=0||A<=0||S<=0)throw new Error("invalid shape specified");if(v&&!l.isValidBroadcast(v,[w,A]))throw new Error("gemm: invalid bias shape for broadcast");return[w,A,S]}};class o{static tensorDataTypeFromProto(m){switch(m){case s.onnx.TensorProto.DataType.INT8:return"int8";case s.onnx.TensorProto.DataType.UINT8:return"uint8";case s.onnx.TensorProto.DataType.BOOL:return"bool";case s.onnx.TensorProto.DataType.INT16:return"int16";case s.onnx.TensorProto.DataType.UINT16:return"uint16";case s.onnx.TensorProto.DataType.INT32:return"int32";case s.onnx.TensorProto.DataType.UINT32:return"uint32";case s.onnx.TensorProto.DataType.FLOAT:return"float32";case s.onnx.TensorProto.DataType.DOUBLE:return"float64";case s.onnx.TensorProto.DataType.STRING:return"string";case s.onnx.TensorProto.DataType.INT64:return"int32";case s.onnx.TensorProto.DataType.UINT64:return"uint32";default:throw new Error(`unsupported data type: ${s.onnx.TensorProto.DataType[m]}`)}}static tensorDataTypeStringToEnum(m){switch(m){case"int8":return s.onnx.TensorProto.DataType.INT8;case"uint8":return s.onnx.TensorProto.DataType.UINT8;case"bool":return s.onnx.TensorProto.DataType.BOOL;case"int16":return s.onnx.TensorProto.DataType.INT16;case"uint16":return s.onnx.TensorProto.DataType.UINT16;case"int32":return s.onnx.TensorProto.DataType.INT32;case"uint32":return s.onnx.TensorProto.DataType.UINT32;case"float32":return s.onnx.TensorProto.DataType.FLOAT;case"float64":return s.onnx.TensorProto.DataType.DOUBLE;case"string":return s.onnx.TensorProto.DataType.STRING;case"int64":return s.onnx.TensorProto.DataType.INT64;case"uint64":return s.onnx.TensorProto.DataType.UINT64;default:throw new Error(`unsupported data type: ${m}`)}}static tensorDimsFromProto(m){return m.map(b=>p.default.isLong(b)?b.toNumber():b)}static tensorValueTypeFromProto(m){return{tensorType:o.tensorDataTypeFromProto(m.elemType),shape:{dims:o.tensorDimsFromProto(m.shape.dim.map(b=>b.dimValue))}}}static tensorDimsFromORTFormat(m){const b=[];for(let _=0;_<m.dimsLength();_++)b.push(t.longToNumber(m.dims(_)));return b}static tensorAttributesFromORTFormat(m){const b=[];for(let _=0;_<m.attributesLength();_++)b.push(m.attributes(_));return b}}n.ProtoUtil=o;class t{static longToNumber(m,b){return p.default.isLong(m)?m.toNumber():m instanceof c.flatbuffers.Long?p.default.fromValue({low:m.low,high:m.high,unsigned:b!=null&&b}).toNumber():m}static isLong(m){return p.default.isLong(m)||m instanceof 
c.flatbuffers.Long}}n.LongUtil=t;class e{static size(m){return e.getSizeFromDimensionRange(m,0,m.length)}static sizeFromDimension(m,b){if(b<0||b>m.length)throw new Error(`invalid dimension of ${b} for sizeFromDimension as Tensor has ${m.length} dimensions.`);return e.getSizeFromDimensionRange(m,b,m.length)}static sizeToDimension(m,b){if(b<0||b>m.length)throw new Error(`invalid dimension of ${b} for sizeToDimension as Tensor has ${m.length} dimensions.`);return e.getSizeFromDimensionRange(m,0,b)}static getSizeFromDimensionRange(m,b,_){let v=1;for(let w=b;w<_;w++){if(m[w]<=0)throw new Error("cannot get valid size from specified dimension range. Most likely the range contains 0 or negative values in them.");v*=m[w]}return v}static computeStrides(m){const b=m.length;if(b===0)return[];if(b===1)return[1];const _=new Array(b);_[b-1]=1,_[b-2]=m[b-1];for(let v=b-3;v>=0;--v)_[v]=_[v+1]*m[v+1];return _}static transpose(m){return m.slice().reverse()}static indicesToOffset(m,b,_){_===void 0&&(_=m.length);let v=0;for(let w=0;w<_;++w)v+=b[w]*m[w];return v}static offsetToIndices(m,b){const _=b.length;if(_===0)return[];if(_===1)return[m*b[0]];const v=new Array(b.length);for(let w=0;w<v.length-1;++w)v[w]=Math.floor(m/b[w]),m-=v[w]*b[w];return v[v.length-1]=m,v}static normalizeAxis(m,b){if(m<-b&&m>=b)throw new Error("unsupported axis for this operation.");return m<0?m+b:m}static normalizeAxes(m,b){return m.map(_=>this.normalizeAxis(_,b))}static incrementIndex(m,b,_){if(b.length===0||m.length===0)throw new Error("Index incrementing unsupported for scalar Tensor");if(_===void 0)_=b.length;else if(_<=0||_>b.length)throw new Error("Incorrect axis to increment on");for(let v=_-1;v>=0&&(m[v]++,!(m[v]<b[v]));--v)m[v]=0}static calculateReshapedDims(m,b){if(b.length===0){if(m.length===0||e.size(m)===1)return[];throw new Error("cannot reshape to a scalar Tensor")}const _=b.length,v=new Array(_);let w=-1,S=1;for(let O=0;O<_;O++){if(b[O]<-1)throw new Error("a dimension in shape hints cannot be less than -1");if(b[O]===-1){if(w!==-1)throw new Error("at most one dimension in shape hints can be -1");w=O}else{if(b[O]===0){if(O>=m.length)throw new Error("the dimension with value zero exceeds the dimension size of the input tensor");v[O]=m[O]}else v[O]=b[O];S*=v[O]}}const A=e.size(m);if(w!==-1){if(A%S!=0)throw new Error(`the input tensor cannot be reshaped to the requested shape. 
Input shape: [${m}] Output shape: [${b}]`);v[w]=A/S}else if(S!==A)throw new Error("reshapedDims and originalDims don't have matching sizes");return v}static sortBasedOnPerm(m,b){return b?b.map(_=>m[_]):m.slice().reverse()}static padShape(m,b){const _=m.length;return m.map((v,w)=>v+b[w]+b[w+_])}static areEqual(m,b){return m.length===b.length&&m.every((_,v)=>_===b[v])}static validateDimsAndCalcSize(m){if(m.length>6)throw new TypeError("Only rank 0 to 6 is supported for tensor shape.");let b=1;for(const _ of m){if(!Number.isInteger(_))throw new TypeError(`Invalid shape: ${_} is not an integer`);if(_<0||_>2147483647)throw new TypeError(`Invalid shape: length ${_} is not allowed`);b*=_}return b}static flattenShape(m,b){b<0&&(b+=m.length);const _=m.reduce((w,S)=>w*S,1),v=m.slice(b).reduce((w,S)=>w*S,1);return[_/v,v]}static squeezeShape(m,b){const _=new Array;b=e.normalizeAxes(b,m.length);for(let v=0;v<m.length;v++){const w=b.indexOf(v)>=0;if(w&&m[v]!==1)throw new Error("squeeze an axis of size different than 1");(b.length===0&&m[v]>1||b.length>0&&!w)&&_.push(m[v])}return _}static unsqueezeShape(m,b){const _=new Array(m.length+b.length);_.fill(0);for(let w=0;w<b.length;w++){const S=e.normalizeAxis(b[w],_.length);if(S>=_.length)throw new Error("'axes' has an out of range axis");if(_[S]!==0)throw new Error("'axes' has a duplicate axis");_[S]=1}let v=0;for(let w=0;w<_.length;w++)_[w]===0&&(_[w]=m[v++]);if(v!==m.length)throw new Error("the unsqueezed dimension could not be established");return _}}n.ShapeUtil=e,n.MathUtil=class{static sqr(g,m,b,_,v){if(_<0||_>=m.length)throw new Error("sourceIndex out of bounds");if(b<0||b>=g.length)throw new Error("targetIndex out of bounds");if(_+v>m.length)throw new Error("source indices to be copied are outside bounds");if(b+v>g.length)throw new Error("target array is too small to hold result");for(let w=0;w<v;w++)g[b+w]+=Math.pow(m[_+w],2)}static axpy(g,m,b,_,v,w){if(_<0||_>=m.length)throw new Error("sourceIndex out of bounds");if(b<0||b>=g.length)throw new Error("targetIndex out of bounds");if(_+v>m.length)throw new Error("source indices to be copied are outside bounds");if(b+v>g.length)throw new Error("target array is too small to hold result");for(let S=0;S<v;S++)g[b+S]+=w*m[_+S]}static powx(g,m,b,_,v,w){if(_<0||_>=m.length)throw new Error("sourceIndex out of bounds");if(b<0||b>=g.length)throw new Error("targetIndex out of bounds");if(_+v>m.length)throw new Error("source indices to be copied are outside bounds");if(b+v>g.length)throw new Error("target array is too small to hold result");for(let S=0;S<v;S++)g[b+S]=Math.pow(m[_+S],w)}static mul(g,m,b,_,v){if(_<0||_>=m.length)throw new Error("sourceIndex out of bounds");if(b<0||b>=g.length)throw new Error("targetIndex out of bounds");if(_+v>m.length)throw new Error("source indices to be copied are outside bounds");if(b+v>g.length)throw new Error("target array is too small to hold result");for(let w=0;w<v;w++)g[b+w]=m[_+w]*g[b+w]}};class r{static splitShape(m,b,_,v){if(_.length===0){if(!v)throw new Error("need to know number of outputs when the 'split' attribute is not specified");r.determineSplit(m[b],v,_)}const w=[],S=[0];for(let A=0;A<_.length;++A){A!==0&&S.push(S[A-1]+_[A-1]);const O=m.slice();O[b]=_[A],w.push(O)}return[w,S]}static determineSplit(m,b,_){if(m%b!=0)throw new Error("cannot split tensor to equal sized parts");for(let v=0;v<b;++v)_.push(m/b)}}n.SplitUtil=r;class i{static calcReduce(m,b,_,v,w){const S=m.dims.slice(0);b.length===0&&S.forEach((L,F)=>b.push(F));const 
A=i.calcReduceShape(S,b,!0),O=e.size(A),x=new h.Tensor(A,m.type),I=e.computeStrides(A),N=e.computeStrides(S),B=new Array(S.length);for(let L=0;L<O;L++){const F=e.offsetToIndices(L,I);l.fillIndex(F,S,B),x.set(F,i.calcReduceByAxis(m.numberData,b,S,0,e.indicesToOffset(B,N),v,w))}return _?x:new h.Tensor(i.calcReduceShape(S,b,_),x.type,void 0,void 0,x.data,x.dataId)}static calcReduceByAxis(m,b,_,v,w,S,A){let O=0;if(v>=b.length)return S(m[w]);const x=b[v],I=x>=_.length?1:e.size(_.slice(x+1));for(let N=0;N<_[x];N++)O=N===0?i.calcReduceByAxis(m,b,_,v+1,w,S,A):A(O,i.calcReduceByAxis(m,b,_,v+1,w,S,A)),w+=I;return O}static calcReduceShape(m,b,_){const v=m.slice();for(let w=0;w<b.length;w++)v[b[w]]=_?1:0;return v.filter(w=>w!==0)}}n.ReduceUtil=i;class d{static adjustPoolAttributes(m,b,_,v,w,S){if(!m&&_.length!==b.length-2)throw new Error("length of specified kernel shapes should be 2 less than length of input dimensions");if(m)for(let A=0;A<b.length-2;A++)A>=_.length?_.push(b[A+2]):_[A]=b[A+2];for(let A=0;A<_.length;A++)if(A<v.length){if(v[A]<0)throw new Error("strides should be greater than or equal to 1")}else v.push(1);for(let A=0;A<_.length;A++)if(A<w.length){if(w[A]<0)throw new Error("dilations should be greater than or equal to 1")}else w.push(1);for(let A=0;A<2*_.length;A++)if(A<S.length){if(S[A]<0)throw new Error("pad should be greater than or equal to 1")}else S.push(0);for(let A=0;A<_.length;A++){if(_[A]<=0)throw new Error("kernel shapes need to be greater than 0");if(S[A]>=_[A]||S[A+_.length]>=_[A])throw new Error("pads should be smaller than kernel")}}static adjustPadsBasedOnAutoPad(m,b,_,v,w,S){if(S){if(w.length!==2*(m.length-2))throw new Error("length of pads should be twice the length of data dimensions");if(b.length!==m.length-2)throw new Error("length of strides should be the length of data dimensions");if(v.length!==m.length-2)throw new Error("length of kernel shapes should be the length of data dimensions");for(let A=0;A<m.length-2;A++)d.adjustPadAndReturnShape(m[A+2],b[A],_[A],v[A],w,A,A+m.length-2,S)}}static computePoolOutputShape(m,b,_,v,w,S,A){if(b.length<=0)throw new Error("input shape must be of size greater than 0");const O=[b[0],b[1]];return d.computeShapeHelper(m,b,O,_,v,w,S,A),O}static computeConvOutputShape(m,b,_,v,w,S,A){if(m.length<=0||b.length<=0)throw new Error("invalid input tensor dims or invalid filter tensor dims");const O=[m[0],b[0]];return d.computeShapeHelper(!1,m,O,_,v,w,S,A),O}static computeShapeHelper(m,b,_,v,w,S,A,O){if(m)for(let x=0;x<b.length-2;x++)_.push(1);else for(let x=0;x<b.length-2;x++)_.push(d.adjustPadAndReturnShape(b[x+2],v[x],w[x],S[x],A,x,x+b.length-2,O))}static adjustPadAndReturnShape(m,b,_,v,w,S,A,O){const x=_*(v-1)+1;if(!O||O==="NOTSET")return Math.floor((m+w[S]+w[A]-x)/b+1);switch(O){case"VALID":return w[S]=0,w[A]=0,Math.floor((m-x)/b+1);case"SAME_LOWER":case"SAME_UPPER":if(_!==1)throw new Error("Dilation not supported for SAME_UPPER or SAME_LOWER");{const I=((m+b-1)/b-1)*b+v-m;return w[S]=Math.floor(O==="SAME_LOWER"?(I+1)/2:I/2),w[A]=I-w[S],Math.floor((m+I-v)/b+1)}default:throw new Error("Unsupported AutoPad type")}}}n.PoolConvUtil=d,n.MIN_CLIP=-34028234663852886e22,n.MAX_CLIP=34028234663852886e22,n.decodeUtf8String=function(g){return new TextDecoder().decode(g)}},7967:(y,n)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.iterateExtraOptions=void 0,n.iterateExtraOptions=(a,u,c,p)=>{if(typeof a=="object"&&a!==null){if(c.has(a))throw new Error("Circular reference in options");c.add(a)}Object.entries(a).forEach(([s,h])=>{const 
f=u?u+s:s;if(typeof h=="object")(0,n.iterateExtraOptions)(h,f+".",c,p);else if(typeof h=="string"||typeof h=="number")p(f,h.toString());else{if(typeof h!="boolean")throw new Error("Can't handle extra config type: "+typeof h);p(f,h?"1":"0")}})}},2157:function(y,n,a){var u,c=this&&this.__createBinding||(Object.create?function(I,N,B,L){L===void 0&&(L=B);var F=Object.getOwnPropertyDescriptor(N,B);F&&!("get"in F?!N.__esModule:F.writable||F.configurable)||(F={enumerable:!0,get:function(){return N[B]}}),Object.defineProperty(I,L,F)}:function(I,N,B,L){L===void 0&&(L=B),I[L]=N[B]}),p=this&&this.__setModuleDefault||(Object.create?function(I,N){Object.defineProperty(I,"default",{enumerable:!0,value:N})}:function(I,N){I.default=N}),s=this&&this.__importStar||function(I){if(I&&I.__esModule)return I;var N={};if(I!=null)for(var B in I)B!=="default"&&Object.prototype.hasOwnProperty.call(I,B)&&c(N,I,B);return p(N,I),N};Object.defineProperty(n,"__esModule",{value:!0}),n.endProfiling=n.run=n.releaseSession=n.createSession=n.createSessionFinalize=n.createSessionAllocate=n.initOrt=n.initWasm=void 0;const h=a(1670),f=s(a(349)),l=a(6361),o=()=>!!h.env.wasm.proxy&&typeof document<"u";let t,e,r,i=!1,d=!1,g=!1;const m=[],b=[],_=[],v=[],w=[],S=[],A=()=>{if(i||!d||g||!t)throw new Error("worker not ready")},O=I=>{switch(I.data.type){case"init-wasm":i=!1,I.data.err?(g=!0,e[1](I.data.err)):(d=!0,e[0]());break;case"init-ort":I.data.err?r[1](I.data.err):r[0]();break;case"create_allocate":I.data.err?m.shift()[1](I.data.err):m.shift()[0](I.data.out);break;case"create_finalize":I.data.err?b.shift()[1](I.data.err):b.shift()[0](I.data.out);break;case"create":I.data.err?_.shift()[1](I.data.err):_.shift()[0](I.data.out);break;case"release":I.data.err?v.shift()[1](I.data.err):v.shift()[0]();break;case"run":I.data.err?w.shift()[1](I.data.err):w.shift()[0](I.data.out);break;case"end-profiling":I.data.err?S.shift()[1](I.data.err):S.shift()[0]()}},x=typeof document<"u"?(u=document==null?void 0:document.currentScript)===null||u===void 0?void 0:u.src:void 0;n.initWasm=async()=>{if(o()){if(d)return;if(i)throw new Error("multiple calls to 'initWasm()' detected.");if(g)throw new Error("previous call to 'initWasm()' failed.");return i=!0,h.env.wasm.wasmPaths===void 0&&x&&x.indexOf("blob:")!==0&&(h.env.wasm.wasmPaths=x.substr(0,+x.lastIndexOf("/")+1)),new Promise((I,N)=>{t==null||t.terminate(),t=a(9710).Z(),t.onmessage=O,e=[I,N];const B={type:"init-wasm",in:h.env.wasm};t.postMessage(B)})}return(0,l.initializeWebAssembly)(h.env.wasm)},n.initOrt=async(I,N)=>{if(o())return A(),new Promise((B,L)=>{r=[B,L];const F={type:"init-ort",in:{numThreads:I,loggingLevel:N}};t.postMessage(F)});f.initOrt(I,N)},n.createSessionAllocate=async I=>o()?(A(),new Promise((N,B)=>{m.push([N,B]);const L={type:"create_allocate",in:{model:I}};t.postMessage(L,[I.buffer])})):f.createSessionAllocate(I),n.createSessionFinalize=async(I,N)=>o()?(A(),new Promise((B,L)=>{b.push([B,L]);const F={type:"create_finalize",in:{modeldata:I,options:N}};t.postMessage(F)})):f.createSessionFinalize(I,N),n.createSession=async(I,N)=>o()?(A(),new Promise((B,L)=>{_.push([B,L]);const F={type:"create",in:{model:I,options:N}};t.postMessage(F,[I.buffer])})):f.createSession(I,N),n.releaseSession=async I=>{if(o())return A(),new Promise((N,B)=>{v.push([N,B]);const L={type:"release",in:I};t.postMessage(L)});f.releaseSession(I)},n.run=async(I,N,B,L,F)=>o()?(A(),new Promise((H,D)=>{w.push([H,D]);const 
j={type:"run",in:{sessionId:I,inputIndices:N,inputs:B,outputIndices:L,options:F}};t.postMessage(j,f.extractTransferableBuffers(B))})):f.run(I,N,B,L,F),n.endProfiling=async I=>{if(o())return A(),new Promise((N,B)=>{S.push([N,B]);const L={type:"end-profiling",in:I};t.postMessage(L)});f.endProfiling(I)}},586:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.setRunOptions=void 0;const u=a(7967),c=a(4983),p=a(6361);n.setRunOptions=s=>{const h=(0,p.getInstance)();let f=0;const l=[],o=s||{};try{if((s==null?void 0:s.logSeverityLevel)===void 0)o.logSeverityLevel=2;else if(typeof s.logSeverityLevel!="number"||!Number.isInteger(s.logSeverityLevel)||s.logSeverityLevel<0||s.logSeverityLevel>4)throw new Error(`log serverity level is not valid: ${s.logSeverityLevel}`);if((s==null?void 0:s.logVerbosityLevel)===void 0)o.logVerbosityLevel=0;else if(typeof s.logVerbosityLevel!="number"||!Number.isInteger(s.logVerbosityLevel))throw new Error(`log verbosity level is not valid: ${s.logVerbosityLevel}`);(s==null?void 0:s.terminate)===void 0&&(o.terminate=!1);let t=0;if((s==null?void 0:s.tag)!==void 0&&(t=(0,c.allocWasmString)(s.tag,l)),f=h._OrtCreateRunOptions(o.logSeverityLevel,o.logVerbosityLevel,!!o.terminate,t),f===0)throw new Error("Can't create run options");return(s==null?void 0:s.extra)!==void 0&&(0,u.iterateExtraOptions)(s.extra,"",new WeakSet,(e,r)=>{const i=(0,c.allocWasmString)(e,l),d=(0,c.allocWasmString)(r,l);if(h._OrtAddRunConfigEntry(f,i,d)!==0)throw new Error(`Can't set a run config entry: ${e} - ${r}`)}),[f,l]}catch(t){throw f!==0&&h._OrtReleaseRunOptions(f),l.forEach(h._free),t}}},2306:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.OnnxruntimeWebAssemblySessionHandler=void 0;const u=a(2806),c=a(1670),p=a(2850),s=a(2157);let h;n.OnnxruntimeWebAssemblySessionHandler=class{async createSessionAllocate(f){const l=await fetch(f),o=await l.arrayBuffer();return(0,s.createSessionAllocate)(new Uint8Array(o))}async loadModel(f,l){if(h||(await(0,s.initOrt)(c.env.wasm.numThreads,(o=>{switch(o){case"verbose":return 0;case"info":return 1;case"warning":return 2;case"error":return 3;case"fatal":return 4;default:throw new Error(`unsupported logging level: ${o}`)}})(c.env.logLevel)),h=!0),typeof f=="string")if(typeof fetch>"u"){const o=await(0,p.promisify)(u.readFile)(f);[this.sessionId,this.inputNames,this.outputNames]=await(0,s.createSession)(o,l)}else{const o=await this.createSessionAllocate(f);[this.sessionId,this.inputNames,this.outputNames]=await(0,s.createSessionFinalize)(o,l)}else[this.sessionId,this.inputNames,this.outputNames]=await(0,s.createSession)(f,l)}async dispose(){return(0,s.releaseSession)(this.sessionId)}async run(f,l,o){const t=[],e=[];Object.entries(f).forEach(g=>{const m=g[0],b=g[1],_=this.inputNames.indexOf(m);if(_===-1)throw new Error(`invalid input '${m}'`);t.push(b),e.push(_)});const r=[];Object.entries(l).forEach(g=>{const m=g[0],b=this.outputNames.indexOf(m);if(b===-1)throw new Error(`invalid output '${m}'`);r.push(b)});const i=await(0,s.run)(this.sessionId,e,t.map(g=>[g.type,g.dims,g.data]),r,o),d={};for(let g=0;g<i.length;g++)d[this.outputNames[r[g]]]=new c.Tensor(i[g][0],i[g][2],i[g][1]);return d}startProfiling(){}endProfiling(){(0,s.endProfiling)(this.sessionId)}}},4919:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.setSessionOptions=void 0;const u=a(7967),c=a(4983),p=a(6361);n.setSessionOptions=s=>{const h=(0,p.getInstance)();let f=0;const l=[],o=s||{};(t=>{t.extra||(t.extra={}),t.extra.session||(t.extra.session={});const 
e=t.extra.session;e.use_ort_model_bytes_directly||(e.use_ort_model_bytes_directly="1")})(o);try{(s==null?void 0:s.graphOptimizationLevel)===void 0&&(o.graphOptimizationLevel="all");const t=(i=>{switch(i){case"disabled":return 0;case"basic":return 1;case"extended":return 2;case"all":return 99;default:throw new Error(`unsupported graph optimization level: ${i}`)}})(o.graphOptimizationLevel);(s==null?void 0:s.enableCpuMemArena)===void 0&&(o.enableCpuMemArena=!0),(s==null?void 0:s.enableMemPattern)===void 0&&(o.enableMemPattern=!0),(s==null?void 0:s.executionMode)===void 0&&(o.executionMode="sequential");const e=(i=>{switch(i){case"sequential":return 0;case"parallel":return 1;default:throw new Error(`unsupported execution mode: ${i}`)}})(o.executionMode);let r=0;if((s==null?void 0:s.logId)!==void 0&&(r=(0,c.allocWasmString)(s.logId,l)),(s==null?void 0:s.logSeverityLevel)===void 0)o.logSeverityLevel=2;else if(typeof s.logSeverityLevel!="number"||!Number.isInteger(s.logSeverityLevel)||s.logSeverityLevel<0||s.logSeverityLevel>4)throw new Error(`log serverity level is not valid: ${s.logSeverityLevel}`);if((s==null?void 0:s.logVerbosityLevel)===void 0)o.logVerbosityLevel=0;else if(typeof s.logVerbosityLevel!="number"||!Number.isInteger(s.logVerbosityLevel))throw new Error(`log verbosity level is not valid: ${s.logVerbosityLevel}`);if((s==null?void 0:s.enableProfiling)===void 0&&(o.enableProfiling=!1),f=h._OrtCreateSessionOptions(t,!!o.enableCpuMemArena,!!o.enableMemPattern,e,!!o.enableProfiling,0,r,o.logSeverityLevel,o.logVerbosityLevel),f===0)throw new Error("Can't create session options");return s!=null&&s.executionProviders&&((i,d,g)=>{for(const m of d){let b=typeof m=="string"?m:m.name;switch(b){case"xnnpack":b="XNNPACK";break;case"wasm":case"cpu":continue;default:throw new Error(`not supported EP: ${b}`)}const _=(0,c.allocWasmString)(b,g);if((0,p.getInstance)()._OrtAppendExecutionProvider(i,_)!==0)throw new Error(`Can't append execution provider: ${b}`)}})(f,s.executionProviders,l),(s==null?void 0:s.extra)!==void 0&&(0,u.iterateExtraOptions)(s.extra,"",new WeakSet,(i,d)=>{const g=(0,c.allocWasmString)(i,l),m=(0,c.allocWasmString)(d,l);if(h._OrtAddSessionConfigEntry(f,g,m)!==0)throw new Error(`Can't set a session config entry: ${i} - ${d}`)}),[f,l]}catch(t){throw f!==0&&h._OrtReleaseSessionOptions(f),l.forEach(h._free),t}}},4983:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.allocWasmString=void 0;const u=a(6361);n.allocWasmString=(c,p)=>{const s=(0,u.getInstance)(),h=s.lengthBytesUTF8(c)+1,f=s._malloc(h);return s.stringToUTF8(c,f,h),p.push(f),f}},349:(y,n,a)=>{Object.defineProperty(n,"__esModule",{value:!0}),n.extractTransferableBuffers=n.endProfiling=n.run=n.releaseSession=n.createSession=n.createSessionFinalize=n.createSessionAllocate=n.initOrt=void 0;const u=a(586),c=a(4919),p=a(4983),s=a(6361);n.initOrt=(t,e)=>{const r=(0,s.getInstance)()._OrtInit(t,e);if(r!==0)throw new Error(`Can't initialize onnxruntime. 
error code = ${r}`)};const h=new Map;n.createSessionAllocate=t=>{const e=(0,s.getInstance)(),r=e._malloc(t.byteLength);return e.HEAPU8.set(t,r),[r,t.byteLength]},n.createSessionFinalize=(t,e)=>{const r=(0,s.getInstance)();let i=0,d=0,g=[];try{if([d,g]=(0,c.setSessionOptions)(e),i=r._OrtCreateSession(t[0],t[1],d),i===0)throw new Error("Can't create a session")}finally{r._free(t[0]),r._OrtReleaseSessionOptions(d),g.forEach(r._free)}const m=r._OrtGetInputCount(i),b=r._OrtGetOutputCount(i),_=[],v=[],w=[],S=[];for(let A=0;A<m;A++){const O=r._OrtGetInputName(i,A);if(O===0)throw new Error("Can't get an input name");v.push(O),_.push(r.UTF8ToString(O))}for(let A=0;A<b;A++){const O=r._OrtGetOutputName(i,A);if(O===0)throw new Error("Can't get an output name");S.push(O),w.push(r.UTF8ToString(O))}return h.set(i,[i,v,S]),[i,_,w]},n.createSession=(t,e)=>{const r=(0,n.createSessionAllocate)(t);return(0,n.createSessionFinalize)(r,e)},n.releaseSession=t=>{const e=(0,s.getInstance)(),r=h.get(t);if(!r)throw new Error("invalid session id");const i=r[0],d=r[1],g=r[2];d.forEach(e._OrtFree),g.forEach(e._OrtFree),e._OrtReleaseSession(i),h.delete(t)};const f=t=>{switch(t){case"int8":return 3;case"uint8":return 2;case"bool":return 9;case"int16":return 5;case"uint16":return 4;case"int32":return 6;case"uint32":return 12;case"float32":return 1;case"float64":return 11;case"string":return 8;case"int64":return 7;case"uint64":return 13;default:throw new Error(`unsupported data type: ${t}`)}},l=t=>{switch(t){case 3:return"int8";case 2:return"uint8";case 9:return"bool";case 5:return"int16";case 4:return"uint16";case 6:return"int32";case 12:return"uint32";case 1:return"float32";case 11:return"float64";case 8:return"string";case 7:return"int64";case 13:return"uint64";default:throw new Error(`unsupported data type: ${t}`)}},o=t=>{switch(t){case"float32":return Float32Array;case"uint8":case"bool":return Uint8Array;case"int8":return Int8Array;case"uint16":return Uint16Array;case"int16":return Int16Array;case"int32":return Int32Array;case"float64":return Float64Array;case"uint32":return Uint32Array;case"int64":return BigInt64Array;case"uint64":return BigUint64Array;default:throw new Error(`unsupported type: ${t}`)}};n.run=(t,e,r,i,d)=>{const g=(0,s.getInstance)(),m=h.get(t);if(!m)throw new Error("invalid session id");const b=m[0],_=m[1],v=m[2],w=e.length,S=i.length;let A=0,O=[];const x=[],I=[];try{[A,O]=(0,u.setRunOptions)(d);for(let D=0;D<w;D++){const j=r[D][0],Z=r[D][1],X=r[D][2];let J,ee;if(Array.isArray(X)){ee=4*X.length,J=g._malloc(ee),I.push(J);let ve=J/4;for(let oe=0;oe<X.length;oe++){if(typeof X[oe]!="string")throw new TypeError(`tensor data at index ${oe} is not a string`);g.HEAPU32[ve++]=(0,p.allocWasmString)(X[oe],I)}}else ee=X.byteLength,J=g._malloc(ee),I.push(J),g.HEAPU8.set(new Uint8Array(X.buffer,X.byteOffset,ee),J);const ue=g.stackSave(),Ae=g.stackAlloc(4*Z.length);try{let ve=Ae/4;Z.forEach(_e=>g.HEAP32[ve++]=_e);const oe=g._OrtCreateTensor(f(j),J,ee,Ae,Z.length);if(oe===0)throw new Error("Can't create a tensor");x.push(oe)}finally{g.stackRestore(ue)}}const N=g.stackSave(),B=g.stackAlloc(4*w),L=g.stackAlloc(4*w),F=g.stackAlloc(4*S),H=g.stackAlloc(4*S);try{let D=B/4,j=L/4,Z=F/4,X=H/4;for(let ue=0;ue<w;ue++)g.HEAPU32[D++]=x[ue],g.HEAPU32[j++]=_[e[ue]];for(let ue=0;ue<S;ue++)g.HEAPU32[Z++]=0,g.HEAPU32[X++]=v[i[ue]];let J=g._OrtRun(b,L,B,w,H,S,F,A);const ee=[];if(J===0)for(let ue=0;ue<S;ue++){const Ae=g.HEAPU32[F/4+ue],ve=g.stackSave(),oe=g.stackAlloc(16);let 
_e,be=0;try{if(J=g._OrtGetTensorData(Ae,oe,oe+4,oe+8,oe+12),J!==0)throw new Error(`Can't access output tensor data. error code = ${J}`);let ke=oe/4;const Fe=g.HEAPU32[ke++];be=g.HEAPU32[ke++];const xe=g.HEAPU32[ke++],Ne=g.HEAPU32[ke++],Ce=[];for(let Oe=0;Oe<Ne;Oe++)Ce.push(g.HEAPU32[xe/4+Oe]);g._OrtFree(xe);const Ee=Ce.length===0?1:Ce.reduce((Oe,Be)=>Oe*Be);if(_e=l(Fe),_e==="string"){const Oe=[];let Be=be/4;for(let Ge=0;Ge<Ee;Ge++){const Ve=g.HEAPU32[Be++],Xe=Ge===Ee-1?void 0:g.HEAPU32[Be]-Ve;Oe.push(g.UTF8ToString(Ve,Xe))}ee.push([_e,Ce,Oe])}else{const Oe=new(o(_e))(Ee);new Uint8Array(Oe.buffer,Oe.byteOffset,Oe.byteLength).set(g.HEAPU8.subarray(be,be+Oe.byteLength)),ee.push([_e,Ce,Oe])}}finally{g.stackRestore(ve),_e==="string"&&be&&g._free(be),g._OrtReleaseTensor(Ae)}}if(J===0)return ee;throw new Error(`failed to call OrtRun(). error code = ${J}.`)}finally{g.stackRestore(N)}}finally{x.forEach(g._OrtReleaseTensor),I.forEach(g._free),g._OrtReleaseRunOptions(A),O.forEach(g._free)}},n.endProfiling=t=>{const e=(0,s.getInstance)(),r=h.get(t);if(!r)throw new Error("invalid session id");const i=r[0],d=e._OrtEndProfiling(i);if(d===0)throw new Error("Can't get an profile file name");e._OrtFree(d)},n.extractTransferableBuffers=t=>{const e=[];for(const r of t){const i=r[2];!Array.isArray(i)&&i.buffer&&e.push(i.buffer)}return e}},6361:function(y,n,a){var u=this&&this.__createBinding||(Object.create?function(d,g,m,b){b===void 0&&(b=m);var _=Object.getOwnPropertyDescriptor(g,m);_&&!("get"in _?!g.__esModule:_.writable||_.configurable)||(_={enumerable:!0,get:function(){return g[m]}}),Object.defineProperty(d,b,_)}:function(d,g,m,b){b===void 0&&(b=m),d[b]=g[m]}),c=this&&this.__setModuleDefault||(Object.create?function(d,g){Object.defineProperty(d,"default",{enumerable:!0,value:g})}:function(d,g){d.default=g}),p=this&&this.__importStar||function(d){if(d&&d.__esModule)return d;var g={};if(d!=null)for(var m in d)m!=="default"&&Object.prototype.hasOwnProperty.call(d,m)&&u(g,d,m);return c(g,d),g},s=this&&this.__importDefault||function(d){return d&&d.__esModule?d:{default:d}};Object.defineProperty(n,"__esModule",{value:!0}),n.dispose=n.getInstance=n.initializeWebAssembly=void 0;const h=p(a(6449)),f=s(a(932)),l=a(3474);let o,t=!1,e=!1,r=!1;const i=(d,g)=>g?d?"ort-wasm-simd-threaded.wasm":"ort-wasm-threaded.wasm":d?"ort-wasm-simd.wasm":"ort-wasm.wasm";n.initializeWebAssembly=async d=>{if(t)return Promise.resolve();if(e)throw new Error("multiple calls to 'initializeWebAssembly()' detected.");if(r)throw new Error("previous call to 'initializeWebAssembly()' failed.");e=!0;const g=d.initTimeout,m=d.numThreads,b=d.simd,_=m>1&&(()=>{try{return typeof SharedArrayBuffer<"u"&&(typeof MessageChannel<"u"&&new MessageChannel().port1.postMessage(new SharedArrayBuffer(1)),WebAssembly.validate(new Uint8Array([0,97,115,109,1,0,0,0,1,4,1,96,0,0,3,2,1,0,5,4,1,3,1,1,10,11,1,9,0,65,0,254,16,2,0,26,11])))}catch{return!1}})(),v=b&&(()=>{try{return WebAssembly.validate(new Uint8Array([0,97,115,109,1,0,0,0,1,4,1,96,0,0,3,2,1,0,10,30,1,28,0,65,0,253,15,253,12,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,253,186,1,26,11]))}catch{return!1}})(),w=typeof d.wasmPaths=="string"?d.wasmPaths:void 0,S=i(!1,_),A=i(v,_),O=typeof d.wasmPaths=="object"?d.wasmPaths[A]:void 0;let x=!1;const I=[];if(g>0&&I.push(new Promise(N=>{setTimeout(()=>{x=!0,N()},g)})),I.push(new Promise((N,B)=>{const L=_?l:f.default,F={locateFile:(H,D)=>_&&H.endsWith(".worker.js")&&typeof Blob<"u"?URL.createObjectURL(new 
Blob([a(4154)],{type:"text/javascript"})):H===S?O??(w??D)+A:D+H};if(_)if(typeof Blob>"u")F.mainScriptUrlOrBlob=h.join("/","ort-wasm-threaded.js");else{const H=`var ortWasmThreaded=(function(){var _scriptDir;return ${L.toString()}})();`;F.mainScriptUrlOrBlob=new Blob([H],{type:"text/javascript"})}L(F).then(H=>{e=!1,t=!0,o=H,N()},H=>{e=!1,r=!0,B(H)})})),await Promise.race(I),x)throw new Error(`WebAssembly backend initializing failed due to timeout: ${g}ms`)},n.getInstance=()=>{if(t&&o)return o;throw new Error("WebAssembly is not initialized yet.")},n.dispose=()=>{var d;!t||e||r||(e=!0,(d=o.PThread)===null||d===void 0||d.terminateAllThreads(),o=void 0,e=!1,t=!1,r=!0)}},9710:(y,n,a)=>{a.d(n,{Z:()=>p});var u=a(477),c=a.n(u);function p(){return c()('/*!\n* ONNX Runtime Web v1.14.0\n* Copyright (c) Microsoft Corporation. All rights reserved.\n* Licensed under the MIT License.\n*/\n(()=>{var t={474:(t,e,n)=>{var _scriptDir,r=(_scriptDir=(_scriptDir="undefined"!=typeof document&&document.currentScript?document.currentScript.src:void 0)||"/index.js",function(t){function e(){return j.buffer!=D&&N(j.buffer),P}function r(){return j.buffer!=D&&N(j.buffer),U}function a(){return j.buffer!=D&&N(j.buffer),F}function i(){return j.buffer!=D&&N(j.buffer),I}function o(){return j.buffer!=D&&N(j.buffer),W}var u,c,s;t=t||{},u||(u=void 0!==t?t:{}),u.ready=new Promise((function(t,e){c=t,s=e}));var l,f,p,h,d,y,b=Object.assign({},u),m="./this.program",g=(t,e)=>{throw e},v="object"==typeof window,w="function"==typeof importScripts,_="object"==typeof process&&"object"==typeof process.versions&&"string"==typeof process.versions.node,O=u.ENVIRONMENT_IS_PTHREAD||!1,A="";function S(t){return u.locateFile?u.locateFile(t,A):A+t}if(_){let e;A=w?n(908).dirname(A)+"/":"//",y=()=>{d||(h=n(384),d=n(908))},l=function(t,e){return y(),t=d.normalize(t),h.readFileSync(t,e?void 0:"utf8")},p=t=>((t=l(t,!0)).buffer||(t=new Uint8Array(t)),t),f=(t,e,n)=>{y(),t=d.normalize(t),h.readFile(t,(function(t,r){t?n(t):e(r.buffer)}))},1<process.argv.length&&(m=process.argv[1].replace(/\\\\/g,"/")),process.argv.slice(2),process.on("uncaughtException",(function(t){if(!(t instanceof ct))throw t})),process.on("unhandledRejection",(function(t){throw t})),g=(t,e)=>{if(Q())throw process.exitCode=t,e;e instanceof ct||x("exiting due to exception: "+e),process.exit(t)},u.inspect=function(){return"[Emscripten Module object]"};try{e=n(925)}catch(t){throw console.error(\'The "worker_threads" module is not supported in this node.js build - perhaps a newer version is needed?\'),t}n.g.Worker=e.Worker}else(v||w)&&(w?A=self.location.href:"undefined"!=typeof document&&document.currentScript&&(A=document.currentScript.src),_scriptDir&&(A=_scriptDir),A=0!==A.indexOf("blob:")?A.substr(0,A.replace(/[?#].*/,"").lastIndexOf("/")+1):"",_||(l=t=>{var e=new XMLHttpRequest;return e.open("GET",t,!1),e.send(null),e.responseText},w&&(p=t=>{var e=new XMLHttpRequest;return e.open("GET",t,!1),e.responseType="arraybuffer",e.send(null),new Uint8Array(e.response)}),f=(t,e,n)=>{var r=new XMLHttpRequest;r.open("GET",t,!0),r.responseType="arraybuffer",r.onload=()=>{200==r.status||0==r.status&&r.response?e(r.response):n()},r.onerror=n,r.send(null)}));_&&"undefined"==typeof performance&&(n.g.performance=n(953).performance);var T=console.log.bind(console),E=console.warn.bind(console);_&&(y(),T=t=>h.writeSync(1,t+"\\n"),E=t=>h.writeSync(2,t+"\\n"));var M,C=u.print||T,x=u.printErr||E;Object.assign(u,b),b=null,u.thisProgram&&(m=u.thisProgram),u.quit&&(g=u.quit),u.wasmBinary&&(M=u.wasmBinary);var 
R=u.noExitRuntime||!1;"object"!=typeof WebAssembly&&at("no native wasm support detected");var j,k,D,P,U,F,I,W,H=!1,L="undefined"!=typeof TextDecoder?new TextDecoder("utf8"):void 0;function z(t,e,n){var r=(e>>>=0)+n;for(n=e;t[n]&&!(n>=r);)++n;if(16<n-e&&t.buffer&&L)return L.decode(t.buffer instanceof SharedArrayBuffer?t.slice(e,n):t.subarray(e,n));for(r="";e<n;){var a=t[e++];if(128&a){var i=63&t[e++];if(192==(224&a))r+=String.fromCharCode((31&a)<<6|i);else{var o=63&t[e++];65536>(a=224==(240&a)?(15&a)<<12|i<<6|o:(7&a)<<18|i<<12|o<<6|63&t[e++])?r+=String.fromCharCode(a):(a-=65536,r+=String.fromCharCode(55296|a>>10,56320|1023&a))}}else r+=String.fromCharCode(a)}return r}function Y(t,e){return(t>>>=0)?z(r(),t,e):""}function B(t,e,n,r){if(!(0<r))return 0;var a=n>>>=0;r=n+r-1;for(var i=0;i<t.length;++i){var o=t.charCodeAt(i);if(55296<=o&&57343>=o&&(o=65536+((1023&o)<<10)|1023&t.charCodeAt(++i)),127>=o){if(n>=r)break;e[n++>>>0]=o}else{if(2047>=o){if(n+1>=r)break;e[n++>>>0]=192|o>>6}else{if(65535>=o){if(n+2>=r)break;e[n++>>>0]=224|o>>12}else{if(n+3>=r)break;e[n++>>>0]=240|o>>18,e[n++>>>0]=128|o>>12&63}e[n++>>>0]=128|o>>6&63}e[n++>>>0]=128|63&o}}return e[n>>>0]=0,n-a}function G(t){for(var e=0,n=0;n<t.length;++n){var r=t.charCodeAt(n);127>=r?e++:2047>=r?e+=2:55296<=r&&57343>=r?(e+=4,++n):e+=3}return e}function N(t){D=t,u.HEAP8=P=new Int8Array(t),u.HEAP16=new Int16Array(t),u.HEAP32=F=new Int32Array(t),u.HEAPU8=U=new Uint8Array(t),u.HEAPU16=new Uint16Array(t),u.HEAPU32=I=new Uint32Array(t),u.HEAPF32=new Float32Array(t),u.HEAPF64=W=new Float64Array(t)}O&&(D=u.buffer);var V=u.INITIAL_MEMORY||16777216;if(O)j=u.wasmMemory,D=u.buffer;else if(u.wasmMemory)j=u.wasmMemory;else if(!((j=new WebAssembly.Memory({initial:V/65536,maximum:65536,shared:!0})).buffer instanceof SharedArrayBuffer))throw x("requested a shared WebAssembly.Memory but the returned buffer is not a SharedArrayBuffer, indicating that while the browser has SharedArrayBuffer it does not have WebAssembly threads support - you may need to set a flag"),_&&console.log("(on node you may need: --experimental-wasm-threads --experimental-wasm-bulk-memory and also use a recent version)"),Error("bad memory");j&&(D=j.buffer),V=D.byteLength,N(D);var $,q=[],X=[],J=[],Z=[];function Q(){return R||!1}function K(){var t=u.preRun.shift();q.unshift(t)}var tt,et=0,nt=null,rt=null;function at(t){throw O?postMessage({cmd:"onAbort",arg:t}):u.onAbort&&u.onAbort(t),x(t="Aborted("+t+")"),H=!0,t=new WebAssembly.RuntimeError(t+". 
Build with -sASSERTIONS for more info."),s(t),t}function it(){return tt.startsWith("data:application/octet-stream;base64,")}function ot(){var t=tt;try{if(t==tt&&M)return new Uint8Array(M);if(p)return p(t);throw"both async and sync fetching of the wasm failed"}catch(t){at(t)}}tt="ort-wasm-threaded.wasm",it()||(tt=S(tt));var ut={};function ct(t){this.name="ExitStatus",this.message="Program terminated with exit("+t+")",this.status=t}function st(t){(t=ht.Vb[t])||at(),ht.mc(t)}function lt(t){var e=ht.Cc();if(!e)return 6;ht.ac.push(e),ht.Vb[t.Ub]=e,e.Ub=t.Ub;var n={cmd:"run",start_routine:t.Ic,arg:t.zc,pthread_ptr:t.Ub};return e.$b=()=>{n.time=performance.now(),e.postMessage(n,t.Nc)},e.loaded&&(e.$b(),delete e.$b),0}function ft(t){if(O)return $t(1,1,t);Q()||(ht.oc(),u.onExit&&u.onExit(t),H=!0),g(t,new ct(t))}function pt(t,e){if(!e&&O)throw bt(t),"unwind";Q()||O||(me(),dt(J),be(0),re[1].length&&ae(1,10),re[2].length&&ae(2,10),ht.oc()),ft(t)}var ht={Yb:[],ac:[],qc:[],Vb:{},fc:function(){O&&ht.Ec()},Pc:function(){},Ec:function(){ht.receiveObjectTransfer=ht.Gc,ht.threadInitTLS=ht.pc,ht.setExitStatus=ht.nc,R=!1},nc:function(){},oc:function(){for(var t of Object.values(ht.Vb))ht.mc(t);for(t of ht.Yb)t.terminate();ht.Yb=[]},mc:function(t){var e=t.Ub;delete ht.Vb[e],ht.Yb.push(t),ht.ac.splice(ht.ac.indexOf(t),1),t.Ub=0,Oe(e)},Gc:function(){},pc:function(){ht.qc.forEach((t=>t()))},Fc:function(t,e){t.onmessage=n=>{var r=(n=n.data).cmd;if(t.Ub&&(ht.Bc=t.Ub),n.targetThread&&n.targetThread!=he()){var a=ht.Vb[n.Qc];a?a.postMessage(n,n.transferList):x(\'Internal error! Worker sent a message "\'+r+\'" to target pthread \'+n.targetThread+", but that thread no longer exists!")}else"processProxyingQueue"===r?zt(n.queue):"spawnThread"===r?lt(n):"cleanupThread"===r?st(n.thread):"killThread"===r?(n=n.thread,r=ht.Vb[n],delete ht.Vb[n],r.terminate(),Oe(n),ht.ac.splice(ht.ac.indexOf(r),1),r.Ub=0):"cancelThread"===r?ht.Vb[n.thread].postMessage({cmd:"cancel"}):"loaded"===r?(t.loaded=!0,e&&e(t),t.$b&&(t.$b(),delete t.$b)):"print"===r?C("Thread "+n.threadId+": "+n.text):"printErr"===r?x("Thread "+n.threadId+": "+n.text):"alert"===r?alert("Thread "+n.threadId+": "+n.text):"setimmediate"===n.target?t.postMessage(n):"onAbort"===r?u.onAbort&&u.onAbort(n.arg):r&&x("worker sent an unknown command "+r);ht.Bc=void 0},t.onerror=t=>{throw x("worker sent an error! 
"+t.filename+":"+t.lineno+": "+t.message),t},_&&(t.on("message",(function(e){t.onmessage({data:e})})),t.on("error",(function(e){t.onerror(e)})),t.on("detachedExit",(function(){}))),t.postMessage({cmd:"load",urlOrBlob:u.mainScriptUrlOrBlob||_scriptDir,wasmMemory:j,wasmModule:k})},yc:function(){var t=S("ort-wasm-threaded.worker.js");ht.Yb.push(new Worker(t))},Cc:function(){return 0==ht.Yb.length&&(ht.yc(),ht.Fc(ht.Yb[0])),ht.Yb.pop()}};function dt(t){for(;0<t.length;)t.shift()(u)}function yt(t){var e=Ee();return t=t(),Me(e),t}function bt(t){if(O)return $t(2,0,t);try{pt(t)}catch(t){t instanceof ct||"unwind"==t||g(1,t)}}u.PThread=ht,u.establishStackSpace=function(){var t=he(),e=a()[t+44>>2>>>0];t=a()[t+48>>2>>>0],Te(e,e-t),Me(e)};var mt=[];function gt(t){var e=mt[t];return e||(t>=mt.length&&(mt.length=t+1),mt[t]=e=$.get(t)),e}u.invokeEntryPoint=function(t,e){t=gt(t)(e),Q()?ht.nc(t):Ae(t)};var vt,wt,_t=[],Ot=0,At=0;function St(t){this.Zb=t,this.Sb=t-24,this.xc=function(t){i()[this.Sb+4>>2>>>0]=t},this.bc=function(){return i()[this.Sb+4>>2>>>0]},this.wc=function(t){i()[this.Sb+8>>2>>>0]=t},this.Dc=function(){return i()[this.Sb+8>>2>>>0]},this.rc=function(){a()[this.Sb>>2>>>0]=0},this.hc=function(t){t=t?1:0,e()[this.Sb+12>>0>>>0]=t},this.uc=function(){return 0!=e()[this.Sb+12>>0>>>0]},this.ic=function(t){t=t?1:0,e()[this.Sb+13>>0>>>0]=t},this.kc=function(){return 0!=e()[this.Sb+13>>0>>>0]},this.fc=function(t,e){this.cc(0),this.xc(t),this.wc(e),this.rc(),this.hc(!1),this.ic(!1)},this.sc=function(){Atomics.add(a(),this.Sb>>2,1)},this.Hc=function(){return 1===Atomics.sub(a(),this.Sb>>2,1)},this.cc=function(t){i()[this.Sb+16>>2>>>0]=t},this.tc=function(){return i()[this.Sb+16>>2>>>0]},this.vc=function(){if(Re(this.bc()))return i()[this.Zb>>2>>>0];var t=this.tc();return 0!==t?t:this.Zb}}function Tt(t){return ye(new St(t).Sb)}function Et(t,e,n,r){return O?$t(3,1,t,e,n,r):Mt(t,e,n,r)}function Mt(t,e,n,r){if("undefined"==typeof SharedArrayBuffer)return x("Current environment does not support SharedArrayBuffer, pthreads are not available!"),6;var a=[];return O&&0===a.length?Et(t,e,n,r):(t={Ic:n,Ub:t,zc:r,Nc:a},O?(t.Oc="spawnThread",postMessage(t,a),0):lt(t))}function Ct(t,e,n){return O?$t(4,1,t,e,n):0}function xt(t,e){if(O)return $t(5,1,t,e)}function Rt(t,e){if(O)return $t(6,1,t,e)}function jt(t,e,n){if(O)return $t(7,1,t,e,n)}function kt(t,e,n){return O?$t(8,1,t,e,n):0}function Dt(t,e){if(O)return $t(9,1,t,e)}function Pt(t,e,n){if(O)return $t(10,1,t,e,n)}function Ut(t,e,n,r){if(O)return $t(11,1,t,e,n,r)}function Ft(t,e,n,r){if(O)return $t(12,1,t,e,n,r)}function It(t,e,n,r){if(O)return $t(13,1,t,e,n,r)}function Wt(t){if(O)return $t(14,1,t)}function Ht(t,e){if(O)return $t(15,1,t,e)}function Lt(t,e,n){if(O)return $t(16,1,t,e,n)}function zt(t){Atomics.store(a(),t>>2,1),he()&&_e(t),Atomics.compareExchange(a(),t>>2,1,0)}function Yt(t){return i()[t>>>2]+4294967296*a()[t+4>>>2]}function Bt(t,e,n,r,a,i){return O?$t(17,1,t,e,n,r,a,i):-52}function Gt(t,e,n,r,a,i){if(O)return $t(18,1,t,e,n,r,a,i)}function Nt(t){var n=G(t)+1,r=de(n);return r&&B(t,e(),r,n),r}function Vt(t,e,n){function r(t){return(t=t.toTimeString().match(/\\(([A-Za-z ]+)\\)$/))?t[1]:"GMT"}if(O)return $t(19,1,t,e,n);var o=(new Date).getFullYear(),u=new Date(o,0,1),c=new Date(o,6,1);o=u.getTimezoneOffset();var s=c.getTimezoneOffset(),l=Math.max(o,s);a()[t>>2>>>0]=60*l,a()[e>>2>>>0]=Number(o!=s),t=r(u),e=r(c),t=Nt(t),e=Nt(e),s<o?(i()[n>>2>>>0]=t,i()[n+4>>2>>>0]=e):(i()[n>>2>>>0]=e,i()[n+4>>2>>>0]=t)}function $t(t,e){var 
n=arguments.length-2,r=arguments;return yt((()=>{for(var a=Ce(8*n),i=a>>3,u=0;u<n;u++){var c=r[2+u];o()[i+u>>>0]=c}return we(t,n,a,e)}))}u.executeNotifiedProxyingQueue=zt,wt=_?()=>{var t=process.hrtime();return 1e3*t[0]+t[1]/1e6}:O?()=>performance.now()-u.__performance_now_clock_drift:()=>performance.now();var qt,Xt=[],Jt={};function Zt(){if(!qt){var t,e={USER:"web_user",LOGNAME:"web_user",PATH:"/",PWD:"/",HOME:"/home/web_user",LANG:("object"==typeof navigator&&navigator.languages&&navigator.languages[0]||"C").replace("-","_")+".UTF-8",_:m||"./this.program"};for(t in Jt)void 0===Jt[t]?delete e[t]:e[t]=Jt[t];var n=[];for(t in e)n.push(t+"="+e[t]);qt=n}return qt}function Qt(t,n){if(O)return $t(20,1,t,n);var r=0;return Zt().forEach((function(a,o){var u=n+r;for(o=i()[t+4*o>>2>>>0]=u,u=0;u<a.length;++u)e()[o++>>0>>>0]=a.charCodeAt(u);e()[o>>0>>>0]=0,r+=a.length+1})),0}function Kt(t,e){if(O)return $t(21,1,t,e);var n=Zt();i()[t>>2>>>0]=n.length;var r=0;return n.forEach((function(t){r+=t.length+1})),i()[e>>2>>>0]=r,0}function te(t){return O?$t(22,1,t):52}function ee(t,e,n,r){return O?$t(23,1,t,e,n,r):52}function ne(t,e,n,r,a){return O?$t(24,1,t,e,n,r,a):70}var re=[null,[],[]];function ae(t,e){var n=re[t];0===e||10===e?((1===t?C:x)(z(n,0)),n.length=0):n.push(e)}function ie(t,e,n,a){if(O)return $t(25,1,t,e,n,a);for(var o=0,u=0;u<n;u++){var c=i()[e>>2>>>0],s=i()[e+4>>2>>>0];e+=8;for(var l=0;l<s;l++)ae(t,r()[c+l>>>0]);o+=s}return i()[a>>2>>>0]=o,0}var oe=0;function ue(t){return 0==t%4&&(0!=t%100||0==t%400)}var ce=[31,29,31,30,31,30,31,31,30,31,30,31],se=[31,28,31,30,31,30,31,31,30,31,30,31];function le(t,n,r,i){function o(t,e,n){for(t="number"==typeof t?t.toString():t||"";t.length<e;)t=n[0]+t;return t}function u(t,e){return o(t,e,"0")}function c(t,e){function n(t){return 0>t?-1:0<t?1:0}var r;return 0===(r=n(t.getFullYear()-e.getFullYear()))&&0===(r=n(t.getMonth()-e.getMonth()))&&(r=n(t.getDate()-e.getDate())),r}function s(t){switch(t.getDay()){case 0:return new Date(t.getFullYear()-1,11,29);case 1:return t;case 2:return new Date(t.getFullYear(),0,3);case 3:return new Date(t.getFullYear(),0,2);case 4:return new Date(t.getFullYear(),0,1);case 5:return new Date(t.getFullYear()-1,11,31);case 6:return new Date(t.getFullYear()-1,11,30)}}function l(t){var e=t.Wb;for(t=new Date(new Date(t.Xb+1900,0,1).getTime());0<e;){var n=t.getMonth(),r=(ue(t.getFullYear())?ce:se)[n];if(!(e>r-t.getDate())){t.setDate(t.getDate()+e);break}e-=r-t.getDate()+1,t.setDate(1),11>n?t.setMonth(n+1):(t.setMonth(0),t.setFullYear(t.getFullYear()+1))}return n=new Date(t.getFullYear()+1,0,4),e=s(new Date(t.getFullYear(),0,4)),n=s(n),0>=c(e,t)?0>=c(n,t)?t.getFullYear()+1:t.getFullYear():t.getFullYear()-1}var f=a()[i+40>>2>>>0];for(var p in i={Lc:a()[i>>2>>>0],Kc:a()[i+4>>2>>>0],dc:a()[i+8>>2>>>0],jc:a()[i+12>>2>>>0],ec:a()[i+16>>2>>>0],Xb:a()[i+20>>2>>>0],Tb:a()[i+24>>2>>>0],Wb:a()[i+28>>2>>>0],Rc:a()[i+32>>2>>>0],Jc:a()[i+36>>2>>>0],Mc:f?Y(f):""},r=Y(r),f={"%c":"%a %b %d %H:%M:%S %Y","%D":"%m/%d/%y","%F":"%Y-%m-%d","%h":"%b","%r":"%I:%M:%S %p","%R":"%H:%M","%T":"%H:%M:%S","%x":"%m/%d/%y","%X":"%H:%M:%S","%Ec":"%c","%EC":"%C","%Ex":"%m/%d/%y","%EX":"%H:%M:%S","%Ey":"%y","%EY":"%Y","%Od":"%d","%Oe":"%e","%OH":"%H","%OI":"%I","%Om":"%m","%OM":"%M","%OS":"%S","%Ou":"%u","%OU":"%U","%OV":"%V","%Ow":"%w","%OW":"%W","%Oy":"%y"})r=r.replace(new RegExp(p,"g"),f[p]);var h="Sunday Monday Tuesday Wednesday Thursday Friday Saturday".split(" "),d="January February March April May June July August September October November December".split(" ");for(p in 
f={"%a":function(t){return h[t.Tb].substring(0,3)},"%A":function(t){return h[t.Tb]},"%b":function(t){return d[t.ec].substring(0,3)},"%B":function(t){return d[t.ec]},"%C":function(t){return u((t.Xb+1900)/100|0,2)},"%d":function(t){return u(t.jc,2)},"%e":function(t){return o(t.jc,2," ")},"%g":function(t){return l(t).toString().substring(2)},"%G":function(t){return l(t)},"%H":function(t){return u(t.dc,2)},"%I":function(t){return 0==(t=t.dc)?t=12:12<t&&(t-=12),u(t,2)},"%j":function(t){for(var e=0,n=0;n<=t.ec-1;e+=(ue(t.Xb+1900)?ce:se)[n++]);return u(t.jc+e,3)},"%m":function(t){return u(t.ec+1,2)},"%M":function(t){return u(t.Kc,2)},"%n":function(){return"\\n"},"%p":function(t){return 0<=t.dc&&12>t.dc?"AM":"PM"},"%S":function(t){return u(t.Lc,2)},"%t":function(){return"\\t"},"%u":function(t){return t.Tb||7},"%U":function(t){return u(Math.floor((t.Wb+7-t.Tb)/7),2)},"%V":function(t){var e=Math.floor((t.Wb+7-(t.Tb+6)%7)/7);if(2>=(t.Tb+371-t.Wb-2)%7&&e++,e)53==e&&(4==(n=(t.Tb+371-t.Wb)%7)||3==n&&ue(t.Xb)||(e=1));else{e=52;var n=(t.Tb+7-t.Wb-1)%7;(4==n||5==n&&ue(t.Xb%400-1))&&e++}return u(e,2)},"%w":function(t){return t.Tb},"%W":function(t){return u(Math.floor((t.Wb+7-(t.Tb+6)%7)/7),2)},"%y":function(t){return(t.Xb+1900).toString().substring(2)},"%Y":function(t){return t.Xb+1900},"%z":function(t){var e=0<=(t=t.Jc);return t=Math.abs(t)/60,(e?"+":"-")+String("0000"+(t/60*100+t%60)).slice(-4)},"%Z":function(t){return t.Mc},"%%":function(){return"%"}},r=r.replace(/%%/g,"\\0\\0"),f)r.includes(p)&&(r=r.replace(new RegExp(p,"g"),f[p](i)));return p=function(t){var e=Array(G(t)+1);return B(t,e,0,e.length),e}(r=r.replace(/\\0\\0/g,"%")),p.length>n?0:(function(t,n){e().set(t,n>>>0)}(p,t),p.length-1)}ht.fc();var fe=[null,ft,bt,Et,Ct,xt,Rt,jt,kt,Dt,Pt,Ut,Ft,It,Wt,Ht,Lt,Bt,Gt,Vt,Qt,Kt,te,ee,ne,ie],pe={b:function(t){return de(t+24)+24},n:function(t){return(t=new St(t)).uc()||(t.hc(!0),Ot--),t.ic(!1),_t.push(t),t.sc(),t.vc()},ma:function(t){throw x("Unexpected exception thrown, this is not properly supported - aborting"),H=!0,t},x:function(){Se(0);var t=_t.pop();if(t.Hc()&&!t.kc()){var e=t.Dc();e&>(e)(t.Zb),Tt(t.Zb)}At=0},e:function(){var t=At;if(!t)return oe=0;var e=new St(t);e.cc(t);var n=e.bc();if(!n)return oe=0,t;for(var r=Array.prototype.slice.call(arguments),a=0;a<r.length;a++){var i=r[a];if(0===i||i===n)break;if(xe(i,n,e.Sb+16))return oe=i,t}return oe=n,t},l:function(){var t=At;if(!t)return oe=0;var e=new St(t);e.cc(t);var n=e.bc();if(!n)return oe=0,t;for(var r=Array.prototype.slice.call(arguments),a=0;a<r.length;a++){var i=r[a];if(0===i||i===n)break;if(xe(i,n,e.Sb+16))return oe=i,t}return oe=n,t},h:function(){var t=At;if(!t)return oe=0;var e=new St(t);e.cc(t);var n=e.bc();if(!n)return oe=0,t;for(var r=Array.prototype.slice.call(arguments),a=0;a<r.length;a++){var i=r[a];if(0===i||i===n)break;if(xe(i,n,e.Sb+16))return oe=i,t}return oe=n,t},t:Tt,M:function(){var t=_t.pop();t||at("no exception to throw");var e=t.Zb;throw t.kc()||(_t.push(t),t.ic(!0),t.hc(!1),Ot++),At=e,e},c:function(t,e,n){throw new St(t).fc(e,n),At=t,Ot++,t},pa:function(){return Ot},Fa:function(t){ge(t,!w,1,!v),ht.pc()},T:function(t){O?postMessage({cmd:"cleanupThread",thread:t}):st(t)},xa:Mt,j:function(t){throw At||(At=t),t},H:Ct,Ma:xt,ua:Rt,wa:jt,oa:kt,Ka:Dt,Ca:Pt,Ja:Ut,V:Ft,va:It,sa:Wt,La:Ht,ta:Lt,Ta:function(){},X:function(){at("To use dlopen, you need enable dynamic linking, see https://github.com/emscripten-core/emscripten/wiki/Linking")},Ua:function(){at("To use dlopen, you need enable dynamic linking, see 
https://github.com/emscripten-core/emscripten/wiki/Linking")},W:function(){return Date.now()},ya:function(){return 2097152},Oa:function(){return!0},za:function(t,e,n,r){if(t==e)setTimeout((()=>zt(r)));else if(O)postMessage({targetThread:t,cmd:"processProxyingQueue",queue:r});else{if(!(t=ht.Vb[t]))return;t.postMessage({cmd:"processProxyingQueue",queue:r})}return 1},Ea:function(){return-1},Pa:function(t,e){t=new Date(1e3*Yt(t)),a()[e>>2>>>0]=t.getUTCSeconds(),a()[e+4>>2>>>0]=t.getUTCMinutes(),a()[e+8>>2>>>0]=t.getUTCHours(),a()[e+12>>2>>>0]=t.getUTCDate(),a()[e+16>>2>>>0]=t.getUTCMonth(),a()[e+20>>2>>>0]=t.getUTCFullYear()-1900,a()[e+24>>2>>>0]=t.getUTCDay(),t=(t.getTime()-Date.UTC(t.getUTCFullYear(),0,1,0,0,0,0))/864e5|0,a()[e+28>>2>>>0]=t},Qa:function(t,e){t=new Date(1e3*Yt(t)),a()[e>>2>>>0]=t.getSeconds(),a()[e+4>>2>>>0]=t.getMinutes(),a()[e+8>>2>>>0]=t.getHours(),a()[e+12>>2>>>0]=t.getDate(),a()[e+16>>2>>>0]=t.getMonth(),a()[e+20>>2>>>0]=t.getFullYear()-1900,a()[e+24>>2>>>0]=t.getDay();var n=new Date(t.getFullYear(),0,1),r=(t.getTime()-n.getTime())/864e5|0;a()[e+28>>2>>>0]=r,a()[e+36>>2>>>0]=-60*t.getTimezoneOffset(),r=new Date(t.getFullYear(),6,1).getTimezoneOffset(),t=0|(r!=(n=n.getTimezoneOffset())&&t.getTimezoneOffset()==Math.min(n,r)),a()[e+32>>2>>>0]=t},Ra:function(t){var e=new Date(a()[t+20>>2>>>0]+1900,a()[t+16>>2>>>0],a()[t+12>>2>>>0],a()[t+8>>2>>>0],a()[t+4>>2>>>0],a()[t>>2>>>0],0),n=a()[t+32>>2>>>0],r=e.getTimezoneOffset(),i=new Date(e.getFullYear(),0,1),o=new Date(e.getFullYear(),6,1).getTimezoneOffset(),u=i.getTimezoneOffset(),c=Math.min(u,o);return 0>n?a()[t+32>>2>>>0]=Number(o!=u&&c==r):0<n!=(c==r)&&(o=Math.max(u,o),e.setTime(e.getTime()+6e4*((0<n?c:o)-r))),a()[t+24>>2>>>0]=e.getDay(),n=(e.getTime()-i.getTime())/864e5|0,a()[t+28>>2>>>0]=n,a()[t>>2>>>0]=e.getSeconds(),a()[t+4>>2>>>0]=e.getMinutes(),a()[t+8>>2>>>0]=e.getHours(),a()[t+12>>2>>>0]=e.getDate(),a()[t+16>>2>>>0]=e.getMonth(),e.getTime()/1e3|0},Aa:Bt,Ba:Gt,Sa:function t(e,n,r){t.Ac||(t.Ac=!0,Vt(e,n,r))},y:function(){at("")},U:function(){if(!_&&!w){var t="Blocking on the main thread is very dangerous, see https://emscripten.org/docs/porting/pthreads.html#blocking-on-the-main-browser-thread";vt||(vt={}),vt[t]||(vt[t]=1,_&&(t="warning: "+t),x(t))}},ra:function(){return 4294901760},B:wt,Ia:function(t,e,n){r().copyWithin(t>>>0,e>>>0,e+n>>>0)},F:function(){return _?n(993).cpus().length:navigator.hardwareConcurrency},Da:function(t,e,n){Xt.length=e,n>>=3;for(var r=0;r<e;r++)Xt[r]=o()[n+r>>>0];return(0>t?ut[-t-1]:fe[t]).apply(null,Xt)},qa:function(t){var e=r().length;if((t>>>=0)<=e||4294901760<t)return!1;for(var n=1;4>=n;n*=2){var a=e*(1+.2/n);a=Math.min(a,t+100663296);var i=Math;a=Math.max(t,a),i=i.min.call(i,4294901760,a+(65536-a%65536)%65536);t:{try{j.grow(i-D.byteLength+65535>>>16),N(j.buffer);var o=1;break t}catch(t){}o=void 0}if(o)return!0}return!1},Na:function(){throw"unwind"},Ga:Qt,Ha:Kt,J:pt,I:te,S:ee,ga:ne,R:ie,d:function(){return oe},na:function t(r,a){t.lc||(t.lc=function(){if("object"==typeof crypto&&"function"==typeof crypto.getRandomValues){var t=new Uint8Array(1);return()=>(crypto.getRandomValues(t),t[0])}if(_)try{var e=n(Object(function(){var t=new Error("Cannot find module \'crypto\'");throw t.code="MODULE_NOT_FOUND",t}()));return()=>e.randomBytes(1)[0]}catch(t){}return()=>at("randomDevice")}());for(var i=0;i<a;i++)e()[r+i>>0>>>0]=t.lc();return 0},ia:function(t,e,n){var r=Ee();try{return gt(t)(e,n)}catch(t){if(Me(r),t!==t+0)throw t;Se(1,0)}},ja:function(t,e,n){var r=Ee();try{return 
gt(t)(e,n)}catch(t){if(Me(r),t!==t+0)throw t;Se(1,0)}},K:function(t){var e=Ee();try{return gt(t)()}catch(t){if(Me(e),t!==t+0)throw t;Se(1,0)}},f:function(t,e){var n=Ee();try{return gt(t)(e)}catch(t){if(Me(n),t!==t+0)throw t;Se(1,0)}},P:function(t,e,n){var r=Ee();try{return gt(t)(e,n)}catch(t){if(Me(r),t!==t+0)throw t;Se(1,0)}},Q:function(t,e,n){var r=Ee();try{return gt(t)(e,n)}catch(t){if(Me(r),t!==t+0)throw t;Se(1,0)}},k:function(t,e,n){var r=Ee();try{return gt(t)(e,n)}catch(t){if(Me(r),t!==t+0)throw t;Se(1,0)}},p:function(t,e,n,r){var a=Ee();try{return gt(t)(e,n,r)}catch(t){if(Me(a),t!==t+0)throw t;Se(1,0)}},q:function(t,e,n,r,a){var i=Ee();try{return gt(t)(e,n,r,a)}catch(t){if(Me(i),t!==t+0)throw t;Se(1,0)}},N:function(t,e,n,r,a,i){var o=Ee();try{return gt(t)(e,n,r,a,i)}catch(t){if(Me(o),t!==t+0)throw t;Se(1,0)}},s:function(t,e,n,r,a,i){var o=Ee();try{return gt(t)(e,n,r,a,i)}catch(t){if(Me(o),t!==t+0)throw t;Se(1,0)}},w:function(t,e,n,r,a,i,o){var u=Ee();try{return gt(t)(e,n,r,a,i,o)}catch(t){if(Me(u),t!==t+0)throw t;Se(1,0)}},L:function(t,e,n,r,a,i,o,u){var c=Ee();try{return gt(t)(e,n,r,a,i,o,u)}catch(t){if(Me(c),t!==t+0)throw t;Se(1,0)}},E:function(t,e,n,r,a,i,o,u,c,s,l,f){var p=Ee();try{return gt(t)(e,n,r,a,i,o,u,c,s,l,f)}catch(t){if(Me(p),t!==t+0)throw t;Se(1,0)}},aa:function(t,e,n,r,a,i,o,u){var c=Ee();try{return He(t,e,n,r,a,i,o,u)}catch(t){if(Me(c),t!==t+0)throw t;Se(1,0)}},_:function(t,e,n,r,a,i,o){var u=Ee();try{return ke(t,e,n,r,a,i,o)}catch(t){if(Me(u),t!==t+0)throw t;Se(1,0)}},Z:function(t,e,n,r,a){var i=Ee();try{return Le(t,e,n,r,a)}catch(t){if(Me(i),t!==t+0)throw t;Se(1,0)}},ca:function(t,e,n,r){var a=Ee();try{return Ie(t,e,n,r)}catch(t){if(Me(a),t!==t+0)throw t;Se(1,0)}},$:function(t){var e=Ee();try{return je(t)}catch(t){if(Me(e),t!==t+0)throw t;Se(1,0)}},ba:function(t,e){var n=Ee();try{return We(t,e)}catch(t){if(Me(n),t!==t+0)throw t;Se(1,0)}},Y:function(t,e,n){var r=Ee();try{return De(t,e,n)}catch(t){if(Me(r),t!==t+0)throw t;Se(1,0)}},g:function(t){var e=Ee();try{gt(t)()}catch(t){if(Me(e),t!==t+0)throw t;Se(1,0)}},r:function(t,e){var n=Ee();try{gt(t)(e)}catch(t){if(Me(n),t!==t+0)throw t;Se(1,0)}},i:function(t,e,n){var r=Ee();try{gt(t)(e,n)}catch(t){if(Me(r),t!==t+0)throw t;Se(1,0)}},ha:function(t,e,n,r){var a=Ee();try{gt(t)(e,n,r)}catch(t){if(Me(a),t!==t+0)throw t;Se(1,0)}},m:function(t,e,n,r){var a=Ee();try{gt(t)(e,n,r)}catch(t){if(Me(a),t!==t+0)throw t;Se(1,0)}},v:function(t,e,n,r,a){var i=Ee();try{gt(t)(e,n,r,a)}catch(t){if(Me(i),t!==t+0)throw t;Se(1,0)}},u:function(t,e,n,r,a,i){var o=Ee();try{gt(t)(e,n,r,a,i)}catch(t){if(Me(o),t!==t+0)throw t;Se(1,0)}},O:function(t,e,n,r,a,i,o){var u=Ee();try{gt(t)(e,n,r,a,i,o)}catch(t){if(Me(u),t!==t+0)throw t;Se(1,0)}},A:function(t,e,n,r,a,i,o,u){var c=Ee();try{gt(t)(e,n,r,a,i,o,u)}catch(t){if(Me(c),t!==t+0)throw t;Se(1,0)}},ka:function(t,e,n,r,a,i,o,u,c){var s=Ee();try{gt(t)(e,n,r,a,i,o,u,c)}catch(t){if(Me(s),t!==t+0)throw t;Se(1,0)}},C:function(t,e,n,r,a,i,o,u,c,s,l){var f=Ee();try{gt(t)(e,n,r,a,i,o,u,c,s,l)}catch(t){if(Me(f),t!==t+0)throw t;Se(1,0)}},D:function(t,e,n,r,a,i,o,u,c,s,l,f,p,h,d,y){var b=Ee();try{gt(t)(e,n,r,a,i,o,u,c,s,l,f,p,h,d,y)}catch(t){if(Me(b),t!==t+0)throw t;Se(1,0)}},fa:function(t,e,n,r,a,i,o,u){var c=Ee();try{Pe(t,e,n,r,a,i,o,u)}catch(t){if(Me(c),t!==t+0)throw t;Se(1,0)}},da:function(t,e,n,r,a,i,o,u,c,s,l,f){var p=Ee();try{Fe(t,e,n,r,a,i,o,u,c,s,l,f)}catch(t){if(Me(p),t!==t+0)throw t;Se(1,0)}},ea:function(t,e,n,r,a,i){var o=Ee();try{Ue(t,e,n,r,a,i)}catch(t){if(Me(o),t!==t+0)throw 
t;Se(1,0)}},o:function(t){return t},a:j||u.wasmMemory,G:function(t){oe=t},la:le,z:function(t,e,n,r){return le(t,e,n,r)}};!function(){function t(t,e){u.asm=t.exports,ht.qc.push(u.asm.sb),$=u.asm.ub,X.unshift(u.asm.Va),k=e,O||(et--,u.monitorRunDependencies&&u.monitorRunDependencies(et),0==et&&(null!==nt&&(clearInterval(nt),nt=null),rt&&(t=rt,rt=null,t())))}function e(e){t(e.instance,e.module)}function n(t){return function(){if(!M&&(v||w)){if("function"==typeof fetch&&!tt.startsWith("file://"))return fetch(tt,{credentials:"same-origin"}).then((function(t){if(!t.ok)throw"failed to load wasm binary file at \'"+tt+"\'";return t.arrayBuffer()})).catch((function(){return ot()}));if(f)return new Promise((function(t,e){f(tt,(function(e){t(new Uint8Array(e))}),e)}))}return Promise.resolve().then((function(){return ot()}))}().then((function(t){return WebAssembly.instantiate(t,r)})).then((function(t){return t})).then(t,(function(t){x("failed to asynchronously prepare wasm: "+t),at(t)}))}var r={a:pe};if(O||(et++,u.monitorRunDependencies&&u.monitorRunDependencies(et)),u.instantiateWasm)try{return u.instantiateWasm(r,t)}catch(t){return x("Module.instantiateWasm callback failed with error: "+t),!1}(M||"function"!=typeof WebAssembly.instantiateStreaming||it()||tt.startsWith("file://")||_||"function"!=typeof fetch?n(e):fetch(tt,{credentials:"same-origin"}).then((function(t){return WebAssembly.instantiateStreaming(t,r).then(e,(function(t){return x("wasm streaming compile failed: "+t),x("falling back to ArrayBuffer instantiation"),n(e)}))}))).catch(s)}(),u.___wasm_call_ctors=function(){return(u.___wasm_call_ctors=u.asm.Va).apply(null,arguments)},u._OrtInit=function(){return(u._OrtInit=u.asm.Wa).apply(null,arguments)},u._OrtCreateSessionOptions=function(){return(u._OrtCreateSessionOptions=u.asm.Xa).apply(null,arguments)},u._OrtAppendExecutionProvider=function(){return(u._OrtAppendExecutionProvider=u.asm.Ya).apply(null,arguments)},u._OrtAddSessionConfigEntry=function(){return(u._OrtAddSessionConfigEntry=u.asm.Za).apply(null,arguments)},u._OrtReleaseSessionOptions=function(){return(u._OrtReleaseSessionOptions=u.asm._a).apply(null,arguments)},u._OrtCreateSession=function(){return(u._OrtCreateSession=u.asm.$a).apply(null,arguments)},u._OrtReleaseSession=function(){return(u._OrtReleaseSession=u.asm.ab).apply(null,arguments)},u._OrtGetInputCount=function(){return(u._OrtGetInputCount=u.asm.bb).apply(null,arguments)},u._OrtGetOutputCount=function(){return(u._OrtGetOutputCount=u.asm.cb).apply(null,arguments)},u._OrtGetInputName=function(){return(u._OrtGetInputName=u.asm.db).apply(null,arguments)},u._OrtGetOutputName=function(){return(u._OrtGetOutputName=u.asm.eb).apply(null,arguments)},u._OrtFree=function(){return(u._OrtFree=u.asm.fb).apply(null,arguments)},u._OrtCreateTensor=function(){return(u._OrtCreateTensor=u.asm.gb).apply(null,arguments)},u._OrtGetTensorData=function(){return(u._OrtGetTensorData=u.asm.hb).apply(null,arguments)},u._OrtReleaseTensor=function(){return(u._OrtReleaseTensor=u.asm.ib).apply(null,arguments)},u._OrtCreateRunOptions=function(){return(u._OrtCreateRunOptions=u.asm.jb).apply(null,arguments)},u._OrtAddRunConfigEntry=function(){return(u._OrtAddRunConfigEntry=u.asm.kb).apply(null,arguments)},u._OrtReleaseRunOptions=function(){return(u._OrtReleaseRunOptions=u.asm.lb).apply(null,arguments)},u._OrtRun=function(){return(u._OrtRun=u.asm.mb).apply(null,arguments)},u._OrtEndProfiling=function(){return(u._OrtEndProfiling=u.asm.nb).apply(null,arguments)};var 
he=u._pthread_self=function(){return(he=u._pthread_self=u.asm.ob).apply(null,arguments)},de=u._malloc=function(){return(de=u._malloc=u.asm.pb).apply(null,arguments)},ye=u._free=function(){return(ye=u._free=u.asm.qb).apply(null,arguments)},be=u._fflush=function(){return(be=u._fflush=u.asm.rb).apply(null,arguments)};u.__emscripten_tls_init=function(){return(u.__emscripten_tls_init=u.asm.sb).apply(null,arguments)};var me=u.___funcs_on_exit=function(){return(me=u.___funcs_on_exit=u.asm.tb).apply(null,arguments)},ge=u.__emscripten_thread_init=function(){return(ge=u.__emscripten_thread_init=u.asm.vb).apply(null,arguments)};u.__emscripten_thread_crashed=function(){return(u.__emscripten_thread_crashed=u.asm.wb).apply(null,arguments)};var ve,we=u._emscripten_run_in_main_runtime_thread_js=function(){return(we=u._emscripten_run_in_main_runtime_thread_js=u.asm.xb).apply(null,arguments)},_e=u.__emscripten_proxy_execute_task_queue=function(){return(_e=u.__emscripten_proxy_execute_task_queue=u.asm.yb).apply(null,arguments)},Oe=u.__emscripten_thread_free_data=function(){return(Oe=u.__emscripten_thread_free_data=u.asm.zb).apply(null,arguments)},Ae=u.__emscripten_thread_exit=function(){return(Ae=u.__emscripten_thread_exit=u.asm.Ab).apply(null,arguments)},Se=u._setThrew=function(){return(Se=u._setThrew=u.asm.Bb).apply(null,arguments)},Te=u._emscripten_stack_set_limits=function(){return(Te=u._emscripten_stack_set_limits=u.asm.Cb).apply(null,arguments)},Ee=u.stackSave=function(){return(Ee=u.stackSave=u.asm.Db).apply(null,arguments)},Me=u.stackRestore=function(){return(Me=u.stackRestore=u.asm.Eb).apply(null,arguments)},Ce=u.stackAlloc=function(){return(Ce=u.stackAlloc=u.asm.Fb).apply(null,arguments)},xe=u.___cxa_can_catch=function(){return(xe=u.___cxa_can_catch=u.asm.Gb).apply(null,arguments)},Re=u.___cxa_is_pointer_type=function(){return(Re=u.___cxa_is_pointer_type=u.asm.Hb).apply(null,arguments)},je=u.dynCall_j=function(){return(je=u.dynCall_j=u.asm.Ib).apply(null,arguments)},ke=u.dynCall_iiiiij=function(){return(ke=u.dynCall_iiiiij=u.asm.Jb).apply(null,arguments)},De=u.dynCall_jii=function(){return(De=u.dynCall_jii=u.asm.Kb).apply(null,arguments)},Pe=u.dynCall_viiiiij=function(){return(Pe=u.dynCall_viiiiij=u.asm.Lb).apply(null,arguments)},Ue=u.dynCall_vjji=function(){return(Ue=u.dynCall_vjji=u.asm.Mb).apply(null,arguments)},Fe=u.dynCall_viiijjjii=function(){return(Fe=u.dynCall_viiijjjii=u.asm.Nb).apply(null,arguments)},Ie=u.dynCall_iij=function(){return(Ie=u.dynCall_iij=u.asm.Ob).apply(null,arguments)},We=u.dynCall_ji=function(){return(We=u.dynCall_ji=u.asm.Pb).apply(null,arguments)},He=u.dynCall_iiiiiij=function(){return(He=u.dynCall_iiiiiij=u.asm.Qb).apply(null,arguments)},Le=u.dynCall_iiij=function(){return(Le=u.dynCall_iiij=u.asm.Rb).apply(null,arguments)};function ze(){function t(){if(!ve&&(ve=!0,u.calledRun=!0,!H)&&(O||dt(X),c(u),u.onRuntimeInitialized&&u.onRuntimeInitialized(),!O)){if(u.postRun)for("function"==typeof u.postRun&&(u.postRun=[u.postRun]);u.postRun.length;){var t=u.postRun.shift();Z.unshift(t)}dt(Z)}}if(!(0<et))if(O)c(u),O||dt(X),postMessage({cmd:"loaded"});else{if(u.preRun)for("function"==typeof u.preRun&&(u.preRun=[u.preRun]);u.preRun.length;)K();dt(q),0<et||(u.setStatus?(u.setStatus("Running..."),setTimeout((function(){setTimeout((function(){u.setStatus("")}),1),t()}),1)):t())}}if(u.UTF8ToString=Y,u.stringToUTF8=function(t,e,n){return 
B(t,r(),e,n)},u.lengthBytesUTF8=G,u.keepRuntimeAlive=Q,u.wasmMemory=j,u.stackSave=Ee,u.stackRestore=Me,u.stackAlloc=Ce,u.ExitStatus=ct,u.PThread=ht,rt=function t(){ve||ze(),ve||(rt=t)},u.preInit)for("function"==typeof u.preInit&&(u.preInit=[u.preInit]);0<u.preInit.length;)u.preInit.pop()();return ze(),t.ready});t.exports=r},932:(t,e,n)=>{var _scriptDir,r=(_scriptDir=(_scriptDir="undefined"!=typeof document&&document.currentScript?document.currentScript.src:void 0)||"/index.js",function(t){var e,r,a;t=t||{},e||(e=void 0!==t?t:{}),e.ready=new Promise((function(t,e){r=t,a=e}));var i,o,u,c,s,l,f=Object.assign({},e),p="./this.program",h=(t,e)=>{throw e},d="object"==typeof window,y="function"==typeof importScripts,b="object"==typeof process&&"object"==typeof process.versions&&"string"==typeof process.versions.node,m="";b?(m=y?n(908).dirname(m)+"/":"//",l=()=>{s||(c=n(384),s=n(908))},i=function(t,e){return l(),t=s.normalize(t),c.readFileSync(t,e?void 0:"utf8")},u=t=>((t=i(t,!0)).buffer||(t=new Uint8Array(t)),t),o=(t,e,n)=>{l(),t=s.normalize(t),c.readFile(t,(function(t,r){t?n(t):e(r.buffer)}))},1<process.argv.length&&(p=process.argv[1].replace(/\\\\/g,"/")),process.argv.slice(2),process.on("uncaughtException",(function(t){if(!(t instanceof J))throw t})),process.on("unhandledRejection",(function(t){throw t})),h=(t,e)=>{if(_||0<L)throw process.exitCode=t,e;e instanceof J||w("exiting due to exception: "+e),process.exit(t)},e.inspect=function(){return"[Emscripten Module object]"}):(d||y)&&(y?m=self.location.href:"undefined"!=typeof document&&document.currentScript&&(m=document.currentScript.src),_scriptDir&&(m=_scriptDir),m=0!==m.indexOf("blob:")?m.substr(0,m.replace(/[?#].*/,"").lastIndexOf("/")+1):"",i=t=>{var e=new XMLHttpRequest;return e.open("GET",t,!1),e.send(null),e.responseText},y&&(u=t=>{var e=new XMLHttpRequest;return e.open("GET",t,!1),e.responseType="arraybuffer",e.send(null),new Uint8Array(e.response)}),o=(t,e,n)=>{var r=new XMLHttpRequest;r.open("GET",t,!0),r.responseType="arraybuffer",r.onload=()=>{200==r.status||0==r.status&&r.response?e(r.response):n()},r.onerror=n,r.send(null)});var g,v=e.print||console.log.bind(console),w=e.printErr||console.warn.bind(console);Object.assign(e,f),f=null,e.thisProgram&&(p=e.thisProgram),e.quit&&(h=e.quit),e.wasmBinary&&(g=e.wasmBinary);var _=e.noExitRuntime||!1;"object"!=typeof WebAssembly&&V("no native wasm support detected");var O,A,S,T,E,M,C=!1,x="undefined"!=typeof TextDecoder?new TextDecoder("utf8"):void 0;function R(t,e,n){var r=(e>>>=0)+n;for(n=e;t[n]&&!(n>=r);)++n;if(16<n-e&&t.buffer&&x)return x.decode(t.subarray(e,n));for(r="";e<n;){var a=t[e++];if(128&a){var i=63&t[e++];if(192==(224&a))r+=String.fromCharCode((31&a)<<6|i);else{var o=63&t[e++];65536>(a=224==(240&a)?(15&a)<<12|i<<6|o:(7&a)<<18|i<<12|o<<6|63&t[e++])?r+=String.fromCharCode(a):(a-=65536,r+=String.fromCharCode(55296|a>>10,56320|1023&a))}}else r+=String.fromCharCode(a)}return r}function j(t,e){return(t>>>=0)?R(T,t,e):""}function k(t,e,n,r){if(!(0<r))return 0;var a=n>>>=0;r=n+r-1;for(var i=0;i<t.length;++i){var o=t.charCodeAt(i);if(55296<=o&&57343>=o&&(o=65536+((1023&o)<<10)|1023&t.charCodeAt(++i)),127>=o){if(n>=r)break;e[n++>>>0]=o}else{if(2047>=o){if(n+1>=r)break;e[n++>>>0]=192|o>>6}else{if(65535>=o){if(n+2>=r)break;e[n++>>>0]=224|o>>12}else{if(n+3>=r)break;e[n++>>>0]=240|o>>18,e[n++>>>0]=128|o>>12&63}e[n++>>>0]=128|o>>6&63}e[n++>>>0]=128|63&o}}return e[n>>>0]=0,n-a}function D(t){for(var e=0,n=0;n<t.length;++n){var 
r=t.charCodeAt(n);127>=r?e++:2047>=r?e+=2:55296<=r&&57343>=r?(e+=4,++n):e+=3}return e}function P(){var t=O.buffer;A=t,e.HEAP8=S=new Int8Array(t),e.HEAP16=new Int16Array(t),e.HEAP32=E=new Int32Array(t),e.HEAPU8=T=new Uint8Array(t),e.HEAPU16=new Uint16Array(t),e.HEAPU32=M=new Uint32Array(t),e.HEAPF32=new Float32Array(t),e.HEAPF64=new Float64Array(t)}var U,F=[],I=[],W=[],H=[],L=0;function z(){var t=e.preRun.shift();F.unshift(t)}var Y,B=0,G=null,N=null;function V(t){throw e.onAbort&&e.onAbort(t),w(t="Aborted("+t+")"),C=!0,t=new WebAssembly.RuntimeError(t+". Build with -sASSERTIONS for more info."),a(t),t}function $(){return Y.startsWith("data:application/octet-stream;base64,")}if(Y="ort-wasm.wasm",!$()){var q=Y;Y=e.locateFile?e.locateFile(q,m):m+q}function X(){var t=Y;try{if(t==Y&&g)return new Uint8Array(g);if(u)return u(t);throw"both async and sync fetching of the wasm failed"}catch(t){V(t)}}function J(t){this.name="ExitStatus",this.message="Program terminated with exit("+t+")",this.status=t}function Z(t){for(;0<t.length;)t.shift()(e)}var Q=[],K=0,tt=0;function et(t){this.Db=t,this.zb=t-24,this.Ub=function(t){M[this.zb+4>>2>>>0]=t},this.Eb=function(){return M[this.zb+4>>2>>>0]},this.Sb=function(t){M[this.zb+8>>2>>>0]=t},this.Wb=function(){return M[this.zb+8>>2>>>0]},this.Tb=function(){E[this.zb>>2>>>0]=0},this.Ib=function(t){S[this.zb+12>>0>>>0]=t?1:0},this.Pb=function(){return 0!=S[this.zb+12>>0>>>0]},this.Jb=function(t){S[this.zb+13>>0>>>0]=t?1:0},this.Lb=function(){return 0!=S[this.zb+13>>0>>>0]},this.Rb=function(t,e){this.Fb(0),this.Ub(t),this.Sb(e),this.Tb(),this.Ib(!1),this.Jb(!1)},this.Nb=function(){E[this.zb>>2>>>0]+=1},this.Xb=function(){var t=E[this.zb>>2>>>0];return E[this.zb>>2>>>0]=t-1,1===t},this.Fb=function(t){M[this.zb+16>>2>>>0]=t},this.Ob=function(){return M[this.zb+16>>2>>>0]},this.Qb=function(){if(Mt(this.Eb()))return M[this.Db>>2>>>0];var t=this.Ob();return 0!==t?t:this.Db}}function nt(t){return vt(new et(t).zb)}var rt=[];function at(t){var e=rt[t];return e||(t>=rt.length&&(rt.length=t+1),rt[t]=e=U.get(t)),e}function it(t){var e=D(t)+1,n=gt(e);return n&&k(t,S,n,e),n}var ot={};function ut(){if(!ct){var t,e={USER:"web_user",LOGNAME:"web_user",PATH:"/",PWD:"/",HOME:"/home/web_user",LANG:("object"==typeof navigator&&navigator.languages&&navigator.languages[0]||"C").replace("-","_")+".UTF-8",_:p||"./this.program"};for(t in ot)void 0===ot[t]?delete e[t]:e[t]=ot[t];var n=[];for(t in e)n.push(t+"="+e[t]);ct=n}return ct}var ct,st=[null,[],[]];function lt(t,e){var n=st[t];0===e||10===e?((1===t?v:w)(R(n,0)),n.length=0):n.push(e)}var ft=0;function pt(t){return 0==t%4&&(0!=t%100||0==t%400)}var ht=[31,29,31,30,31,30,31,31,30,31,30,31],dt=[31,28,31,30,31,30,31,31,30,31,30,31];function yt(t,e,n,r){function a(t,e,n){for(t="number"==typeof t?t.toString():t||"";t.length<e;)t=n[0]+t;return t}function i(t,e){return a(t,e,"0")}function o(t,e){function n(t){return 0>t?-1:0<t?1:0}var r;return 0===(r=n(t.getFullYear()-e.getFullYear()))&&0===(r=n(t.getMonth()-e.getMonth()))&&(r=n(t.getDate()-e.getDate())),r}function u(t){switch(t.getDay()){case 0:return new Date(t.getFullYear()-1,11,29);case 1:return t;case 2:return new Date(t.getFullYear(),0,3);case 3:return new Date(t.getFullYear(),0,2);case 4:return new Date(t.getFullYear(),0,1);case 5:return new Date(t.getFullYear()-1,11,31);case 6:return new Date(t.getFullYear()-1,11,30)}}function c(t){var e=t.Bb;for(t=new Date(new Date(t.Cb+1900,0,1).getTime());0<e;){var 
n=t.getMonth(),r=(pt(t.getFullYear())?ht:dt)[n];if(!(e>r-t.getDate())){t.setDate(t.getDate()+e);break}e-=r-t.getDate()+1,t.setDate(1),11>n?t.setMonth(n+1):(t.setMonth(0),t.setFullYear(t.getFullYear()+1))}return n=new Date(t.getFullYear()+1,0,4),e=u(new Date(t.getFullYear(),0,4)),n=u(n),0>=o(e,t)?0>=o(n,t)?t.getFullYear()+1:t.getFullYear():t.getFullYear()-1}var s=E[r+40>>2>>>0];for(var l in r={$b:E[r>>2>>>0],Zb:E[r+4>>2>>>0],Gb:E[r+8>>2>>>0],Kb:E[r+12>>2>>>0],Hb:E[r+16>>2>>>0],Cb:E[r+20>>2>>>0],Ab:E[r+24>>2>>>0],Bb:E[r+28>>2>>>0],bc:E[r+32>>2>>>0],Yb:E[r+36>>2>>>0],ac:s?j(s):""},n=j(n),s={"%c":"%a %b %d %H:%M:%S %Y","%D":"%m/%d/%y","%F":"%Y-%m-%d","%h":"%b","%r":"%I:%M:%S %p","%R":"%H:%M","%T":"%H:%M:%S","%x":"%m/%d/%y","%X":"%H:%M:%S","%Ec":"%c","%EC":"%C","%Ex":"%m/%d/%y","%EX":"%H:%M:%S","%Ey":"%y","%EY":"%Y","%Od":"%d","%Oe":"%e","%OH":"%H","%OI":"%I","%Om":"%m","%OM":"%M","%OS":"%S","%Ou":"%u","%OU":"%U","%OV":"%V","%Ow":"%w","%OW":"%W","%Oy":"%y"})n=n.replace(new RegExp(l,"g"),s[l]);var f="Sunday Monday Tuesday Wednesday Thursday Friday Saturday".split(" "),p="January February March April May June July August September October November December".split(" ");for(l in s={"%a":function(t){return f[t.Ab].substring(0,3)},"%A":function(t){return f[t.Ab]},"%b":function(t){return p[t.Hb].substring(0,3)},"%B":function(t){return p[t.Hb]},"%C":function(t){return i((t.Cb+1900)/100|0,2)},"%d":function(t){return i(t.Kb,2)},"%e":function(t){return a(t.Kb,2," ")},"%g":function(t){return c(t).toString().substring(2)},"%G":function(t){return c(t)},"%H":function(t){return i(t.Gb,2)},"%I":function(t){return 0==(t=t.Gb)?t=12:12<t&&(t-=12),i(t,2)},"%j":function(t){for(var e=0,n=0;n<=t.Hb-1;e+=(pt(t.Cb+1900)?ht:dt)[n++]);return i(t.Kb+e,3)},"%m":function(t){return i(t.Hb+1,2)},"%M":function(t){return i(t.Zb,2)},"%n":function(){return"\\n"},"%p":function(t){return 0<=t.Gb&&12>t.Gb?"AM":"PM"},"%S":function(t){return i(t.$b,2)},"%t":function(){return"\\t"},"%u":function(t){return t.Ab||7},"%U":function(t){return i(Math.floor((t.Bb+7-t.Ab)/7),2)},"%V":function(t){var e=Math.floor((t.Bb+7-(t.Ab+6)%7)/7);if(2>=(t.Ab+371-t.Bb-2)%7&&e++,e)53==e&&(4==(n=(t.Ab+371-t.Bb)%7)||3==n&&pt(t.Cb)||(e=1));else{e=52;var n=(t.Ab+7-t.Bb-1)%7;(4==n||5==n&&pt(t.Cb%400-1))&&e++}return i(e,2)},"%w":function(t){return t.Ab},"%W":function(t){return i(Math.floor((t.Bb+7-(t.Ab+6)%7)/7),2)},"%y":function(t){return(t.Cb+1900).toString().substring(2)},"%Y":function(t){return t.Cb+1900},"%z":function(t){var e=0<=(t=t.Yb);return t=Math.abs(t)/60,(e?"+":"-")+String("0000"+(t/60*100+t%60)).slice(-4)},"%Z":function(t){return t.ac},"%%":function(){return"%"}},n=n.replace(/%%/g,"\\0\\0"),s)n.includes(l)&&(n=n.replace(new RegExp(l,"g"),s[l](r)));return l=function(t){var e=Array(D(t)+1);return k(t,e,0,e.length),e}(n=n.replace(/\\0\\0/g,"%")),l.length>e?0:(S.set(l,t>>>0),l.length-1)}var bt={a:function(t){return gt(t+24)+24},m:function(t){return(t=new et(t)).Pb()||(t.Ib(!0),K--),t.Jb(!1),Q.push(t),t.Nb(),t.Qb()},ia:function(t){throw w("Unexpected exception thrown, this is not properly supported - aborting"),C=!0,t},w:function(){Ot(0);var t=Q.pop();if(t.Xb()&&!t.Lb()){var e=t.Wb();e&&at(e)(t.Db),nt(t.Db)}tt=0},d:function(){var t=tt;if(!t)return ft=0;var e=new et(t);e.Fb(t);var n=e.Eb();if(!n)return ft=0,t;for(var r=Array.prototype.slice.call(arguments),a=0;a<r.length;a++){var i=r[a];if(0===i||i===n)break;if(Et(i,n,e.zb+16))return ft=i,t}return ft=n,t},k:function(){var t=tt;if(!t)return ft=0;var e=new et(t);e.Fb(t);var n=e.Eb();if(!n)return 
ft=0,t;for(var r=Array.prototype.slice.call(arguments),a=0;a<r.length;a++){var i=r[a];if(0===i||i===n)break;if(Et(i,n,e.zb+16))return ft=i,t}return ft=n,t},g:function(){var t=tt;if(!t)return ft=0;var e=new et(t);e.Fb(t);var n=e.Eb();if(!n)return ft=0,t;for(var r=Array.prototype.slice.call(arguments),a=0;a<r.length;a++){var i=r[a];if(0===i||i===n)break;if(Et(i,n,e.zb+16))return ft=i,t}return ft=n,t},s:nt,L:function(){var t=Q.pop();t||V("no exception to throw");var e=t.Db;throw t.Lb()||(Q.push(t),t.Jb(!0),t.Ib(!1),K++),tt=e,e},b:function(t,e,n){throw new et(t).Rb(e,n),tt=t,K++,t},la:function(){return K},i:function(t){throw tt||(tt=t),t},H:function(){return 0},Ba:function(){},pa:function(){},ra:function(){},ka:function(){return 0},za:function(){},ua:function(){},ya:function(){},R:function(){},qa:function(){},na:function(){},Aa:function(){},oa:function(){},Ha:function(){},Ja:function(){V("To use dlopen, you need enable dynamic linking, see https://github.com/emscripten-core/emscripten/wiki/Linking")},Ia:function(){V("To use dlopen, you need enable dynamic linking, see https://github.com/emscripten-core/emscripten/wiki/Linking")},S:function(){return Date.now()},Ca:function(){return!0},Da:function(t,e){t=new Date(1e3*(M[t>>>2]+4294967296*E[t+4>>>2])),E[e>>2>>>0]=t.getUTCSeconds(),E[e+4>>2>>>0]=t.getUTCMinutes(),E[e+8>>2>>>0]=t.getUTCHours(),E[e+12>>2>>>0]=t.getUTCDate(),E[e+16>>2>>>0]=t.getUTCMonth(),E[e+20>>2>>>0]=t.getUTCFullYear()-1900,E[e+24>>2>>>0]=t.getUTCDay(),E[e+28>>2>>>0]=(t.getTime()-Date.UTC(t.getUTCFullYear(),0,1,0,0,0,0))/864e5|0},Ea:function(t,e){t=new Date(1e3*(M[t>>>2]+4294967296*E[t+4>>>2])),E[e>>2>>>0]=t.getSeconds(),E[e+4>>2>>>0]=t.getMinutes(),E[e+8>>2>>>0]=t.getHours(),E[e+12>>2>>>0]=t.getDate(),E[e+16>>2>>>0]=t.getMonth(),E[e+20>>2>>>0]=t.getFullYear()-1900,E[e+24>>2>>>0]=t.getDay();var n=new Date(t.getFullYear(),0,1);E[e+28>>2>>>0]=(t.getTime()-n.getTime())/864e5|0,E[e+36>>2>>>0]=-60*t.getTimezoneOffset();var r=new Date(t.getFullYear(),6,1).getTimezoneOffset();n=n.getTimezoneOffset(),E[e+32>>2>>>0]=0|(r!=n&&t.getTimezoneOffset()==Math.min(n,r))},Fa:function(t){var e=new Date(E[t+20>>2>>>0]+1900,E[t+16>>2>>>0],E[t+12>>2>>>0],E[t+8>>2>>>0],E[t+4>>2>>>0],E[t>>2>>>0],0),n=E[t+32>>2>>>0],r=e.getTimezoneOffset(),a=new Date(e.getFullYear(),0,1),i=new Date(e.getFullYear(),6,1).getTimezoneOffset(),o=a.getTimezoneOffset(),u=Math.min(o,i);return 0>n?E[t+32>>2>>>0]=Number(i!=o&&u==r):0<n!=(u==r)&&(i=Math.max(o,i),e.setTime(e.getTime()+6e4*((0<n?u:i)-r))),E[t+24>>2>>>0]=e.getDay(),E[t+28>>2>>>0]=(e.getTime()-a.getTime())/864e5|0,E[t>>2>>>0]=e.getSeconds(),E[t+4>>2>>>0]=e.getMinutes(),E[t+8>>2>>>0]=e.getHours(),E[t+12>>2>>>0]=e.getDate(),E[t+16>>2>>>0]=e.getMonth(),e.getTime()/1e3|0},sa:function(){return-52},ta:function(){},Ga:function t(e,n,r){t.Vb||(t.Vb=!0,function(t,e,n){function r(t){return(t=t.toTimeString().match(/\\(([A-Za-z ]+)\\)$/))?t[1]:"GMT"}var a=(new Date).getFullYear(),i=new Date(a,0,1),o=new Date(a,6,1);a=i.getTimezoneOffset();var u=o.getTimezoneOffset();E[t>>2>>>0]=60*Math.max(a,u),E[e>>2>>>0]=Number(a!=u),t=r(i),e=r(o),t=it(t),e=it(e),u<a?(M[n>>2>>>0]=t,M[n+4>>2>>>0]=e):(M[n>>2>>>0]=e,M[n+4>>2>>>0]=t)}(e,n,r))},B:function(){V("")},ma:function(){return 4294901760},I:b?()=>{var t=process.hrtime();return 1e3*t[0]+t[1]/1e6}:()=>performance.now(),xa:function(t,e,n){T.copyWithin(t>>>0,e>>>0,e+n>>>0)},G:function(t){var e=T.length;if(4294901760<(t>>>=0))return!1;for(var n=1;4>=n;n*=2){var r=e*(1+.2/n);r=Math.min(r,t+100663296);var 
a=Math;r=Math.max(t,r),a=a.min.call(a,4294901760,r+(65536-r%65536)%65536);t:{try{O.grow(a-A.byteLength+65535>>>16),P();var i=1;break t}catch(t){}i=void 0}if(i)return!0}return!1},va:function(t,e){var n=0;return ut().forEach((function(r,a){var i=e+n;for(a=M[t+4*a>>2>>>0]=i,i=0;i<r.length;++i)S[a++>>0>>>0]=r.charCodeAt(i);S[a>>0>>>0]=0,n+=r.length+1})),0},wa:function(t,e){var n=ut();M[t>>2>>>0]=n.length;var r=0;return n.forEach((function(t){r+=t.length+1})),M[e>>2>>>0]=r,0},ba:function(t){_||0<L||(_t(),Z(W),wt(0),st[1].length&<(1,10),st[2].length&<(2,10)),_||0<L||(e.onExit&&e.onExit(t),C=!0),h(t,new J(t))},E:function(){return 52},Q:function(){return 52},ca:function(){return 70},P:function(t,e,n,r){for(var a=0,i=0;i<n;i++){var o=M[e>>2>>>0],u=M[e+4>>2>>>0];e+=8;for(var c=0;c<u;c++)lt(t,T[o+c>>>0]);a+=u}return M[r>>2>>>0]=a,0},c:function(){return ft},ja:function t(e,r){t.Mb||(t.Mb=function(){if("object"==typeof crypto&&"function"==typeof crypto.getRandomValues){var t=new Uint8Array(1);return()=>(crypto.getRandomValues(t),t[0])}if(b)try{var e=n(Object(function(){var t=new Error("Cannot find module \'crypto\'");throw t.code="MODULE_NOT_FOUND",t}()));return()=>e.randomBytes(1)[0]}catch(t){}return()=>V("randomDevice")}());for(var a=0;a<r;a++)S[e+a>>0>>>0]=t.Mb();return 0},ea:function(t,e,n){var r=At();try{return at(t)(e,n)}catch(t){if(St(r),t!==t+0)throw t;Ot(1,0)}},fa:function(t,e,n){var r=At();try{return at(t)(e,n)}catch(t){if(St(r),t!==t+0)throw t;Ot(1,0)}},J:function(t){var e=At();try{return at(t)()}catch(t){if(St(e),t!==t+0)throw t;Ot(1,0)}},e:function(t,e){var n=At();try{return at(t)(e)}catch(t){if(St(n),t!==t+0)throw t;Ot(1,0)}},N:function(t,e,n){var r=At();try{return at(t)(e,n)}catch(t){if(St(r),t!==t+0)throw t;Ot(1,0)}},O:function(t,e,n){var r=At();try{return at(t)(e,n)}catch(t){if(St(r),t!==t+0)throw t;Ot(1,0)}},j:function(t,e,n){var r=At();try{return at(t)(e,n)}catch(t){if(St(r),t!==t+0)throw t;Ot(1,0)}},o:function(t,e,n,r){var a=At();try{return at(t)(e,n,r)}catch(t){if(St(a),t!==t+0)throw t;Ot(1,0)}},p:function(t,e,n,r,a){var i=At();try{return at(t)(e,n,r,a)}catch(t){if(St(i),t!==t+0)throw t;Ot(1,0)}},M:function(t,e,n,r,a,i){var o=At();try{return at(t)(e,n,r,a,i)}catch(t){if(St(o),t!==t+0)throw t;Ot(1,0)}},r:function(t,e,n,r,a,i){var o=At();try{return at(t)(e,n,r,a,i)}catch(t){if(St(o),t!==t+0)throw t;Ot(1,0)}},v:function(t,e,n,r,a,i,o){var u=At();try{return at(t)(e,n,r,a,i,o)}catch(t){if(St(u),t!==t+0)throw t;Ot(1,0)}},K:function(t,e,n,r,a,i,o,u){var c=At();try{return at(t)(e,n,r,a,i,o,u)}catch(t){if(St(c),t!==t+0)throw t;Ot(1,0)}},D:function(t,e,n,r,a,i,o,u,c,s,l,f){var p=At();try{return at(t)(e,n,r,a,i,o,u,c,s,l,f)}catch(t){if(St(p),t!==t+0)throw t;Ot(1,0)}},X:function(t,e,n,r,a,i,o,u){var c=At();try{return Ft(t,e,n,r,a,i,o,u)}catch(t){if(St(c),t!==t+0)throw t;Ot(1,0)}},V:function(t,e,n,r,a,i,o){var u=At();try{return xt(t,e,n,r,a,i,o)}catch(t){if(St(u),t!==t+0)throw t;Ot(1,0)}},U:function(t,e,n,r,a){var i=At();try{return It(t,e,n,r,a)}catch(t){if(St(i),t!==t+0)throw t;Ot(1,0)}},Z:function(t,e,n,r){var a=At();try{return Pt(t,e,n,r)}catch(t){if(St(a),t!==t+0)throw t;Ot(1,0)}},W:function(t){var e=At();try{return Ct(t)}catch(t){if(St(e),t!==t+0)throw t;Ot(1,0)}},Y:function(t,e){var n=At();try{return Ut(t,e)}catch(t){if(St(n),t!==t+0)throw t;Ot(1,0)}},T:function(t,e,n){var r=At();try{return Rt(t,e,n)}catch(t){if(St(r),t!==t+0)throw t;Ot(1,0)}},f:function(t){var e=At();try{at(t)()}catch(t){if(St(e),t!==t+0)throw t;Ot(1,0)}},q:function(t,e){var 
n=At();try{at(t)(e)}catch(t){if(St(n),t!==t+0)throw t;Ot(1,0)}},h:function(t,e,n){var r=At();try{at(t)(e,n)}catch(t){if(St(r),t!==t+0)throw t;Ot(1,0)}},da:function(t,e,n,r){var a=At();try{at(t)(e,n,r)}catch(t){if(St(a),t!==t+0)throw t;Ot(1,0)}},l:function(t,e,n,r){var a=At();try{at(t)(e,n,r)}catch(t){if(St(a),t!==t+0)throw t;Ot(1,0)}},t:function(t,e,n,r,a){var i=At();try{at(t)(e,n,r,a)}catch(t){if(St(i),t!==t+0)throw t;Ot(1,0)}},u:function(t,e,n,r,a,i){var o=At();try{at(t)(e,n,r,a,i)}catch(t){if(St(o),t!==t+0)throw t;Ot(1,0)}},x:function(t,e,n,r,a,i,o){var u=At();try{at(t)(e,n,r,a,i,o)}catch(t){if(St(u),t!==t+0)throw t;Ot(1,0)}},z:function(t,e,n,r,a,i,o,u){var c=At();try{at(t)(e,n,r,a,i,o,u)}catch(t){if(St(c),t!==t+0)throw t;Ot(1,0)}},ga:function(t,e,n,r,a,i,o,u,c){var s=At();try{at(t)(e,n,r,a,i,o,u,c)}catch(t){if(St(s),t!==t+0)throw t;Ot(1,0)}},A:function(t,e,n,r,a,i,o,u,c,s,l){var f=At();try{at(t)(e,n,r,a,i,o,u,c,s,l)}catch(t){if(St(f),t!==t+0)throw t;Ot(1,0)}},C:function(t,e,n,r,a,i,o,u,c,s,l,f,p,h,d,y){var b=At();try{at(t)(e,n,r,a,i,o,u,c,s,l,f,p,h,d,y)}catch(t){if(St(b),t!==t+0)throw t;Ot(1,0)}},aa:function(t,e,n,r,a,i,o,u){var c=At();try{jt(t,e,n,r,a,i,o,u)}catch(t){if(St(c),t!==t+0)throw t;Ot(1,0)}},_:function(t,e,n,r,a,i,o,u,c,s,l,f){var p=At();try{Dt(t,e,n,r,a,i,o,u,c,s,l,f)}catch(t){if(St(p),t!==t+0)throw t;Ot(1,0)}},$:function(t,e,n,r,a,i){var o=At();try{kt(t,e,n,r,a,i)}catch(t){if(St(o),t!==t+0)throw t;Ot(1,0)}},n:function(t){return t},F:function(t){ft=t},ha:yt,y:function(t,e,n,r){return yt(t,e,n,r)}};!function(){function t(t){e.asm=t.exports,O=e.asm.Ka,P(),U=e.asm.ib,I.unshift(e.asm.La),B--,e.monitorRunDependencies&&e.monitorRunDependencies(B),0==B&&(null!==G&&(clearInterval(G),G=null),N&&(t=N,N=null,t()))}function n(e){t(e.instance)}function r(t){return function(){if(!g&&(d||y)){if("function"==typeof fetch&&!Y.startsWith("file://"))return fetch(Y,{credentials:"same-origin"}).then((function(t){if(!t.ok)throw"failed to load wasm binary file at \'"+Y+"\'";return t.arrayBuffer()})).catch((function(){return X()}));if(o)return new Promise((function(t,e){o(Y,(function(e){t(new Uint8Array(e))}),e)}))}return Promise.resolve().then((function(){return X()}))}().then((function(t){return WebAssembly.instantiate(t,i)})).then((function(t){return t})).then(t,(function(t){w("failed to asynchronously prepare wasm: "+t),V(t)}))}var i={a:bt};if(B++,e.monitorRunDependencies&&e.monitorRunDependencies(B),e.instantiateWasm)try{return e.instantiateWasm(i,t)}catch(t){return w("Module.instantiateWasm callback failed with error: "+t),!1}(g||"function"!=typeof WebAssembly.instantiateStreaming||$()||Y.startsWith("file://")||b||"function"!=typeof fetch?r(n):fetch(Y,{credentials:"same-origin"}).then((function(t){return WebAssembly.instantiateStreaming(t,i).then(n,(function(t){return w("wasm streaming compile failed: "+t),w("falling back to ArrayBuffer 
instantiation"),r(n)}))}))).catch(a)}(),e.___wasm_call_ctors=function(){return(e.___wasm_call_ctors=e.asm.La).apply(null,arguments)},e._OrtInit=function(){return(e._OrtInit=e.asm.Ma).apply(null,arguments)},e._OrtCreateSessionOptions=function(){return(e._OrtCreateSessionOptions=e.asm.Na).apply(null,arguments)},e._OrtAppendExecutionProvider=function(){return(e._OrtAppendExecutionProvider=e.asm.Oa).apply(null,arguments)},e._OrtAddSessionConfigEntry=function(){return(e._OrtAddSessionConfigEntry=e.asm.Pa).apply(null,arguments)},e._OrtReleaseSessionOptions=function(){return(e._OrtReleaseSessionOptions=e.asm.Qa).apply(null,arguments)},e._OrtCreateSession=function(){return(e._OrtCreateSession=e.asm.Ra).apply(null,arguments)},e._OrtReleaseSession=function(){return(e._OrtReleaseSession=e.asm.Sa).apply(null,arguments)},e._OrtGetInputCount=function(){return(e._OrtGetInputCount=e.asm.Ta).apply(null,arguments)},e._OrtGetOutputCount=function(){return(e._OrtGetOutputCount=e.asm.Ua).apply(null,arguments)},e._OrtGetInputName=function(){return(e._OrtGetInputName=e.asm.Va).apply(null,arguments)},e._OrtGetOutputName=function(){return(e._OrtGetOutputName=e.asm.Wa).apply(null,arguments)},e._OrtFree=function(){return(e._OrtFree=e.asm.Xa).apply(null,arguments)},e._OrtCreateTensor=function(){return(e._OrtCreateTensor=e.asm.Ya).apply(null,arguments)},e._OrtGetTensorData=function(){return(e._OrtGetTensorData=e.asm.Za).apply(null,arguments)},e._OrtReleaseTensor=function(){return(e._OrtReleaseTensor=e.asm._a).apply(null,arguments)},e._OrtCreateRunOptions=function(){return(e._OrtCreateRunOptions=e.asm.$a).apply(null,arguments)},e._OrtAddRunConfigEntry=function(){return(e._OrtAddRunConfigEntry=e.asm.ab).apply(null,arguments)},e._OrtReleaseRunOptions=function(){return(e._OrtReleaseRunOptions=e.asm.bb).apply(null,arguments)},e._OrtRun=function(){return(e._OrtRun=e.asm.cb).apply(null,arguments)},e._OrtEndProfiling=function(){return(e._OrtEndProfiling=e.asm.db).apply(null,arguments)};var 
mt,gt=e._malloc=function(){return(gt=e._malloc=e.asm.eb).apply(null,arguments)},vt=e._free=function(){return(vt=e._free=e.asm.fb).apply(null,arguments)},wt=e._fflush=function(){return(wt=e._fflush=e.asm.gb).apply(null,arguments)},_t=e.___funcs_on_exit=function(){return(_t=e.___funcs_on_exit=e.asm.hb).apply(null,arguments)},Ot=e._setThrew=function(){return(Ot=e._setThrew=e.asm.jb).apply(null,arguments)},At=e.stackSave=function(){return(At=e.stackSave=e.asm.kb).apply(null,arguments)},St=e.stackRestore=function(){return(St=e.stackRestore=e.asm.lb).apply(null,arguments)},Tt=e.stackAlloc=function(){return(Tt=e.stackAlloc=e.asm.mb).apply(null,arguments)},Et=e.___cxa_can_catch=function(){return(Et=e.___cxa_can_catch=e.asm.nb).apply(null,arguments)},Mt=e.___cxa_is_pointer_type=function(){return(Mt=e.___cxa_is_pointer_type=e.asm.ob).apply(null,arguments)},Ct=e.dynCall_j=function(){return(Ct=e.dynCall_j=e.asm.pb).apply(null,arguments)},xt=e.dynCall_iiiiij=function(){return(xt=e.dynCall_iiiiij=e.asm.qb).apply(null,arguments)},Rt=e.dynCall_jii=function(){return(Rt=e.dynCall_jii=e.asm.rb).apply(null,arguments)},jt=e.dynCall_viiiiij=function(){return(jt=e.dynCall_viiiiij=e.asm.sb).apply(null,arguments)},kt=e.dynCall_vjji=function(){return(kt=e.dynCall_vjji=e.asm.tb).apply(null,arguments)},Dt=e.dynCall_viiijjjii=function(){return(Dt=e.dynCall_viiijjjii=e.asm.ub).apply(null,arguments)},Pt=e.dynCall_iij=function(){return(Pt=e.dynCall_iij=e.asm.vb).apply(null,arguments)},Ut=e.dynCall_ji=function(){return(Ut=e.dynCall_ji=e.asm.wb).apply(null,arguments)},Ft=e.dynCall_iiiiiij=function(){return(Ft=e.dynCall_iiiiiij=e.asm.xb).apply(null,arguments)},It=e.dynCall_iiij=function(){return(It=e.dynCall_iiij=e.asm.yb).apply(null,arguments)};function Wt(){function t(){if(!mt&&(mt=!0,e.calledRun=!0,!C)){if(Z(I),r(e),e.onRuntimeInitialized&&e.onRuntimeInitialized(),e.postRun)for("function"==typeof e.postRun&&(e.postRun=[e.postRun]);e.postRun.length;){var t=e.postRun.shift();H.unshift(t)}Z(H)}}if(!(0<B)){if(e.preRun)for("function"==typeof e.preRun&&(e.preRun=[e.preRun]);e.preRun.length;)z();Z(F),0<B||(e.setStatus?(e.setStatus("Running..."),setTimeout((function(){setTimeout((function(){e.setStatus("")}),1),t()}),1)):t())}}if(e.UTF8ToString=j,e.stringToUTF8=function(t,e,n){return k(t,T,e,n)},e.lengthBytesUTF8=D,e.stackSave=At,e.stackRestore=St,e.stackAlloc=Tt,N=function t(){mt||Wt(),mt||(N=t)},e.preInit)for("function"==typeof e.preInit&&(e.preInit=[e.preInit]);0<e.preInit.length;)e.preInit.pop()();return Wt(),t.ready});t.exports=r},967:(t,e)=>{"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.iterateExtraOptions=void 0,e.iterateExtraOptions=(t,n,r,a)=>{if("object"==typeof t&&null!==t){if(r.has(t))throw new Error("Circular reference in options");r.add(t)}Object.entries(t).forEach((([t,i])=>{const o=n?n+t:t;if("object"==typeof i)(0,e.iterateExtraOptions)(i,o+".",r,a);else if("string"==typeof i||"number"==typeof i)a(o,i.toString());else{if("boolean"!=typeof i)throw new Error("Can\'t handle extra config type: "+typeof i);a(o,i?"1":"0")}}))}},586:(t,e,n)=>{"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.setRunOptions=void 0;const r=n(967),a=n(983),i=n(361);e.setRunOptions=t=>{const e=(0,i.getInstance)();let n=0;const o=[],u=t||{};try{if(void 0===(null==t?void 0:t.logSeverityLevel))u.logSeverityLevel=2;else if("number"!=typeof t.logSeverityLevel||!Number.isInteger(t.logSeverityLevel)||t.logSeverityLevel<0||t.logSeverityLevel>4)throw new Error(`log serverity level is not valid: 
${t.logSeverityLevel}`);if(void 0===(null==t?void 0:t.logVerbosityLevel))u.logVerbosityLevel=0;else if("number"!=typeof t.logVerbosityLevel||!Number.isInteger(t.logVerbosityLevel))throw new Error(`log verbosity level is not valid: ${t.logVerbosityLevel}`);void 0===(null==t?void 0:t.terminate)&&(u.terminate=!1);let i=0;if(void 0!==(null==t?void 0:t.tag)&&(i=(0,a.allocWasmString)(t.tag,o)),n=e._OrtCreateRunOptions(u.logSeverityLevel,u.logVerbosityLevel,!!u.terminate,i),0===n)throw new Error("Can\'t create run options");return void 0!==(null==t?void 0:t.extra)&&(0,r.iterateExtraOptions)(t.extra,"",new WeakSet,((t,r)=>{const i=(0,a.allocWasmString)(t,o),u=(0,a.allocWasmString)(r,o);if(0!==e._OrtAddRunConfigEntry(n,i,u))throw new Error(`Can\'t set a run config entry: ${t} - ${r}`)})),[n,o]}catch(t){throw 0!==n&&e._OrtReleaseRunOptions(n),o.forEach(e._free),t}}},919:(t,e,n)=>{"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.setSessionOptions=void 0;const r=n(967),a=n(983),i=n(361);e.setSessionOptions=t=>{const e=(0,i.getInstance)();let n=0;const o=[],u=t||{};(t=>{t.extra||(t.extra={}),t.extra.session||(t.extra.session={});const e=t.extra.session;e.use_ort_model_bytes_directly||(e.use_ort_model_bytes_directly="1")})(u);try{void 0===(null==t?void 0:t.graphOptimizationLevel)&&(u.graphOptimizationLevel="all");const c=(t=>{switch(t){case"disabled":return 0;case"basic":return 1;case"extended":return 2;case"all":return 99;default:throw new Error(`unsupported graph optimization level: ${t}`)}})(u.graphOptimizationLevel);void 0===(null==t?void 0:t.enableCpuMemArena)&&(u.enableCpuMemArena=!0),void 0===(null==t?void 0:t.enableMemPattern)&&(u.enableMemPattern=!0),void 0===(null==t?void 0:t.executionMode)&&(u.executionMode="sequential");const s=(t=>{switch(t){case"sequential":return 0;case"parallel":return 1;default:throw new Error(`unsupported execution mode: ${t}`)}})(u.executionMode);let l=0;if(void 0!==(null==t?void 0:t.logId)&&(l=(0,a.allocWasmString)(t.logId,o)),void 0===(null==t?void 0:t.logSeverityLevel))u.logSeverityLevel=2;else if("number"!=typeof t.logSeverityLevel||!Number.isInteger(t.logSeverityLevel)||t.logSeverityLevel<0||t.logSeverityLevel>4)throw new Error(`log serverity level is not valid: ${t.logSeverityLevel}`);if(void 0===(null==t?void 0:t.logVerbosityLevel))u.logVerbosityLevel=0;else if("number"!=typeof t.logVerbosityLevel||!Number.isInteger(t.logVerbosityLevel))throw new Error(`log verbosity level is not valid: ${t.logVerbosityLevel}`);if(void 0===(null==t?void 0:t.enableProfiling)&&(u.enableProfiling=!1),n=e._OrtCreateSessionOptions(c,!!u.enableCpuMemArena,!!u.enableMemPattern,s,!!u.enableProfiling,0,l,u.logSeverityLevel,u.logVerbosityLevel),0===n)throw new Error("Can\'t create session options");return(null==t?void 0:t.executionProviders)&&((t,e,n)=>{for(const r of e){let e="string"==typeof r?r:r.name;switch(e){case"xnnpack":e="XNNPACK";break;case"wasm":case"cpu":continue;default:throw new Error(`not supported EP: ${e}`)}const o=(0,a.allocWasmString)(e,n);if(0!==(0,i.getInstance)()._OrtAppendExecutionProvider(t,o))throw new Error(`Can\'t append execution provider: ${e}`)}})(n,t.executionProviders,o),void 0!==(null==t?void 0:t.extra)&&(0,r.iterateExtraOptions)(t.extra,"",new WeakSet,((t,r)=>{const i=(0,a.allocWasmString)(t,o),u=(0,a.allocWasmString)(r,o);if(0!==e._OrtAddSessionConfigEntry(n,i,u))throw new Error(`Can\'t set a session config entry: ${t} - ${r}`)})),[n,o]}catch(t){throw 0!==n&&e._OrtReleaseSessionOptions(n),o.forEach(e._free),t}}},983:(t,e,n)=>{"use 
strict";Object.defineProperty(e,"__esModule",{value:!0}),e.allocWasmString=void 0;const r=n(361);e.allocWasmString=(t,e)=>{const n=(0,r.getInstance)(),a=n.lengthBytesUTF8(t)+1,i=n._malloc(a);return n.stringToUTF8(t,i,a),e.push(i),i}},349:(t,e,n)=>{"use strict";Object.defineProperty(e,"__esModule",{value:!0}),e.extractTransferableBuffers=e.endProfiling=e.run=e.releaseSession=e.createSession=e.createSessionFinalize=e.createSessionAllocate=e.initOrt=void 0;const r=n(586),a=n(919),i=n(983),o=n(361);e.initOrt=(t,e)=>{const n=(0,o.getInstance)()._OrtInit(t,e);if(0!==n)throw new Error(`Can\'t initialize onnxruntime. error code = ${n}`)};const u=new Map;e.createSessionAllocate=t=>{const e=(0,o.getInstance)(),n=e._malloc(t.byteLength);return e.HEAPU8.set(t,n),[n,t.byteLength]},e.createSessionFinalize=(t,e)=>{const n=(0,o.getInstance)();let r=0,i=0,c=[];try{if([i,c]=(0,a.setSessionOptions)(e),r=n._OrtCreateSession(t[0],t[1],i),0===r)throw new Error("Can\'t create a session")}finally{n._free(t[0]),n._OrtReleaseSessionOptions(i),c.forEach(n._free)}const s=n._OrtGetInputCount(r),l=n._OrtGetOutputCount(r),f=[],p=[],h=[],d=[];for(let t=0;t<s;t++){const e=n._OrtGetInputName(r,t);if(0===e)throw new Error("Can\'t get an input name");p.push(e),f.push(n.UTF8ToString(e))}for(let t=0;t<l;t++){const e=n._OrtGetOutputName(r,t);if(0===e)throw new Error("Can\'t get an output name");d.push(e),h.push(n.UTF8ToString(e))}return u.set(r,[r,p,d]),[r,f,h]},e.createSession=(t,n)=>{const r=(0,e.createSessionAllocate)(t);return(0,e.createSessionFinalize)(r,n)},e.releaseSession=t=>{const e=(0,o.getInstance)(),n=u.get(t);if(!n)throw new Error("invalid session id");const r=n[0],a=n[1],i=n[2];a.forEach(e._OrtFree),i.forEach(e._OrtFree),e._OrtReleaseSession(r),u.delete(t)};const c=t=>{switch(t){case"int8":return 3;case"uint8":return 2;case"bool":return 9;case"int16":return 5;case"uint16":return 4;case"int32":return 6;case"uint32":return 12;case"float32":return 1;case"float64":return 11;case"string":return 8;case"int64":return 7;case"uint64":return 13;default:throw new Error(`unsupported data type: ${t}`)}},s=t=>{switch(t){case 3:return"int8";case 2:return"uint8";case 9:return"bool";case 5:return"int16";case 4:return"uint16";case 6:return"int32";case 12:return"uint32";case 1:return"float32";case 11:return"float64";case 8:return"string";case 7:return"int64";case 13:return"uint64";default:throw new Error(`unsupported data type: ${t}`)}},l=t=>{switch(t){case"float32":return Float32Array;case"uint8":case"bool":return Uint8Array;case"int8":return Int8Array;case"uint16":return Uint16Array;case"int16":return Int16Array;case"int32":return Int32Array;case"float64":return Float64Array;case"uint32":return Uint32Array;case"int64":return BigInt64Array;case"uint64":return BigUint64Array;default:throw new Error(`unsupported type: ${t}`)}};e.run=(t,e,n,a,f)=>{const p=(0,o.getInstance)(),h=u.get(t);if(!h)throw new Error("invalid session id");const d=h[0],y=h[1],b=h[2],m=e.length,g=a.length;let v=0,w=[];const _=[],O=[];try{[v,w]=(0,r.setRunOptions)(f);for(let t=0;t<m;t++){const e=n[t][0],r=n[t][1],a=n[t][2];let o,u;if(Array.isArray(a)){u=4*a.length,o=p._malloc(u),O.push(o);let t=o/4;for(let e=0;e<a.length;e++){if("string"!=typeof a[e])throw new TypeError(`tensor data at index ${e} is not a string`);p.HEAPU32[t++]=(0,i.allocWasmString)(a[e],O)}}else u=a.byteLength,o=p._malloc(u),O.push(o),p.HEAPU8.set(new Uint8Array(a.buffer,a.byteOffset,u),o);const s=p.stackSave(),l=p.stackAlloc(4*r.length);try{let t=l/4;r.forEach((e=>p.HEAP32[t++]=e));const 
n=p._OrtCreateTensor(c(e),o,u,l,r.length);if(0===n)throw new Error("Can\'t create a tensor");_.push(n)}finally{p.stackRestore(s)}}const t=p.stackSave(),o=p.stackAlloc(4*m),u=p.stackAlloc(4*m),h=p.stackAlloc(4*g),A=p.stackAlloc(4*g);try{let n=o/4,r=u/4,i=h/4,c=A/4;for(let t=0;t<m;t++)p.HEAPU32[n++]=_[t],p.HEAPU32[r++]=y[e[t]];for(let t=0;t<g;t++)p.HEAPU32[i++]=0,p.HEAPU32[c++]=b[a[t]];let f=p._OrtRun(d,u,o,m,A,g,h,v);const w=[];if(0===f)for(let t=0;t<g;t++){const e=p.HEAPU32[h/4+t],n=p.stackSave(),r=p.stackAlloc(16);let a,i=0;try{if(f=p._OrtGetTensorData(e,r,r+4,r+8,r+12),0!==f)throw new Error(`Can\'t access output tensor data. error code = ${f}`);let t=r/4;const o=p.HEAPU32[t++];i=p.HEAPU32[t++];const u=p.HEAPU32[t++],c=p.HEAPU32[t++],h=[];for(let t=0;t<c;t++)h.push(p.HEAPU32[u/4+t]);p._OrtFree(u);const d=0===h.length?1:h.reduce(((t,e)=>t*e));if(a=s(o),"string"===a){const t=[];let e=i/4;for(let n=0;n<d;n++){const r=p.HEAPU32[e++],a=n===d-1?void 0:p.HEAPU32[e]-r;t.push(p.UTF8ToString(r,a))}w.push([a,h,t])}else{const t=new(l(a))(d);new Uint8Array(t.buffer,t.byteOffset,t.byteLength).set(p.HEAPU8.subarray(i,i+t.byteLength)),w.push([a,h,t])}}finally{p.stackRestore(n),"string"===a&&i&&p._free(i),p._OrtReleaseTensor(e)}}if(0===f)return w;throw new Error(`failed to call OrtRun(). error code = ${f}.`)}finally{p.stackRestore(t)}}finally{_.forEach(p._OrtReleaseTensor),O.forEach(p._free),p._OrtReleaseRunOptions(v),w.forEach(p._free)}},e.endProfiling=t=>{const e=(0,o.getInstance)(),n=u.get(t);if(!n)throw new Error("invalid session id");const r=n[0],a=e._OrtEndProfiling(r);if(0===a)throw new Error("Can\'t get an profile file name");e._OrtFree(a)},e.extractTransferableBuffers=t=>{const e=[];for(const n of t){const t=n[2];!Array.isArray(t)&&t.buffer&&e.push(t.buffer)}return e}},361:function(t,e,n){"use strict";var r=this&&this.__createBinding||(Object.create?function(t,e,n,r){void 0===r&&(r=n);var a=Object.getOwnPropertyDescriptor(e,n);a&&!("get"in a?!e.__esModule:a.writable||a.configurable)||(a={enumerable:!0,get:function(){return e[n]}}),Object.defineProperty(t,r,a)}:function(t,e,n,r){void 0===r&&(r=n),t[r]=e[n]}),a=this&&this.__setModuleDefault||(Object.create?function(t,e){Object.defineProperty(t,"default",{enumerable:!0,value:e})}:function(t,e){t.default=e}),i=this&&this.__importStar||function(t){if(t&&t.__esModule)return t;var e={};if(null!=t)for(var n in t)"default"!==n&&Object.prototype.hasOwnProperty.call(t,n)&&r(e,t,n);return a(e,t),e},o=this&&this.__importDefault||function(t){return t&&t.__esModule?t:{default:t}};Object.defineProperty(e,"__esModule",{value:!0}),e.dispose=e.getInstance=e.initializeWebAssembly=void 0;const u=i(n(449)),c=o(n(932)),s=n(474);let l,f=!1,p=!1,h=!1;const d=(t,e)=>e?t?"ort-wasm-simd-threaded.wasm":"ort-wasm-threaded.wasm":t?"ort-wasm-simd.wasm":"ort-wasm.wasm";e.initializeWebAssembly=async t=>{if(f)return Promise.resolve();if(p)throw new Error("multiple calls to \'initializeWebAssembly()\' detected.");if(h)throw new Error("previous call to \'initializeWebAssembly()\' failed.");p=!0;const e=t.initTimeout,r=t.numThreads,a=t.simd,i=r>1&&(()=>{try{return"undefined"!=typeof SharedArrayBuffer&&("undefined"!=typeof MessageChannel&&(new MessageChannel).port1.postMessage(new SharedArrayBuffer(1)),WebAssembly.validate(new Uint8Array([0,97,115,109,1,0,0,0,1,4,1,96,0,0,3,2,1,0,5,4,1,3,1,1,10,11,1,9,0,65,0,254,16,2,0,26,11])))}catch(t){return!1}})(),o=a&&(()=>{try{return WebAssembly.validate(new 
Uint8Array([0,97,115,109,1,0,0,0,1,4,1,96,0,0,3,2,1,0,10,30,1,28,0,65,0,253,15,253,12,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,253,186,1,26,11]))}catch(t){return!1}})(),y="string"==typeof t.wasmPaths?t.wasmPaths:void 0,b=d(!1,i),m=d(o,i),g="object"==typeof t.wasmPaths?t.wasmPaths[m]:void 0;let v=!1;const w=[];if(e>0&&w.push(new Promise((t=>{setTimeout((()=>{v=!0,t()}),e)}))),w.push(new Promise(((t,e)=>{const r=i?s:c.default,a={locateFile:(t,e)=>i&&t.endsWith(".worker.js")&&"undefined"!=typeof Blob?URL.createObjectURL(new Blob([n(154)],{type:"text/javascript"})):t===b?null!=g?g:(null!=y?y:e)+m:e+t};if(i)if("undefined"==typeof Blob)a.mainScriptUrlOrBlob=u.join("/","ort-wasm-threaded.js");else{const t=`var ortWasmThreaded=(function(){var _scriptDir;return ${r.toString()}})();`;a.mainScriptUrlOrBlob=new Blob([t],{type:"text/javascript"})}r(a).then((e=>{p=!1,f=!0,l=e,t()}),(t=>{p=!1,h=!0,e(t)}))}))),await Promise.race(w),v)throw new Error(`WebAssembly backend initializing failed due to timeout: ${e}ms`)},e.getInstance=()=>{if(f&&l)return l;throw new Error("WebAssembly is not initialized yet.")},e.dispose=()=>{var t;!f||p||h||(p=!0,null===(t=l.PThread)||void 0===t||t.terminateAllThreads(),l=void 0,p=!1,f=!1,h=!0)}},154:t=>{"use strict";t.exports=\'"use strict";var e={},t="object"==typeof process&&"object"==typeof process.versions&&"string"==typeof process.versions.node;if(t){var r=require("worker_threads"),a=r.parentPort;a.on("message",(e=>onmessage({data:e})));var o=require("fs");Object.assign(global,{self:global,require:require,Module:e,location:{href:__filename},Worker:r.Worker,importScripts:function(e){(0,eval)(o.readFileSync(e,"utf8"))},postMessage:function(e){a.postMessage(e)},performance:global.performance||{now:function(){return Date.now()}}})}var s=!1,n=[],i=function(){var e=Array.prototype.slice.call(arguments).join(" ");t?o.writeSync(2,e+"\\\\n"):console.error(e)};self.alert=function(){var t=Array.prototype.slice.call(arguments).join(" ");postMessage({cmd:"alert",text:t,threadId:e._pthread_self()})},e.instantiateWasm=(t,r)=>{var a=new WebAssembly.Instance(e.wasmModule,t);return r(a),e.wasmModule=null,a.exports},self.onunhandledrejection=e=>{throw e.reason??e},self.onmessage=t=>{try{if("load"===t.data.cmd){if(e.wasmModule=t.data.wasmModule,e.wasmMemory=t.data.wasmMemory,e.buffer=e.wasmMemory.buffer,e.ENVIRONMENT_IS_PTHREAD=!0,"string"==typeof t.data.urlOrBlob)importScripts(t.data.urlOrBlob);else{var r=URL.createObjectURL(t.data.urlOrBlob);importScripts(r),URL.revokeObjectURL(r)}ortWasmThreaded(e).then((function(t){e=t}))}else if("run"===t.data.cmd){e.__performance_now_clock_drift=performance.now()-t.data.time,e.__emscripten_thread_init(t.data.pthread_ptr,0,0,1),e.establishStackSpace(),e.PThread.receiveObjectTransfer(t.data),e.PThread.threadInitTLS(),s||(n.forEach((t=>{e.executeNotifiedProxyingQueue(t)})),n=[],s=!0);try{e.invokeEntryPoint(t.data.start_routine,t.data.arg)}catch(t){if("unwind"!=t){if(!(t instanceof e.ExitStatus))throw t;e.keepRuntimeAlive()||e.__emscripten_thread_exit(t.status)}}}else"cancel"===t.data.cmd?e._pthread_self()&&e.__emscripten_thread_exit(-1):"setimmediate"===t.data.target||("processProxyingQueue"===t.data.cmd?s?e.executeNotifiedProxyingQueue(t.data.queue):n.push(t.data.queue):(i("worker.js received unknown command "+t.data.cmd),i(t.data)))}catch(t){throw i("worker.js onmessage() captured an uncaught exception: 
"+t),t&&t.stack&&i(t.stack),e.__emscripten_thread_crashed&&e.__emscripten_thread_crashed(),t}};\\n\'},384:()=>{},993:()=>{},908:()=>{},953:()=>{},925:()=>{},449:()=>{}},e={};function n(r){var a=e[r];if(void 0!==a)return a.exports;var i=e[r]={exports:{}};return t[r].call(i.exports,i,i.exports,n),i.exports}n.g=function(){if("object"==typeof globalThis)return globalThis;try{return this||new Function("return this")()}catch(t){if("object"==typeof window)return window}}(),(()=>{"use strict";const t=n(349),e=n(361);self.onmessage=n=>{switch(n.data.type){case"init-wasm":(0,e.initializeWebAssembly)(n.data.in).then((()=>postMessage({type:"init-wasm"})),(t=>postMessage({type:"init-wasm",err:t})));break;case"init-ort":try{const{numThreads:e,loggingLevel:r}=n.data.in;(0,t.initOrt)(e,r),postMessage({type:"init-ort"})}catch(t){postMessage({type:"init-ort",err:t})}break;case"create_allocate":try{const{model:e}=n.data.in,r=(0,t.createSessionAllocate)(e);postMessage({type:"create_allocate",out:r})}catch(t){postMessage({type:"create_allocate",err:t})}break;case"create_finalize":try{const{modeldata:e,options:r}=n.data.in,a=(0,t.createSessionFinalize)(e,r);postMessage({type:"create_finalize",out:a})}catch(t){postMessage({type:"create_finalize",err:t})}break;case"create":try{const{model:e,options:r}=n.data.in,a=(0,t.createSession)(e,r);postMessage({type:"create",out:a})}catch(t){postMessage({type:"create",err:t})}break;case"release":try{const e=n.data.in;(0,t.releaseSession)(e),postMessage({type:"release"})}catch(t){postMessage({type:"release",err:t})}break;case"run":try{const{sessionId:e,inputIndices:r,inputs:a,outputIndices:i,options:o}=n.data.in,u=(0,t.run)(e,r,a,i,o);postMessage({type:"run",out:u},(0,t.extractTransferableBuffers)(u))}catch(t){postMessage({type:"run",err:t})}break;case"end-profiling":try{const e=n.data.in;(0,t.endProfiling)(e),postMessage({type:"end-profiling"})}catch(t){postMessage({type:"end-profiling",err:t})}}}})()})();\n',"Worker",void 0,void 0)}},477:y=>{y.exports=function(n,a,u,c){var p=self||window;try{try{var s;try{s=new p.Blob([n])}catch{(s=new(p.BlobBuilder||p.WebKitBlobBuilder||p.MozBlobBuilder||p.MSBlobBuilder)).append(n),s=s.getBlob()}var h=p.URL||p.webkitURL,f=h.createObjectURL(s),l=new p[a](f,u);return h.revokeObjectURL(f),l}catch{return new p[a]("data:application/javascript,".concat(encodeURIComponent(n)),u)}}catch{if(!c)throw Error("Inline worker is not supported");return new p[a](c,u)}}},4154:y=>{y.exports=`"use strict";var e={},t="object"==typeof process&&"object"==typeof process.versions&&"string"==typeof process.versions.node;if(t){var r=require("worker_threads"),a=r.parentPort;a.on("message",(e=>onmessage({data:e})));var o=require("fs");Object.assign(global,{self:global,require:require,Module:e,location:{href:__filename},Worker:r.Worker,importScripts:function(e){(0,eval)(o.readFileSync(e,"utf8"))},postMessage:function(e){a.postMessage(e)},performance:global.performance||{now:function(){return Date.now()}}})}var s=!1,n=[],i=function(){var e=Array.prototype.slice.call(arguments).join(" ");t?o.writeSync(2,e+"\\n"):console.error(e)};self.alert=function(){var t=Array.prototype.slice.call(arguments).join(" ");postMessage({cmd:"alert",text:t,threadId:e._pthread_self()})},e.instantiateWasm=(t,r)=>{var a=new WebAssembly.Instance(e.wasmModule,t);return r(a),e.wasmModule=null,a.exports},self.onunhandledrejection=e=>{throw 
e.reason??e},self.onmessage=t=>{try{if("load"===t.data.cmd){if(e.wasmModule=t.data.wasmModule,e.wasmMemory=t.data.wasmMemory,e.buffer=e.wasmMemory.buffer,e.ENVIRONMENT_IS_PTHREAD=!0,"string"==typeof t.data.urlOrBlob)importScripts(t.data.urlOrBlob);else{var r=URL.createObjectURL(t.data.urlOrBlob);importScripts(r),URL.revokeObjectURL(r)}ortWasmThreaded(e).then((function(t){e=t}))}else if("run"===t.data.cmd){e.__performance_now_clock_drift=performance.now()-t.data.time,e.__emscripten_thread_init(t.data.pthread_ptr,0,0,1),e.establishStackSpace(),e.PThread.receiveObjectTransfer(t.data),e.PThread.threadInitTLS(),s||(n.forEach((t=>{e.executeNotifiedProxyingQueue(t)})),n=[],s=!0);try{e.invokeEntryPoint(t.data.start_routine,t.data.arg)}catch(t){if("unwind"!=t){if(!(t instanceof e.ExitStatus))throw t;e.keepRuntimeAlive()||e.__emscripten_thread_exit(t.status)}}}else"cancel"===t.data.cmd?e._pthread_self()&&e.__emscripten_thread_exit(-1):"setimmediate"===t.data.target||("processProxyingQueue"===t.data.cmd?s?e.executeNotifiedProxyingQueue(t.data.queue):n.push(t.data.queue):(i("worker.js received unknown command "+t.data.cmd),i(t.data)))}catch(t){throw i("worker.js onmessage() captured an uncaught exception: "+t),t&&t.stack&&i(t.stack),e.__emscripten_thread_crashed&&e.__emscripten_thread_crashed(),t}}; -`},1670:y=>{y.exports=__WEBPACK_EXTERNAL_MODULE__1670__},7067:()=>{},1296:()=>{},1384:()=>{},3993:()=>{},908:()=>{},6953:()=>{},9925:()=>{},2806:()=>{},6449:()=>{},2850:()=>{},5381:()=>{},5686:(y,n,a)=>{a.r(n),a.d(n,{flatbuffers:()=>u});var u={};u.Offset,u.Table,u.SIZEOF_SHORT=2,u.SIZEOF_INT=4,u.FILE_IDENTIFIER_LENGTH=4,u.SIZE_PREFIX_LENGTH=4,u.Encoding={UTF8_BYTES:1,UTF16_STRING:2},u.int32=new Int32Array(2),u.float32=new Float32Array(u.int32.buffer),u.float64=new Float64Array(u.int32.buffer),u.isLittleEndian=new Uint16Array(new Uint8Array([1,0]).buffer)[0]===1,u.Long=function(c,p){this.low=0|c,this.high=0|p},u.Long.create=function(c,p){return c==0&&p==0?u.Long.ZERO:new u.Long(c,p)},u.Long.prototype.toFloat64=function(){return(this.low>>>0)+4294967296*this.high},u.Long.prototype.equals=function(c){return this.low==c.low&&this.high==c.high},u.Long.ZERO=new u.Long(0,0),u.Builder=function(c){if(c)p=c;else var p=1024;this.bb=u.ByteBuffer.allocate(p),this.space=p,this.minalign=1,this.vtable=null,this.vtable_in_use=0,this.isNested=!1,this.object_start=0,this.vtables=[],this.vector_num_elems=0,this.force_defaults=!1},u.Builder.prototype.clear=function(){this.bb.clear(),this.space=this.bb.capacity(),this.minalign=1,this.vtable=null,this.vtable_in_use=0,this.isNested=!1,this.object_start=0,this.vtables=[],this.vector_num_elems=0,this.force_defaults=!1},u.Builder.prototype.forceDefaults=function(c){this.force_defaults=c},u.Builder.prototype.dataBuffer=function(){return this.bb},u.Builder.prototype.asUint8Array=function(){return this.bb.bytes().subarray(this.bb.position(),this.bb.position()+this.offset())},u.Builder.prototype.prep=function(c,p){c>this.minalign&&(this.minalign=c);for(var s=1+~(this.bb.capacity()-this.space+p)&c-1;this.space<s+c+p;){var h=this.bb.capacity();this.bb=u.Builder.growByteBuffer(this.bb),this.space+=this.bb.capacity()-h}this.pad(s)},u.Builder.prototype.pad=function(c){for(var 
p=0;p<c;p++)this.bb.writeInt8(--this.space,0)},u.Builder.prototype.writeInt8=function(c){this.bb.writeInt8(this.space-=1,c)},u.Builder.prototype.writeInt16=function(c){this.bb.writeInt16(this.space-=2,c)},u.Builder.prototype.writeInt32=function(c){this.bb.writeInt32(this.space-=4,c)},u.Builder.prototype.writeInt64=function(c){this.bb.writeInt64(this.space-=8,c)},u.Builder.prototype.writeFloat32=function(c){this.bb.writeFloat32(this.space-=4,c)},u.Builder.prototype.writeFloat64=function(c){this.bb.writeFloat64(this.space-=8,c)},u.Builder.prototype.addInt8=function(c){this.prep(1,0),this.writeInt8(c)},u.Builder.prototype.addInt16=function(c){this.prep(2,0),this.writeInt16(c)},u.Builder.prototype.addInt32=function(c){this.prep(4,0),this.writeInt32(c)},u.Builder.prototype.addInt64=function(c){this.prep(8,0),this.writeInt64(c)},u.Builder.prototype.addFloat32=function(c){this.prep(4,0),this.writeFloat32(c)},u.Builder.prototype.addFloat64=function(c){this.prep(8,0),this.writeFloat64(c)},u.Builder.prototype.addFieldInt8=function(c,p,s){(this.force_defaults||p!=s)&&(this.addInt8(p),this.slot(c))},u.Builder.prototype.addFieldInt16=function(c,p,s){(this.force_defaults||p!=s)&&(this.addInt16(p),this.slot(c))},u.Builder.prototype.addFieldInt32=function(c,p,s){(this.force_defaults||p!=s)&&(this.addInt32(p),this.slot(c))},u.Builder.prototype.addFieldInt64=function(c,p,s){!this.force_defaults&&p.equals(s)||(this.addInt64(p),this.slot(c))},u.Builder.prototype.addFieldFloat32=function(c,p,s){(this.force_defaults||p!=s)&&(this.addFloat32(p),this.slot(c))},u.Builder.prototype.addFieldFloat64=function(c,p,s){(this.force_defaults||p!=s)&&(this.addFloat64(p),this.slot(c))},u.Builder.prototype.addFieldOffset=function(c,p,s){(this.force_defaults||p!=s)&&(this.addOffset(p),this.slot(c))},u.Builder.prototype.addFieldStruct=function(c,p,s){p!=s&&(this.nested(p),this.slot(c))},u.Builder.prototype.nested=function(c){if(c!=this.offset())throw new Error("FlatBuffers: struct must be serialized inline.")},u.Builder.prototype.notNested=function(){if(this.isNested)throw new Error("FlatBuffers: object serialization must not be nested.")},u.Builder.prototype.slot=function(c){this.vtable[c]=this.offset()},u.Builder.prototype.offset=function(){return this.bb.capacity()-this.space},u.Builder.growByteBuffer=function(c){var p=c.capacity();if(3221225472&p)throw new Error("FlatBuffers: cannot grow buffer beyond 2 gigabytes.");var s=p<<1,h=u.ByteBuffer.allocate(s);return h.setPosition(s-p),h.bytes().set(c.bytes(),s-p),h},u.Builder.prototype.addOffset=function(c){this.prep(u.SIZEOF_INT,0),this.writeInt32(this.offset()-c+u.SIZEOF_INT)},u.Builder.prototype.startObject=function(c){this.notNested(),this.vtable==null&&(this.vtable=[]),this.vtable_in_use=c;for(var p=0;p<c;p++)this.vtable[p]=0;this.isNested=!0,this.object_start=this.offset()},u.Builder.prototype.endObject=function(){if(this.vtable==null||!this.isNested)throw new Error("FlatBuffers: endObject called without startObject");this.addInt32(0);for(var c=this.offset(),p=this.vtable_in_use-1;p>=0&&this.vtable[p]==0;p--);for(var s=p+1;p>=0;p--)this.addInt16(this.vtable[p]!=0?c-this.vtable[p]:0);this.addInt16(c-this.object_start);var h=(s+2)*u.SIZEOF_SHORT;this.addInt16(h);var f=0,l=this.space;e:for(p=0;p<this.vtables.length;p++){var o=this.bb.capacity()-this.vtables[p];if(h==this.bb.readInt16(o)){for(var t=u.SIZEOF_SHORT;t<h;t+=u.SIZEOF_SHORT)if(this.bb.readInt16(l+t)!=this.bb.readInt16(o+t))continue e;f=this.vtables[p];break}}return 
f?(this.space=this.bb.capacity()-c,this.bb.writeInt32(this.space,f-c)):(this.vtables.push(this.offset()),this.bb.writeInt32(this.bb.capacity()-c,this.offset()-c)),this.isNested=!1,c},u.Builder.prototype.finish=function(c,p,s){var h=s?u.SIZE_PREFIX_LENGTH:0;if(p){var f=p;if(this.prep(this.minalign,u.SIZEOF_INT+u.FILE_IDENTIFIER_LENGTH+h),f.length!=u.FILE_IDENTIFIER_LENGTH)throw new Error("FlatBuffers: file identifier must be length "+u.FILE_IDENTIFIER_LENGTH);for(var l=u.FILE_IDENTIFIER_LENGTH-1;l>=0;l--)this.writeInt8(f.charCodeAt(l))}this.prep(this.minalign,u.SIZEOF_INT+h),this.addOffset(c),h&&this.addInt32(this.bb.capacity()-this.space),this.bb.setPosition(this.space)},u.Builder.prototype.finishSizePrefixed=function(c,p){this.finish(c,p,!0)},u.Builder.prototype.requiredField=function(c,p){var s=this.bb.capacity()-c,h=s-this.bb.readInt32(s);if(this.bb.readInt16(h+p)==0)throw new Error("FlatBuffers: field "+p+" must be set")},u.Builder.prototype.startVector=function(c,p,s){this.notNested(),this.vector_num_elems=p,this.prep(u.SIZEOF_INT,c*p),this.prep(s,c*p)},u.Builder.prototype.endVector=function(){return this.writeInt32(this.vector_num_elems),this.offset()},u.Builder.prototype.createString=function(c){if(c instanceof Uint8Array)var p=c;else{p=[];for(var s=0;s<c.length;){var h,f=c.charCodeAt(s++);(h=f<55296||f>=56320?f:(f<<10)+c.charCodeAt(s++)+-56613888)<128?p.push(h):(h<2048?p.push(h>>6&31|192):(h<65536?p.push(h>>12&15|224):p.push(h>>18&7|240,h>>12&63|128),p.push(h>>6&63|128)),p.push(63&h|128))}}this.addInt8(0),this.startVector(1,p.length,1),this.bb.setPosition(this.space-=p.length),s=0;for(var l=this.space,o=this.bb.bytes();s<p.length;s++)o[l++]=p[s];return this.endVector()},u.Builder.prototype.createLong=function(c,p){return u.Long.create(c,p)},u.ByteBuffer=function(c){this.bytes_=c,this.position_=0},u.ByteBuffer.allocate=function(c){return new u.ByteBuffer(new Uint8Array(c))},u.ByteBuffer.prototype.clear=function(){this.position_=0},u.ByteBuffer.prototype.bytes=function(){return this.bytes_},u.ByteBuffer.prototype.position=function(){return this.position_},u.ByteBuffer.prototype.setPosition=function(c){this.position_=c},u.ByteBuffer.prototype.capacity=function(){return this.bytes_.length},u.ByteBuffer.prototype.readInt8=function(c){return this.readUint8(c)<<24>>24},u.ByteBuffer.prototype.readUint8=function(c){return this.bytes_[c]},u.ByteBuffer.prototype.readInt16=function(c){return this.readUint16(c)<<16>>16},u.ByteBuffer.prototype.readUint16=function(c){return this.bytes_[c]|this.bytes_[c+1]<<8},u.ByteBuffer.prototype.readInt32=function(c){return this.bytes_[c]|this.bytes_[c+1]<<8|this.bytes_[c+2]<<16|this.bytes_[c+3]<<24},u.ByteBuffer.prototype.readUint32=function(c){return this.readInt32(c)>>>0},u.ByteBuffer.prototype.readInt64=function(c){return new u.Long(this.readInt32(c),this.readInt32(c+4))},u.ByteBuffer.prototype.readUint64=function(c){return new u.Long(this.readUint32(c),this.readUint32(c+4))},u.ByteBuffer.prototype.readFloat32=function(c){return u.int32[0]=this.readInt32(c),u.float32[0]},u.ByteBuffer.prototype.readFloat64=function(c){return 
u.int32[u.isLittleEndian?0:1]=this.readInt32(c),u.int32[u.isLittleEndian?1:0]=this.readInt32(c+4),u.float64[0]},u.ByteBuffer.prototype.writeInt8=function(c,p){this.bytes_[c]=p},u.ByteBuffer.prototype.writeUint8=function(c,p){this.bytes_[c]=p},u.ByteBuffer.prototype.writeInt16=function(c,p){this.bytes_[c]=p,this.bytes_[c+1]=p>>8},u.ByteBuffer.prototype.writeUint16=function(c,p){this.bytes_[c]=p,this.bytes_[c+1]=p>>8},u.ByteBuffer.prototype.writeInt32=function(c,p){this.bytes_[c]=p,this.bytes_[c+1]=p>>8,this.bytes_[c+2]=p>>16,this.bytes_[c+3]=p>>24},u.ByteBuffer.prototype.writeUint32=function(c,p){this.bytes_[c]=p,this.bytes_[c+1]=p>>8,this.bytes_[c+2]=p>>16,this.bytes_[c+3]=p>>24},u.ByteBuffer.prototype.writeInt64=function(c,p){this.writeInt32(c,p.low),this.writeInt32(c+4,p.high)},u.ByteBuffer.prototype.writeUint64=function(c,p){this.writeUint32(c,p.low),this.writeUint32(c+4,p.high)},u.ByteBuffer.prototype.writeFloat32=function(c,p){u.float32[0]=p,this.writeInt32(c,u.int32[0])},u.ByteBuffer.prototype.writeFloat64=function(c,p){u.float64[0]=p,this.writeInt32(c,u.int32[u.isLittleEndian?0:1]),this.writeInt32(c+4,u.int32[u.isLittleEndian?1:0])},u.ByteBuffer.prototype.getBufferIdentifier=function(){if(this.bytes_.length<this.position_+u.SIZEOF_INT+u.FILE_IDENTIFIER_LENGTH)throw new Error("FlatBuffers: ByteBuffer is too short to contain an identifier.");for(var c="",p=0;p<u.FILE_IDENTIFIER_LENGTH;p++)c+=String.fromCharCode(this.readInt8(this.position_+u.SIZEOF_INT+p));return c},u.ByteBuffer.prototype.__offset=function(c,p){var s=c-this.readInt32(c);return p<this.readInt16(s)?this.readInt16(s+p):0},u.ByteBuffer.prototype.__union=function(c,p){return c.bb_pos=p+this.readInt32(p),c.bb=this,c},u.ByteBuffer.prototype.__string=function(c,p){c+=this.readInt32(c);var s=this.readInt32(c),h="",f=0;if(c+=u.SIZEOF_INT,p===u.Encoding.UTF8_BYTES)return this.bytes_.subarray(c,c+s);for(;f<s;){var l,o=this.readUint8(c+f++);if(o<192)l=o;else{var t=this.readUint8(c+f++);if(o<224)l=(31&o)<<6|63&t;else{var e=this.readUint8(c+f++);l=o<240?(15&o)<<12|(63&t)<<6|63&e:(7&o)<<18|(63&t)<<12|(63&e)<<6|63&this.readUint8(c+f++)}}l<65536?h+=String.fromCharCode(l):(l-=65536,h+=String.fromCharCode(55296+(l>>10),56320+(1023&l)))}return h},u.ByteBuffer.prototype.__indirect=function(c){return c+this.readInt32(c)},u.ByteBuffer.prototype.__vector=function(c){return c+this.readInt32(c)+u.SIZEOF_INT},u.ByteBuffer.prototype.__vector_len=function(c){return this.readInt32(c+this.readInt32(c))},u.ByteBuffer.prototype.__has_identifier=function(c){if(c.length!=u.FILE_IDENTIFIER_LENGTH)throw new Error("FlatBuffers: file identifier must be length "+u.FILE_IDENTIFIER_LENGTH);for(var p=0;p<u.FILE_IDENTIFIER_LENGTH;p++)if(c.charCodeAt(p)!=this.readInt8(this.position_+u.SIZEOF_INT+p))return!1;return!0},u.ByteBuffer.prototype.createLong=function(c,p){return u.Long.create(c,p)}}},__webpack_module_cache__={};function __webpack_require__(y){var n=__webpack_module_cache__[y];if(n!==void 0)return n.exports;var a=__webpack_module_cache__[y]={exports:{}};return __webpack_modules__[y].call(a.exports,a,a.exports,__webpack_require__),a.exports}__webpack_require__.n=y=>{var n=y&&y.__esModule?()=>y.default:()=>y;return __webpack_require__.d(n,{a:n}),n},__webpack_require__.d=(y,n)=>{for(var a in n)__webpack_require__.o(n,a)&&!__webpack_require__.o(y,a)&&Object.defineProperty(y,a,{enumerable:!0,get:n[a]})},__webpack_require__.g=function(){if(typeof globalThis=="object")return globalThis;try{return this||new Function("return this")()}catch{if(typeof 
window=="object")return window}}(),__webpack_require__.o=(y,n)=>Object.prototype.hasOwnProperty.call(y,n),__webpack_require__.r=y=>{typeof Symbol<"u"&&Symbol.toStringTag&&Object.defineProperty(y,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(y,"__esModule",{value:!0})};var __webpack_exports__=__webpack_require__(6018);return __webpack_exports__})())})(ortWeb_min$1);var ortWeb_minExports=ortWeb_min$1.exports,ortWeb_min=getDefaultExportFromCjs(ortWeb_minExports),ONNX_WEB=_mergeNamespaces({__proto__:null,default:ortWeb_min},[ortWeb_minExports]);let ONNX;const executionProviders=["wasm"];typeof process<"u"&&((nt=process==null?void 0:process.release)==null?void 0:nt.name)==="node"?(ONNX=fs??ONNX_NODE,executionProviders.unshift("cpu")):(ONNX=ortWeb_min??ONNX_WEB,typeof navigator<"u"&&/iP(hone|od|ad).+16_4.+AppleWebKit/.test(navigator.userAgent)&&(ONNX.env.wasm.simd=!1));const{env:onnx_env}=ONNX,VERSION="2.3.1",WEB_CACHE_AVAILABLE=typeof self<"u"&&"caches"in self,FS_AVAILABLE=!isEmpty(fs),PATH_AVAILABLE=!isEmpty(fs),RUNNING_LOCALLY=FS_AVAILABLE&&PATH_AVAILABLE,__dirname=RUNNING_LOCALLY?fs.dirname(fs.dirname(fs.fileURLToPath(self.location.href))):"./",DEFAULT_CACHE_DIR=RUNNING_LOCALLY?fs.join(__dirname,"/.cache/"):null,DEFAULT_LOCAL_MODEL_PATH="/models/",localModelPath=RUNNING_LOCALLY?fs.join(__dirname,DEFAULT_LOCAL_MODEL_PATH):DEFAULT_LOCAL_MODEL_PATH;onnx_env.wasm.wasmPaths=RUNNING_LOCALLY?fs.join(__dirname,"/dist/"):`https://cdn.jsdelivr.net/npm/@xenova/transformers@${VERSION}/dist/`;const env={backends:{onnx:onnx_env,tfjs:{}},__dirname,version:VERSION,allowRemoteModels:!0,remoteHost:"https://huggingface.co/",remotePathTemplate:"{model}/resolve/{revision}/",allowLocalModels:!0,localModelPath,useFS:FS_AVAILABLE,useBrowserCache:WEB_CACHE_AVAILABLE,useFSCache:FS_AVAILABLE,cacheDir:DEFAULT_CACHE_DIR};function isEmpty(y){return Object.keys(y).length===0}globalThis.ReadableStream||(globalThis.ReadableStream=fs.ReadableStream);class Headers extends Object{constructor(...n){super(),Object.assign(this,n)}get(n){return this[n]}clone(){return new Headers(this)}}class FileResponse{constructor(n){Se(this,"_CONTENT_TYPE_MAP",{txt:"text/plain",html:"text/html",css:"text/css",js:"text/javascript",json:"application/json",png:"image/png",jpg:"image/jpeg",jpeg:"image/jpeg",gif:"image/gif"});if(this.filePath=n,this.headers=new Headers,this.exists=fs.existsSync(n),this.exists){this.status=200,this.statusText="OK";let a=fs.statSync(n);this.headers["content-length"]=a.size,this.updateContentType();let u=this;this.body=new ReadableStream({start(c){u.arrayBuffer().then(p=>{c.enqueue(new Uint8Array(p)),c.close()})}})}else this.status=404,this.statusText="Not Found",this.body=null}updateContentType(){const n=this.filePath.toString().split(".").pop().toLowerCase();this.headers["content-type"]=this._CONTENT_TYPE_MAP[n]??"application/octet-stream"}clone(){let n=new FileResponse(this.filePath);return n.exists=this.exists,n.status=this.status,n.statusText=this.statusText,n.headers=this.headers.clone(),n}async arrayBuffer(){return(await fs.promises.readFile(this.filePath)).buffer}async blob(){const n=await fs.promises.readFile(this.filePath);return new Blob([n],{type:this.headers["content-type"]})}async text(){return await fs.promises.readFile(this.filePath,"utf8")}async json(){return JSON.parse(await this.text())}}function isValidHttpUrl(y){let n;try{n=new URL(y)}catch{return!1}return n.protocol==="http:"||n.protocol==="https:"}async function getFile(y){var n,a;if(env.useFS&&!isValidHttpUrl(y))return new 
FileResponse(y);if(typeof process<"u"&&((n=process==null?void 0:process.release)==null?void 0:n.name)==="node"){const u=!!((a=process.env)!=null&&a.TESTING_REMOTELY),c=env.version;return fetch(y,{headers:{"User-Agent":`transformers.js/${c}; is_ci/${u};`}})}else return fetch(y)}const ERROR_MAPPING={400:"Bad request error occurred while trying to load file",401:"Unauthorized access to file",403:"Forbidden access to file",404:"Could not locate file",408:"Request timeout error occurred while trying to load file",500:"Internal server error error occurred while trying to load file",502:"Bad gateway error occurred while trying to load file",503:"Service unavailable error occurred while trying to load file",504:"Gateway timeout error occurred while trying to load file"};function handleError(y,n,a){if(!a)return null;const u=ERROR_MAPPING[y]??`Error (${y}) occurred while trying to load file`;throw Error(`${u}: "${n}".`)}class FileCache{constructor(n){this.path=n}async match(n){let a=fs.join(this.path,n),u=new FileResponse(a);if(u.exists)return u}async put(n,a){const u=Buffer.from(await a.arrayBuffer());let c=fs.join(this.path,n);try{await fs.promises.mkdir(fs.dirname(c),{recursive:!0}),await fs.promises.writeFile(c,u)}catch(p){console.warn("An error occurred while writing the file to cache:",p)}}}async function tryCache(y,...n){for(let a of n)try{let u=await y.match(a);if(u)return u}catch{continue}}async function getModelFile(y,n,a=!0,u={}){if(!env.allowLocalModels&&u.local_files_only)throw Error("Invalid configuration detected: local models are disabled (`env.allowLocalModels=false`) but you have requested to only use local models (`local_files_only=true`).");dispatchCallback(u.progress_callback,{status:"initiate",name:y,file:n});let c;if(!c&&env.useBrowserCache){if(typeof caches>"u")throw Error("Browser cache is not available in this environment.");try{c=await caches.open("transformers-cache")}catch(d){console.warn("An error occurred while opening the browser cache:",d)}}!c&&env.useFSCache&&(c=new FileCache(u.cache_dir??env.cacheDir));const p=u.revision??"main";let s=pathJoin(y,n),h=pathJoin(env.localModelPath,s),f=pathJoin(env.remoteHost,env.remotePathTemplate.replaceAll("{model}",y).replaceAll("{revision}",p),n),l=p==="main"?s:pathJoin(y,p,n),o,t=c instanceof FileCache?l:f,e,r;if(c&&(r=await tryCache(c,h,t)),r===void 0){let d=isValidHttpUrl(s);if(env.allowLocalModels)if(d){if(u.local_files_only)throw new Error(`\`local_files_only=true\`, but attempted to load a remote file from: ${s}.`)}else try{r=await getFile(h),o=h}catch(g){console.warn(`Unable to load from local path "${h}": "${g}"`)}if(r===void 0||r.status===404){if(u.local_files_only||!env.allowRemoteModels){if(a)throw Error(`\`local_files_only=true\` or \`env.allowRemoteModels=false\` and file was not found locally at "${h}".`);return null}if(r=await getFile(f),r.status!==200)return handleError(r.status,f,a);o=t}c&&r instanceof Response&&r.status===200&&(e=r.clone())}dispatchCallback(u.progress_callback,{status:"download",name:y,file:n});const i=await readResponse(r,d=>{dispatchCallback(u.progress_callback,{status:"progress",...d,name:y,file:n})});return e&&o&&await c.match(o)===void 0&&await c.put(o,e).catch(d=>{console.warn(`Unable to add response to browser cache: ${d}.`)}),dispatchCallback(u.progress_callback,{status:"done",name:y,file:n}),i}async function getModelJSON(y,n,a=!0,u={}){let c=await getModelFile(y,n,a,u);if(c===null)return{};let s=new TextDecoder("utf-8").decode(c);return JSON.parse(s)}async function 
readResponse(y,n){const a=y.headers.get("Content-Length");a===null&&console.warn("Unable to determine content-length from response headers. Will expand buffer when needed.");let u=parseInt(a??"0"),c=new Uint8Array(u),p=0;const s=y.body.getReader();async function h(){const{done:f,value:l}=await s.read();if(f)return;let o=p+l.length;if(o>u){u=o;let e=new Uint8Array(u);e.set(c),c=e}c.set(l,p),p=o;const t=p/u*100;return n({progress:t,loaded:p,total:u}),h()}return await h(),c}function pathJoin(...y){return y=y.map((n,a)=>(a&&(n=n.replace(new RegExp("^/"),"")),a!==y.length-1&&(n=n.replace(new RegExp("/$"),"")),n)),y.join("/")}function interpolate_data(y,[n,a,u],[c,p],s="bilinear",h=!1){const f=p/u,l=c/a,o=new y.constructor(c*p*n),t=a*u,e=c*p;for(let r=0;r<c;++r)for(let i=0;i<p;++i){const d=r*p+i,g=(i+.5)/f-.5,m=(r+.5)/l-.5;let b=Math.floor(g),_=Math.floor(m);const v=Math.min(b+1,u-1),w=Math.min(_+1,a-1);b=Math.max(b,0),_=Math.max(_,0);const S=g-b,A=m-_,O=(1-S)*(1-A),x=S*(1-A),I=(1-S)*A,N=S*A,B=_*u,L=w*u,F=B+b,H=B+v,D=L+b,j=L+v;for(let Z=0;Z<n;++Z){const X=Z*t;o[Z*e+d]=O*y[X+F]+x*y[X+H]+I*y[X+D]+N*y[X+j]}}return o}function transpose_data(y,n,a){const u=new Array(a.length),c=new Array(a.length);for(let h=a.length-1,f=1;h>=0;--h)c[h]=f,u[h]=n[a[h]],f*=u[h];const p=a.map((h,f)=>c[a.indexOf(f)]),s=new y.constructor(y.length);for(let h=0;h<y.length;++h){let f=0;for(let l=n.length-1,o=h;l>=0;--l)f+=o%n[l]*p[l],o=Math.floor(o/n[l]);s[f]=y[h]}return[s,u]}function softmax(y){const n=max(y)[0],a=y.map(p=>Math.exp(p-n)),u=a.reduce((p,s)=>p+s,0);return a.map(p=>p/u)}function log_softmax(y){return softmax(y).map(u=>Math.log(u))}function getTopItems(y,n=0){return y=Array.from(y).map((a,u)=>[u,a]).sort((a,u)=>u[1]-a[1]),n>0&&(y=y.slice(0,n)),y}function min(y){if(y.length===0)throw Error("Array must not be empty");let n=y[0],a=0;for(let u=1;u<y.length;++u)y[u]<n&&(n=y[u],a=u);return[n,a]}function max(y){if(y.length===0)throw Error("Array must not be empty");let n=y[0],a=0;for(let u=1;u<y.length;++u)y[u]>n&&(n=y[u],a=u);return[n,a]}function rfftfreq(y,n=1){if(!Number.isInteger(y))throw new TypeError(`n should be an integer, but ${y} given.`);const a=1/(y*n),u=Math.floor(y/2)+1,c=new Array(u);for(let p=0;p<u;++p)c[p]=p*a;return c}class FFT{constructor(n){if(this.size=n|0,this.size<=1||this.size&this.size-1)throw new Error("FFT size must be a power of two and bigger than 1");this._csize=n<<1,this.table=new Float32Array(this.size*2);for(let u=0;u<this.table.length;u+=2){const c=Math.PI*u/this.size;this.table[u]=Math.cos(c),this.table[u+1]=-Math.sin(c)}let a=0;for(let u=1;this.size>u;u<<=1)++a;this._width=a%2===0?a-1:a,this._bitrev=new Int32Array(1<<this._width);for(let u=0;u<this._bitrev.length;++u){this._bitrev[u]=0;for(let c=0;c<this._width;c+=2){const p=this._width-c-2;this._bitrev[u]|=(u>>>c&3)<<p}}}createComplexArray(){return new Float32Array(this._csize)}fromComplexArray(n,a){const u=a||new Array(n.length>>>1);for(let c=0;c<n.length;c+=2)u[c>>>1]=n[c];return u}toComplexArray(n,a){const u=a||this.createComplexArray();for(let c=0;c<u.length;c+=2)u[c]=n[c>>>1],u[c+1]=0;return u}completeSpectrum(n){const a=this._csize,u=a>>>1;for(let c=2;c<u;c+=2)n[a-c]=n[c],n[a-c+1]=-n[c+1]}transform(n,a){if(n===a)throw new Error("Input and output buffers must be different");this._transform4(n,a,1)}realTransform(n,a){if(n===a)throw new Error("Input and output buffers must be different");this._realTransform4(n,a,1)}inverseTransform(n,a){if(n===a)throw new Error("Input and output buffers must be 
different");this._transform4(n,a,-1);for(let u=0;u<n.length;++u)n[u]/=this.size}_transform4(n,a,u){const c=this._csize;let s=1<<this._width,h=c/s<<1,f,l,o=this._bitrev;if(h===4)for(f=0,l=0;f<c;f+=h,++l){const t=o[l];this._singleTransform2(a,n,f,t,s)}else for(f=0,l=0;f<c;f+=h,++l){const t=o[l];this._singleTransform4(a,n,f,t,s,u)}for(s>>=2;s>=2;s>>=2){h=c/s<<1;let t=h>>>2;for(f=0;f<c;f+=h){let e=f+t;for(let r=f,i=0;r<e;r+=2,i+=s){const d=r,g=d+t,m=g+t,b=m+t,_=n[d],v=n[d+1],w=n[g],S=n[g+1],A=n[m],O=n[m+1],x=n[b],I=n[b+1],N=this.table[i],B=u*this.table[i+1],L=w*N-S*B,F=w*B+S*N,H=this.table[2*i],D=u*this.table[2*i+1],j=A*H-O*D,Z=A*D+O*H,X=this.table[3*i],J=u*this.table[3*i+1],ee=x*X-I*J,ue=x*J+I*X,Ae=_+j,ve=v+Z,oe=_-j,_e=v-Z,be=L+ee,ke=F+ue,Fe=u*(L-ee),xe=u*(F-ue);n[d]=Ae+be,n[d+1]=ve+ke,n[g]=oe+xe,n[g+1]=_e-Fe,n[m]=Ae-be,n[m+1]=ve-ke,n[b]=oe-xe,n[b+1]=_e+Fe}}}}_singleTransform2(n,a,u,c,p){const s=n[c],h=n[c+1],f=n[c+p],l=n[c+p+1];a[u]=s+f,a[u+1]=h+l,a[u+2]=s-f,a[u+3]=h-l}_singleTransform4(n,a,u,c,p,s){const h=p*2,f=p*3,l=n[c],o=n[c+1],t=n[c+p],e=n[c+p+1],r=n[c+h],i=n[c+h+1],d=n[c+f],g=n[c+f+1],m=l+r,b=o+i,_=l-r,v=o-i,w=t+d,S=e+g,A=s*(t-d),O=s*(e-g);a[u]=m+w,a[u+1]=b+S,a[u+2]=_+O,a[u+3]=v-A,a[u+4]=m-w,a[u+5]=b-S,a[u+6]=_-O,a[u+7]=v+A}_realTransform4(n,a,u){const c=this._csize;let s=1<<this._width,h=c/s<<1;var f,l,o=this._bitrev;if(h===4)for(f=0,l=0;f<c;f+=h,++l){const t=o[l];this._singleRealTransform2(a,n,f,t>>>1,s>>>1)}else for(f=0,l=0;f<c;f+=h,++l){const t=o[l];this._singleRealTransform4(a,n,f,t>>>1,s>>>1,u)}for(s>>=2;s>=2;s>>=2){h=c/s<<1;const t=h>>>1,e=t>>>1,r=e>>>1;for(f=0;f<c;f+=h)for(let i=0,d=0;i<=r;i+=2,d+=s){const g=f+i,m=g+e,b=m+e,_=b+e,v=n[g],w=n[g+1],S=n[m],A=n[m+1],O=n[b],x=n[b+1],I=n[_],N=n[_+1],B=this.table[d],L=u*this.table[d+1],F=S*B-A*L,H=S*L+A*B,D=this.table[2*d],j=u*this.table[2*d+1],Z=O*D-x*j,X=O*j+x*D,J=this.table[3*d],ee=u*this.table[3*d+1],ue=I*J-N*ee,Ae=I*ee+N*J,ve=v+Z,oe=w+X,_e=v-Z,be=w-X,ke=F+ue,Fe=H+Ae,xe=u*(F-ue),Ne=u*(H-Ae);if(n[g]=ve+ke,n[g+1]=oe+Fe,n[m]=_e+Ne,n[m+1]=be-xe,i===0){n[b]=ve-ke,n[b+1]=oe-Fe;continue}if(i===r)continue;const Ce=f+e-i,Ee=f+t-i;n[Ce]=_e+-u*Ne,n[Ce+1]=-be-u*xe,n[Ee]=ve+-u*ke,n[Ee+1]=-oe+u*Fe}}}_singleRealTransform2(n,a,u,c,p){const s=n[c],h=n[c+p];a[u]=s+h,a[u+1]=0,a[u+2]=s-h,a[u+3]=0}_singleRealTransform4(n,a,u,c,p,s){const h=p*2,f=p*3,l=n[c],o=n[c+p],t=n[c+h],e=n[c+f],r=l+t,i=l-t,d=o+e,g=s*(o-e);a[u]=r+d,a[u+1]=0,a[u+2]=i,a[u+3]=-g,a[u+4]=r-d,a[u+5]=0,a[u+6]=i,a[u+7]=g}}const ONNXTensor$1=ONNX.Tensor;class Tensor extends ONNXTensor$1{constructor(...n){return n[0]instanceof ONNX.Tensor?super(n[0].type,n[0].data,n[0].dims):super(...n),new Proxy(this,{get:(a,u)=>{if(typeof u=="string"){let c=Number(u);if(Number.isInteger(c))return a._getitem(c)}return a[u]},set:(a,u,c)=>a[u]=c})}*[Symbol.iterator](){const[n,...a]=this.dims;if(a.length>0){const u=a.reduce((c,p)=>c*p);for(let c=0;c<n;++c)yield this._subarray(c,u,a)}else yield*this.data}_getitem(n){const[a,...u]=this.dims;if(n>=a||n<-a)throw new Error(`Index ${n} is out of bounds for dimension 0 with size ${a}`);if(n<0&&(n+=a),u.length>0){const c=u.reduce((p,s)=>p*s);return this._subarray(n,c,u)}else return new Tensor(this.type,[this.data[n]],u)}indexOf(n){for(let a=0;a<this.data.length;++a)if(this.data[a]==n)return a;return-1}_subarray(n,a,u){let c=this.data.subarray(n*a,(n+1)*a);return new Tensor(this.type,c,u)}item(){if(this.data.length!==1)throw new Error(`a Tensor with ${this.data.length} elements cannot be converted to Scalar`);return this.data[0]}tolist(){return 
reshape(this.data,this.dims)}sigmoid(){return this.clone().sigmoid_()}sigmoid_(){for(let n=0;n<this.data.length;++n)this.data[n]=1/(1+Math.exp(-this.data[n]));return this}clone(){return new Tensor(this.type,this.data.slice(),this.dims.slice())}slice(...n){let a=[],u=[];for(let f=0;f<this.dims.length;++f){let l=n[f];if(l==null)u.push([0,this.dims[f]]),a.push(this.dims[f]);else if(typeof l=="number"){if(l<-this.dims[f]||l>=this.dims[f])throw new Error(`IndexError: index ${l} is out of bounds for dimension ${f} with size ${this.dims[f]}`);l<0&&(l+=this.dims[f]),u.push([l,l+1])}else if(Array.isArray(l)&&l.length===2){if(l[0]>l[1])throw new Error(`Invalid slice: ${l}`);let o=[Math.max(l[0],0),Math.min(l[1],this.dims[f])];u.push(o),a.push(o[1]-o[0])}else throw new Error(`Invalid slice: ${l}`)}let c=u.map(([f,l])=>l-f),p=c.reduce((f,l)=>f*l),s=new this.data.constructor(p);const h=new Array(this.dims.length);for(let f=c.length-1,l=1;f>=0;--f)h[f]=l,l*=this.dims[f];for(let f=0;f<p;++f){let l=0;for(let o=c.length-1,t=f;o>=0;--o){const e=c[o];l+=(t%e+u[o][0])*h[o],t=Math.floor(t/e)}s[f]=this.data[l]}return new Tensor(this.type,s,a)}transpose(...n){return transpose(this,n)}sum(n=null,a=!1){return this.norm(1,n,a)}norm(n="fro",a=null,u=!1){if(n==="fro")n=2;else if(typeof n=="string")throw Error(`Unsupported norm: ${n}`);if(a===null){let s=this.data.reduce((h,f)=>h+f**n,0)**(1/n);return new Tensor(this.type,[s],[1])}a<0&&(a+=this.dims.length);const c=this.dims.slice();c[a]=1;const p=new this.data.constructor(this.data.length/this.dims[a]);for(let s=0;s<this.data.length;++s){let h=0;for(let f=this.dims.length-1,l=s,o=1;f>=0;--f){const t=this.dims[f];if(f!==a){const e=l%t;h+=e*o,o*=c[f]}l=Math.floor(l/t)}p[h]+=this.data[s]**n}if(n!==1)for(let s=0;s<p.length;++s)p[s]=p[s]**(1/n);return u||c.splice(a,1),new Tensor(this.type,p,c)}normalize_(n=2,a=1){a<0&&(a+=this.dims.length);const u=this.norm(n,a,!0);for(let c=0;c<this.data.length;++c){let p=0;for(let s=this.dims.length-1,h=c,f=1;s>=0;--s){const l=this.dims[s];if(s!==a){const o=h%l;p+=o*f,f*=this.dims[s]}h=Math.floor(h/l)}this.data[c]/=u.data[p]}return this}normalize(n=2,a=1){return this.clone().normalize_(n,a)}stride(){const n=new Array(this.dims.length);for(let a=this.dims.length-1,u=1;a>=0;--a)n[a]=u,u*=this.dims[a];return n}squeeze(n=null){return new Tensor(this.type,this.data,calc_squeeze_dims(this.dims,n))}squeeze_(n=null){return this.dims=calc_squeeze_dims(this.dims,n),this}unsqueeze(n=null){return new Tensor(this.type,this.data,calc_unsqueeze_dims(this.dims,n))}unsqueeze_(n=null){return this.dims=calc_unsqueeze_dims(this.dims,n),this}flatten_(n=0,a=-1){a=(a+this.dims.length)%this.dims.length;let u=this.dims.slice(0,n),c=this.dims.slice(n,a+1),p=this.dims.slice(a+1);return this.dims=[...u,c.reduce((s,h)=>s*h,1),...p],this}flatten(n=0,a=-1){return this.clone().flatten_(n,a)}view(...n){let a=-1;for(let u=0;u<n.length;++u)if(n[u]===-1){if(a!==-1)throw new Error("Only one dimension can be inferred");a=u}if(a!==-1){const u=n.reduce((c,p,s)=>s!==a?c*p:c,1);n[a]=this.data.length/u}return new Tensor(this.type,this.data,n)}}function reshape(y,n){const a=y.length,u=n.reduce((p,s)=>p*s);if(a!==u)throw Error(`cannot reshape array of size ${a} into shape (${n})`);let c=y;for(let p=n.length-1;p>=0;p--)c=c.reduce((s,h)=>{let f=s[s.length-1];return f.length<n[p]?f.push(h):s.push([h]),s},[[]]);return c[0]}function transpose(y,n){const[a,u]=transpose_data(y.data,y.dims,n);return new Tensor(y.type,a,u)}function cat(y){if(y.length===0)return y[0];let 
n=y[0].type,a=[...y[0].dims];a[0]=y.length;let u=0;for(let s of y)u+=s.data.length;let c=new y[0].data.constructor(u),p=0;for(let s of y)c.set(s.data,p),p+=s.data.length;return new Tensor(n,c,a)}function interpolate(y,[n,a],u="bilinear",c=!1){const p=y.dims.at(-3)??1,s=y.dims.at(-2),h=y.dims.at(-1);let f=interpolate_data(y.data,[p,s,h],[n,a],u,c);return new Tensor(y.type,f,[p,n,a])}function mean_pooling(y,n){let a=[y.dims[0],y.dims[2]],u=new y.data.constructor(a[0]*a[1]),[c,p,s]=y.dims,h=0;for(let f=0;f<c;++f){let l=f*s*p;for(let o=0;o<s;++o){let t=0,e=0,r=f*p,i=l+o;for(let g=0;g<p;++g){let m=Number(n.data[r+g]);e+=m,t+=y.data[i+g*s]*m}let d=t/e;u[h++]=d}}return new Tensor(y.type,u,a)}function calc_squeeze_dims(y,n){return y=y.slice(),n===null?y=y.filter(a=>a!==1):typeof n=="number"?y[n]===1&&y.splice(n,1):Array.isArray(n)&&(y=y.filter((a,u)=>a!==1||!n.includes(u))),y}function calc_unsqueeze_dims(y,n){return y=y.slice(),n<0&&(n=(n%y.length+y.length)%y.length),y.splice(n,0,1),y}async function loadTokenizer(y,n){return await Promise.all([getModelJSON(y,"tokenizer.json",!0,n),getModelJSON(y,"tokenizer_config.json",!0,n)])}function createPattern(y,n=!0){return y.Regex?new RegExp(n?y.Regex:`(${y.Regex})`,"gu"):y.String?y.String:(console.warn("Unknown pattern type:",y),null)}function clean_up_tokenization(y){return y.replace(/ \./g,".").replace(/ \?/g,"?").replace(/ \!/g,"!").replace(/ ,/g,",").replace(/ \' /g,"'").replace(/ n\'t/g,"n't").replace(/ \'m/g,"'m").replace(/ \'s/g,"'s").replace(/ \'ve/g,"'ve").replace(/ \'re/g,"'re")}function fuse(y,n){let a=[],u=0;for(;u<y.length;){if(a.push(y[u]),y[u]!==n){++u;continue}for(;u<y.length&&y[u]===n;)++u}return a}function whitespace_split(y){return y.match(/\S+/g)||[]}const PUNCTUATION_REGEX="\\p{P}\\u0021-\\u002F\\u003A-\\u0040\\u005B-\\u0060\\u007B-\\u007E";class TokenizerModel extends Callable{constructor(n){super(),this.config=n,this.vocab=[],this.tokens_to_ids=new Map,this.unk_token_id=void 0,this.unk_token=void 0,this.end_of_word_suffix=void 0,this.fuse_unk=!1}static fromConfig(n,...a){switch(n.type){case"WordPiece":return new WordPieceTokenizer(n);case"Unigram":return new Unigram(n,...a);case"BPE":return new BPE(n,...a);default:throw new Error(`Unknown TokenizerModel type: ${n.type}`)}}_call(n){return this.encode(n)}encode(n){throw Error("encode should be implemented in subclass.")}convert_tokens_to_ids(n){let a=n.map(u=>this.tokens_to_ids.get(u)??this.unk_token_id);return this.fuse_unk&&(a=fuse(a,this.unk_token_id)),a}convert_ids_to_tokens(n){return n.map(a=>this.vocab[a]??this.unk_token)}}class WordPieceTokenizer extends TokenizerModel{constructor(n){super(n),this.tokens_to_ids=n.vocab,this.unk_token_id=this.tokens_to_ids.get(n.unk_token),this.unk_token=n.unk_token,this.vocab=new Array(this.tokens_to_ids.size);for(const[a,u]of this.tokens_to_ids)this.vocab[u]=a}encode(n){let a=[];for(let u of n){let c=[...u],p=!1,s=0,h=[];for(;s<c.length;){let f=c.length,l=null;for(;s<f;){let o=c.slice(s,f).join("");if(s>0&&(o=this.config.continuing_subword_prefix+o),this.tokens_to_ids.has(o)){l=o;break}--f}if(l===null){p=!0;break}h.push(l),s=f}p?a.push(this.unk_token):a.push(...h)}return a}}class Unigram extends TokenizerModel{constructor(n,a){super(n),this.vocab=new Array(n.vocab.size),this.scores=new Array(n.vocab.size);let u=0;n.vocab.forEach((c,p)=>{this.vocab[u]=p,this.scores[u]=c,++u}),this.unk_token_id=n.unk_id,this.unk_token=this.vocab[n.unk_id],this.tokens_to_ids=new Map(this.vocab.map((c,p)=>[c,p])),this.bosToken=" 
",this.bosTokenId=this.tokens_to_ids.get(this.bosToken),this.eosToken=a.eos_token,this.eosTokenId=this.tokens_to_ids.get(this.eosToken),this.unkToken=this.vocab[this.unk_token_id],this.minScore=min(this.scores)[0],this.unkScore=this.minScore-10,this.scores[this.unk_token_id]=this.unkScore,this.trie=new CharTrie,this.trie.extend(this.vocab),this.fuse_unk=!0}populateNodes(n){const a=n.sentence,u=a.length;let c=0;for(;c<u;){let s=!1;for(let h of this.trie.commonPrefixSearch(a.slice(c))){const f=this.tokens_to_ids.get(h),l=this.scores[f],o=h.length;n.insert(c,o,l,f),!s&&o===1&&(s=!0)}s||n.insert(c,1,this.unkScore,this.unk_token_id),c+=1}}tokenize(n){const a=new TokenLattice(n,this.bosTokenId,this.eosTokenId);return this.populateNodes(a),a.tokens()}encode(n){let a=[];for(let u of n){const c=this.tokenize(u);a.push(...c)}return a}}const BYTES_TO_UNICODE=(()=>{const y=[...Array.from({length:"~".charCodeAt(0)-"!".charCodeAt(0)+1},(c,p)=>p+"!".charCodeAt(0)),...Array.from({length:"¬".charCodeAt(0)-"¡".charCodeAt(0)+1},(c,p)=>p+"¡".charCodeAt(0)),...Array.from({length:"ÿ".charCodeAt(0)-"®".charCodeAt(0)+1},(c,p)=>p+"®".charCodeAt(0))];let n=y.slice(),a=0;for(let c=0;c<256;++c)y.includes(c)||(y.push(c),n.push(256+a),a+=1);let u=n.map(c=>String.fromCharCode(c));return Object.fromEntries(y.map((c,p)=>[c,u[p]]))})(),UNICODE_TO_BYTES=reverseDictionary(BYTES_TO_UNICODE);class BPE extends TokenizerModel{constructor(n){super(n),this.tokens_to_ids=n.vocab,this.unk_token_id=this.tokens_to_ids.get(n.unk_token),this.unk_token=n.unk_token,this.vocab=new Array(this.tokens_to_ids.size);for(const[a,u]of this.tokens_to_ids)this.vocab[u]=a;this.bpe_ranks=Object.fromEntries(n.merges.map((a,u)=>[a,u])),this.merges=n.merges.map(a=>a.split(/\s+/)),this.end_of_word_suffix=n.end_of_word_suffix,this.byte_fallback=this.config.byte_fallback??!1,this.byte_fallback&&(this.text_encoder=new TextEncoder),this.cache=Object.create(null),this.fuse_unk??(this.fuse_unk=this.config.fuse_unk)}get_pairs(n){let a=new Set,u=n[0];for(let c=1;c<n.length;++c){let p=n[c];a.add(`${u} ${p}`),u=p}return Array.from(a)}bpe(n){if(n in this.cache)return this.cache[n];let a=Array.from(n);this.end_of_word_suffix&&(a[a.length-1]+=this.end_of_word_suffix);let u=this.get_pairs(a);if(!u.length)return this.end_of_word_suffix&&(n+=this.end_of_word_suffix),n;for(;;){let p=u.reduce((t,e)=>{let r=this.bpe_ranks[t]??1/0,i=this.bpe_ranks[e]??1/0;return r<=i?t:e});if(!(p in this.bpe_ranks))break;let[s,h]=p.split(/\s+/g),f=[],l=0,o=-1;for(;l<a.length;){try{if(o=a.indexOf(s,l),o===-1)throw"Error"}catch{f.push(...a.slice(l));break}f.push(...a.slice(l,o)),l=o,a[l]===s&&l<a.length-1&&a[l+1]===h?(f.push(s+h),l+=2):(f.push(a[l]),l+=1)}if(a=f,a.length===1)break;u=this.get_pairs(a)}let c=a.join(" ");return this.cache[n]=c,c}encode(n){let a=[];for(let u of n){let c=this.bpe(u).split(" ");for(let p of c)this.tokens_to_ids.has(p)?a.push(p):this.byte_fallback?a.push(...Array.from(this.text_encoder.encode(p)).map(s=>`<0x${s.toString(16).toUpperCase().padStart(2,"0")}>`)):a.push(this.unk_token)}return a}}class Normalizer extends Callable{constructor(n){super(),this.config=n}static fromConfig(n){if(n===null)return null;switch(n.type){case"BertNormalizer":return new BertNormalizer(n);case"Precompiled":return new Precompiled(n);case"Sequence":return new NormalizerSequence(n);case"Replace":return new Replace(n);case"NFC":return new NFC(n);case"NFKD":return new NFKD(n);case"StripAccents":return new StripAccents(n);case"Lowercase":return new Lowercase(n);case"Prepend":return new 
Prepend(n);default:throw new Error(`Unknown Normalizer type: ${n.type}`)}}normalize(n){throw Error("normalize should be implemented in subclass.")}_call(n){return this.normalize(n)}}class Replace extends Normalizer{normalize(n){let a=createPattern(this.config.pattern);return a===null||(n=n.replaceAll(a,this.config.content)),n}}class NFC extends Normalizer{normalize(n){return n=n.normalize("NFC"),n}}class NFKD extends Normalizer{normalize(n){return n=n.normalize("NFKD"),n}}class StripAccents extends Normalizer{normalize(n){return n=n.replace(/[\u0300-\u036f]/g,""),n}}class Lowercase extends Normalizer{normalize(n){return n=n.toLowerCase(),n}}class Prepend extends Normalizer{normalize(n){return n=this.config.prepend+n,n}}class NormalizerSequence extends Normalizer{constructor(n){super(n),this.normalizers=n.normalizers.map(a=>Normalizer.fromConfig(a))}normalize(n){return this.normalizers.reduce((a,u)=>u.normalize(a),n)}}class BertNormalizer extends Normalizer{_tokenize_chinese_chars(n){let a=[];for(let u=0;u<n.length;++u){let c=n[u],p=c.charCodeAt(0);this._is_chinese_char(p)?(a.push(" "),a.push(c),a.push(" ")):a.push(c)}return a.join("")}_is_chinese_char(n){return n>=19968&&n<=40959||n>=13312&&n<=19903||n>=131072&&n<=173791||n>=173824&&n<=177983||n>=177984&&n<=178207||n>=178208&&n<=183983||n>=63744&&n<=64255||n>=194560&&n<=195103}stripAccents(n){return n.normalize("NFD").replace(/[\u0300-\u036f]/g,"")}normalize(n){return this.config.handle_chinese_chars&&(n=this._tokenize_chinese_chars(n)),this.config.lowercase?(n=n.toLowerCase(),this.config.strip_accents!==!1&&(n=this.stripAccents(n))):this.config.strip_accents&&(n=this.stripAccents(n)),n}}class PreTokenizer extends Callable{static fromConfig(n){if(n===null)return null;switch(n.type){case"BertPreTokenizer":return new BertPreTokenizer(n);case"Sequence":return new PreTokenizerSequence(n);case"WhitespaceSplit":return new WhitespaceSplit(n);case"Metaspace":return new MetaspacePreTokenizer(n);case"ByteLevel":return new ByteLevelPreTokenizer(n);case"Split":return new SplitPreTokenizer(n);case"Punctuation":return new PunctuationPreTokenizer(n);case"Digits":return new DigitsPreTokenizer(n);default:throw new Error(`Unknown PreTokenizer type: ${n.type}`)}}pre_tokenize_text(n){throw Error("pre_tokenize_text should be implemented in subclass.")}pre_tokenize(n){let a=[];return Array.isArray(n)?a=n.map(u=>this.pre_tokenize_text(u)):a=this.pre_tokenize_text(n),a.flat()}_call(n){return this.pre_tokenize(n)}}class BertPreTokenizer extends PreTokenizer{constructor(n){super(),this.pattern=new RegExp(`[^\\s${PUNCTUATION_REGEX}]+|[${PUNCTUATION_REGEX}]`,"gu")}pre_tokenize_text(n){return n.trim().match(this.pattern)||[]}}class ByteLevelPreTokenizer extends PreTokenizer{constructor(n){super(),this.config=n,this.add_prefix_space=this.config.add_prefix_space,this.trim_offsets=this.config.trim_offsets,this.use_regex=this.config.use_regex??!0,this.pattern=/'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+/gu,this.byte_encoder=BYTES_TO_UNICODE,this.text_encoder=new TextEncoder}pre_tokenize_text(n){return(this.use_regex?n.match(this.pattern)||[]:[n]).map(u=>(this.add_prefix_space&&!u.startsWith(" ")&&(u=" "+u),u=Array.from(this.text_encoder.encode(u),c=>this.byte_encoder[c]).join(""),u))}}class SplitPreTokenizer extends PreTokenizer{constructor(n){super(),this.config=n,this.pattern=createPattern(this.config.pattern,this.config.invert)}pre_tokenize_text(n){return 
this.pattern===null?[]:this.config.invert?n.match(this.pattern)||[]:n.split(this.pattern).filter(a=>a)}}class PunctuationPreTokenizer extends PreTokenizer{constructor(n){super(),this.config=n,this.pattern=new RegExp(`[^${PUNCTUATION_REGEX}]+|[${PUNCTUATION_REGEX}]+`,"gu")}pre_tokenize_text(n){return n.match(this.pattern)||[]}}class DigitsPreTokenizer extends PreTokenizer{constructor(n){super(),this.config=n;const a=`[^\\d]+|\\d${this.config.individual_digits?"":"+"}`;this.pattern=new RegExp(a,"gu")}pre_tokenize_text(n){return n.match(this.pattern)||[]}}class PostProcessor extends Callable{constructor(n){super(),this.config=n}static fromConfig(n){if(n===null)return null;switch(n.type){case"TemplateProcessing":return new TemplateProcessing(n);case"ByteLevel":return new ByteLevelPostProcessor(n);case"RobertaProcessing":return new RobertaProcessing(n);default:throw new Error(`Unknown PostProcessor type: ${n.type}`)}}post_process(n,...a){throw Error("post_process should be implemented in subclass.")}_call(n,...a){return this.post_process(n,...a)}}class RobertaProcessing extends PostProcessor{constructor(n){super(n),this.cls=n.cls[0],this.sep=n.sep[0]}post_process(n,a=null){return n=mergeArrays([this.cls],n,[this.sep]),a!==null&&(n=mergeArrays(n,[this.sep],a,[this.sep])),n}}class TemplateProcessing extends PostProcessor{constructor(n){super(n),this.single=n.single,this.pair=n.pair}post_process(n,a=null){let u=a===null?this.single:this.pair,c=[];for(let p of u)"SpecialToken"in p?c.push(p.SpecialToken.id):"Sequence"in p&&(p.Sequence.id==="A"?c=mergeArrays(c,n):p.Sequence.id==="B"&&(c=mergeArrays(c,a)));return c}}class ByteLevelPostProcessor extends PostProcessor{post_process(n){return n}}class Decoder extends Callable{constructor(n){super(),this.config=n,this.added_tokens=[],this.end_of_word_suffix=null,this.trim_offsets=n.trim_offsets}static fromConfig(n){switch(n.type){case"WordPiece":return new WordPieceDecoder(n);case"Metaspace":return new MetaspaceDecoder(n);case"ByteLevel":return new ByteLevelDecoder(n);case"Replace":return new ReplaceDecoder(n);case"ByteFallback":return new ByteFallback(n);case"Fuse":return new FuseDecoder(n);case"Strip":return new StripDecoder(n);case"Sequence":return new DecoderSequence(n);default:throw new Error(`Unknown Decoder type: ${n.type}`)}}_call(n){return this.decode(n)}decode(n){return this.decode_chain(n).join("")}decode_chain(n){throw Error("`decode_chain` should be implemented in subclass.")}}class ReplaceDecoder extends Decoder{constructor(n){super(n)}decode_chain(n){let a=createPattern(this.config.pattern);return a===null?n:n.map(u=>u.replaceAll(a,this.config.content))}}class ByteFallback extends Decoder{constructor(n){super(n),this.text_decoder=new TextDecoder}decode_chain(n){let a=[],u=[];for(let c of n){let p=null;if(c.length===6&&c.startsWith("<0x")&&c.endsWith(">")){let s=parseInt(c.slice(3,5),16);isNaN(s)||(p=s)}if(p!==null)u.push(p);else{if(u.length>0){let s=this.text_decoder.decode(Uint8Array.from(u));a.push(s),u=[]}a.push(c)}}if(u.length>0){let c=this.text_decoder.decode(Uint8Array.from(u));a.push(c),u=[]}return a}}class FuseDecoder extends Decoder{constructor(n){super(n)}decode_chain(n){return[n.join("")]}}class StripDecoder extends Decoder{constructor(n){super(n),this.content=this.config.content,this.start=this.config.start,this.stop=this.config.stop}decode_chain(n){return n.map(a=>{let u=0;for(let p=0;p<this.start&&a[p]===this.content;++p){u=p+1;continue}let c=a.length;for(let p=0;p<this.stop;++p){const 
s=a.length-p-1;if(a[s]===this.content){c=s;continue}else break}return a.slice(u,c)})}}class WordPieceDecoder extends Decoder{constructor(n){super(n),this.cleanup=n.cleanup}decode_chain(n){return n.map((a,u)=>(u!==0&&(a.startsWith(this.config.prefix)?a=a.replace(this.config.prefix,""):a=" "+a),this.cleanup&&(a=clean_up_tokenization(a)),a))}}class ByteLevelDecoder extends Decoder{constructor(n){super(n),this.byte_decoder=UNICODE_TO_BYTES,this.text_decoder=new TextDecoder("utf-8",{fatal:!1,ignoreBOM:!0}),this.end_of_word_suffix=null}convert_tokens_to_string(n){let a=n.join(""),u=new Uint8Array([...a].map(p=>this.byte_decoder[p]));return this.text_decoder.decode(u)}decode_chain(n){let a=[],u=[];for(let c of n)this.added_tokens.includes(c)?(u.length>0&&(a.push(this.convert_tokens_to_string(u)),u=[]),a.push(c)):u.push(c);return u.length>0&&a.push(this.convert_tokens_to_string(u)),a}}class DecoderSequence extends Decoder{constructor(n){super(n),this.decoders=n.decoders.map(a=>Decoder.fromConfig(a))}decode_chain(n){return this.decoders.reduce((a,u)=>u.decode_chain(a),n)}}class MetaspacePreTokenizer extends PreTokenizer{constructor(n){super(),this.addPrefixSpace=n.add_prefix_space,this.replacement=n.replacement,this.strRep=n.str_rep||this.replacement}pre_tokenize(n){typeof n=="string"&&(n=n.trimStart().split(/\s+/));const a=[];for(let u of n){let c=u.replaceAll(" ",this.strRep);this.addPrefixSpace&&!c.startsWith(this.replacement)&&(c=this.strRep+c),a.push(c)}return a}}class MetaspaceDecoder extends Decoder{constructor(n){super(n),this.addPrefixSpace=n.add_prefix_space,this.replacement=n.replacement}decode_chain(n){let a=[];for(let u=0;u<n.length;++u){let c=n[u].replaceAll(this.replacement," ");this.addPrefixSpace&&u==0&&c.startsWith(" ")&&(c=c.substring(1)),a.push(c)}return a}}class Precompiled extends Normalizer{constructor(n){super(n),this.charsmap=n.precompiled_charsmap}normalize(n){return n}}class PreTokenizerSequence extends PreTokenizer{constructor(n){super(),this.tokenizers=n.pretokenizers.map(a=>PreTokenizer.fromConfig(a))}pre_tokenize_text(n){return typeof n=="string"&&(n=[n]),this.tokenizers.reduce((a,u)=>u.pre_tokenize(a),n)}}class WhitespaceSplit extends PreTokenizer{constructor(n){super()}pre_tokenize_text(n){return whitespace_split(n)}}class PreTrainedTokenizer extends Callable{constructor(n,a){super(),this.normalizer=Normalizer.fromConfig(n.normalizer),this.pre_tokenizer=PreTokenizer.fromConfig(n.pre_tokenizer),n.model.vocab&&(Array.isArray(n.model.vocab)||(n.model.vocab=Object.entries(n.model.vocab)),n.model.vocab=new Map(n.model.vocab)),this.model=TokenizerModel.fromConfig(n.model,a),this.post_processor=PostProcessor.fromConfig(n.post_processor),this.decoder=Decoder.fromConfig(n.decoder),this.decoder.end_of_word_suffix=this.model.end_of_word_suffix,this.special_tokens=[],this.all_special_ids=[],this.added_tokens=[];for(let u of n.added_tokens){let c=u.id,p=u.content;this.added_tokens.push(p),this.model.tokens_to_ids.set(p,c),this.model.vocab[c]=p,u.special&&(this.special_tokens.push(p),this.all_special_ids.push(c))}this.decoder.added_tokens=this.added_tokens,this.added_tokens_regex=new 
RegExp("("+this.added_tokens.map(escapeRegExp).join("|")+")"),this.mask_token=this.getToken(a,"mask_token"),this.mask_token_id=this.model.tokens_to_ids.get(this.mask_token),this.pad_token=this.getToken(a,"pad_token","eos_token"),this.pad_token_id=this.model.tokens_to_ids.get(this.pad_token),this.sep_token=this.getToken(a,"sep_token"),this.sep_token_id=this.model.tokens_to_ids.get(this.sep_token),this.model_max_length=a.model_max_length,this.remove_space=a.remove_space,this.clean_up_tokenization_spaces=a.clean_up_tokenization_spaces??!0,this.padding_side="right"}getToken(n,...a){for(let u of a){let c=n[u];if(c)if(typeof c=="object"){if(c.__type==="AddedToken")return c.content;throw Error(`Unknown token: ${c}`)}else return c}return null}static async from_pretrained(n,{progress_callback:a=null,config:u=null,cache_dir:c=null,local_files_only:p=!1,revision:s="main"}={}){let h=await loadTokenizer(n,{progress_callback:a,config:u,cache_dir:c,local_files_only:p,revision:s});return new this(...h)}prepare_model_inputs(n){return n}_call(n,{text_pair:a=null,padding:u=!1,truncation:c=null,max_length:p=null,return_tensor:s=!0}={}){let h;if(Array.isArray(n)){if(n.length===0)throw Error("text array must be non-empty");if(a!==null){if(Array.isArray(a)){if(n.length!==a.length)throw Error("text and text_pair must have the same length")}else throw Error("text_pair must also be an array");h=n.map((t,e)=>this.encode(t,a[e]))}else h=n.map(t=>this.encode(t))}else{if(n===null)throw Error("text may not be null");if(Array.isArray(a))throw Error("When specifying `text_pair`, since `text` is a string, `text_pair` must also be a string (i.e., not an array).");h=[this.encode(n,a)]}let f=max(h.map(t=>t.length))[0];p===null&&(p=f),p=Math.min(p,this.model_max_length);let l=[];if(u||c)for(let t=0;t<h.length;++t)if(h[t].length===p){l.push(new Array(h[t].length).fill(1));continue}else if(h[t].length>p)c&&(h[t]=h[t].slice(0,p)),l.push(new Array(h[t].length).fill(1));else if(u){let e=p-h[t].length;this.padding_side==="right"?(l.push(new Array(h[t].length).fill(1).concat(new Array(e).fill(0))),h[t].push(...new Array(e).fill(this.pad_token_id))):(l.push(new Array(e).fill(0).concat(new Array(h[t].length).fill(1))),h[t].unshift(...new Array(e).fill(this.pad_token_id)))}else l.push(new Array(h[t].length).fill(1));else l=h.map(t=>new Array(t.length).fill(1));if(s){if(!(u&&c)&&h.some(e=>e.length!==h[0].length))throw Error("Unable to create tensor, you should probably activate truncation and/or padding with 'padding=true' and 'truncation=true' to have batched tensors with the same length.");let t=[h.length,h[0].length];h=new Tensor("int64",BigInt64Array.from(h.flat().map(BigInt)),t),l=new Tensor("int64",BigInt64Array.from(l.flat().map(BigInt)),t)}else Array.isArray(n)||(h=h[0],l=l[0]);let o={input_ids:h,attention_mask:l};return o=this.prepare_model_inputs(o),o}_encode_text(n){return n===null?null:n.split(this.added_tokens_regex).filter(c=>c).map(c=>{if(this.added_tokens.includes(c))return c;{this.remove_space===!0&&(c=c.trim().split(/\s+/).join(" ")),this.normalizer!==null&&(c=this.normalizer(c));let p=this.pre_tokenizer!==null?this.pre_tokenizer(c):[c];return this.model(p)}}).flat()}encode(n,a=null){let u=this._encode_text(n),c=this._encode_text(a),p=this.post_processor!==null?this.post_processor(u,c):mergeArrays(u??[],c??[]);return this.model.convert_tokens_to_ids(p)}batch_decode(n,a={}){return n.map(u=>this.decode(u,a))}decode(n,a={}){if(!Array.isArray(n)||n.length===0||!isIntegralNumber(n[0]))throw Error("token_ids must be a 
non-empty array of integers.");return this.decode_single(n,a)}decode_single(n,{skip_special_tokens:a=!1,clean_up_tokenization_spaces:u=null}){let c=this.model.convert_ids_to_tokens(n);a&&(c=c.filter(s=>!this.special_tokens.includes(s)));let p=this.decoder(c);return this.decoder.end_of_word_suffix&&(p=p.replaceAll(this.decoder.end_of_word_suffix," "),a&&(p=p.trim())),(u??this.clean_up_tokenization_spaces)&&(p=clean_up_tokenization(p)),p}}function add_token_types(y){if(y.input_ids instanceof Tensor)y.token_type_ids=new Tensor("int64",new BigInt64Array(y.input_ids.data.length),y.input_ids.dims);else if(Array.isArray(y.input_ids))Array.isArray(y.input_ids[0])?y.token_type_ids=y.input_ids.map(n=>new Array(n.length).fill(0)):y.token_type_ids=new Array(y.input_ids.length).fill(0);else throw new Error("Input ids must be a Tensor or an Array");return y}class BertTokenizer extends PreTrainedTokenizer{prepare_model_inputs(n){return add_token_types(n)}}class AlbertTokenizer extends PreTrainedTokenizer{prepare_model_inputs(n){return add_token_types(n)}}class MobileBertTokenizer extends PreTrainedTokenizer{prepare_model_inputs(n){return add_token_types(n)}}class SqueezeBertTokenizer extends PreTrainedTokenizer{prepare_model_inputs(n){return add_token_types(n)}}class DistilBertTokenizer extends PreTrainedTokenizer{}class T5Tokenizer extends PreTrainedTokenizer{}class GPT2Tokenizer extends PreTrainedTokenizer{}class BartTokenizer extends PreTrainedTokenizer{}class RobertaTokenizer extends PreTrainedTokenizer{}class BloomTokenizer extends PreTrainedTokenizer{}class LlamaTokenizer extends PreTrainedTokenizer{}class XLMRobertaTokenizer extends PreTrainedTokenizer{}class MPNetTokenizer extends PreTrainedTokenizer{}class FalconTokenizer extends PreTrainedTokenizer{prepare_model_inputs(n){return add_token_types(n)}}class GPTNeoXTokenizer extends PreTrainedTokenizer{}class NllbTokenizer extends PreTrainedTokenizer{constructor(n,a){super(n,a),this.languageRegex=/^[a-z]{3}_[A-Z][a-z]{3}$/,this.language_codes=this.special_tokens.filter(u=>this.languageRegex.test(u))}_build_translation_inputs(n,a,u){if(!this.language_codes.includes(u.tgt_lang))throw new Error(`Target language code "${u.tgt_lang}" is not valid. Must be one of: {${this.language_codes.join(", ")}}`);if(u.src_lang!==void 0){if(!this.language_codes.includes(u.src_lang))throw new Error(`Source language code "${u.src_lang}" is not valid. 
Must be one of: {${this.language_codes.join(", ")}}`);for(let c of this.post_processor.config.single)if("SpecialToken"in c&&this.languageRegex.test(c.SpecialToken.id)){c.SpecialToken.id=u.src_lang;break}}return u.forced_bos_token_id=this.model.convert_tokens_to_ids([u.tgt_lang])[0],this._call(n,a)}}const WHISPER_LANGUAGES=[["en","english"],["zh","chinese"],["de","german"],["es","spanish"],["ru","russian"],["ko","korean"],["fr","french"],["ja","japanese"],["pt","portuguese"],["tr","turkish"],["pl","polish"],["ca","catalan"],["nl","dutch"],["ar","arabic"],["sv","swedish"],["it","italian"],["id","indonesian"],["hi","hindi"],["fi","finnish"],["vi","vietnamese"],["he","hebrew"],["uk","ukrainian"],["el","greek"],["ms","malay"],["cs","czech"],["ro","romanian"],["da","danish"],["hu","hungarian"],["ta","tamil"],["no","norwegian"],["th","thai"],["ur","urdu"],["hr","croatian"],["bg","bulgarian"],["lt","lithuanian"],["la","latin"],["mi","maori"],["ml","malayalam"],["cy","welsh"],["sk","slovak"],["te","telugu"],["fa","persian"],["lv","latvian"],["bn","bengali"],["sr","serbian"],["az","azerbaijani"],["sl","slovenian"],["kn","kannada"],["et","estonian"],["mk","macedonian"],["br","breton"],["eu","basque"],["is","icelandic"],["hy","armenian"],["ne","nepali"],["mn","mongolian"],["bs","bosnian"],["kk","kazakh"],["sq","albanian"],["sw","swahili"],["gl","galician"],["mr","marathi"],["pa","punjabi"],["si","sinhala"],["km","khmer"],["sn","shona"],["yo","yoruba"],["so","somali"],["af","afrikaans"],["oc","occitan"],["ka","georgian"],["be","belarusian"],["tg","tajik"],["sd","sindhi"],["gu","gujarati"],["am","amharic"],["yi","yiddish"],["lo","lao"],["uz","uzbek"],["fo","faroese"],["ht","haitian creole"],["ps","pashto"],["tk","turkmen"],["nn","nynorsk"],["mt","maltese"],["sa","sanskrit"],["lb","luxembourgish"],["my","myanmar"],["bo","tibetan"],["tl","tagalog"],["mg","malagasy"],["as","assamese"],["tt","tatar"],["haw","hawaiian"],["ln","lingala"],["ha","hausa"],["ba","bashkir"],["jw","javanese"],["su","sundanese"]],WHISPER_LANGUAGE_MAPPING=new Map(WHISPER_LANGUAGES),WHISPER_TO_LANGUAGE_CODE_MAPPING=new Map([...WHISPER_LANGUAGES.map(([y,n])=>[n,y]),["burmese","my"],["valencian","ca"],["flemish","nl"],["haitian","ht"],["letzeburgesch","lb"],["pushto","ps"],["panjabi","pa"],["moldavian","ro"],["moldovan","ro"],["sinhalese","si"],["castilian","es"]]);class WhisperTokenizer extends PreTrainedTokenizer{_decode_asr(n,{return_timestamps:a=!1,return_language:u=!1,time_precision:c=null,force_full_sequences:p=!0}={}){if(c===null)throw Error("Must specify time_precision");let s=null;function h(){return{language:s,timestamp:[null,null],text:""}}const f=[];let l=h(),o=0;const t=this.model.convert_tokens_to_ids(["<|notimestamps|>"])[0]+1;let e=[],r=!1,i=null;const d=new Set(this.all_special_ids);for(let b of n){const _=b.tokens;let v=null,w=t;if("stride"in b){const[A,O,x]=b.stride;if(o-=O,i=A-x,O&&(w=O/c+t),x)for(let I=_.length-1;I>=0;--I){const N=_[I];if(N>=t){if(v!==null&&(N-t)*c<i)break;v=N}}}let S=[];for(const A of _)if(d.has(A)){const O=this.decode([A]);if(O[0]==="["&&O[O.length-1]==="]"){const x=WHISPER_LANGUAGE_MAPPING.get(O.slice(1,-1));if(x!==void 0){if(s!==null&&x!==s&&!a){e.push(S);const I=this.findLongestCommonSequence(e),N=this.decode(I);l.text=N,f.push(l),e=[],S=[],l=h()}s=l.language=x}}}else if(A>=t){const O=(A-t)*c+o,x=Math.round(O*100)/100;if(v!==null&&A>=v)r=!0;else if(r||e.length>0&&A<w)r=!1;else if(l.timestamp[0]===null)l.timestamp[0]=x;else if(x!==l.timestamp[0]){l.timestamp[1]=x,e.push(S);const 
I=this.findLongestCommonSequence(e),N=this.decode(I);l.text=N,f.push(l),e=[],S=[],l=h()}}else S.push(A);if("stride"in b){const[A,O,x]=b.stride;o+=A-x}S.length>0?e.push(S):e.every(A=>A.length===0)&&(l=h(),e=[],S=[])}if(e.length>0){if(p&&a)throw new Error("There was an error while processing timestamps, we haven't found a timestamp as last token.");const b=this.findLongestCommonSequence(e),_=this.decode(b);l.text=_,f.push(l)}let g=Object.create(null);const m=f.map(b=>b.text).join("");if(a||u){for(let b=0;b<f.length;++b){const _=f[b];a||delete _.timestamp,u||delete _.language}g={chunks:f}}return[m,g]}findLongestCommonSequence(n){let a=n[0],u=a.length,c=[];for(let p=1;p<n.length;++p){const s=n[p];let h=0,f=[u,u,0,0];const l=s.length;for(let g=1;g<u+l;++g){const m=g/1e4,b=Math.max(0,u-g),_=Math.min(u,u+l-g),v=a.slice(b,_),w=Math.max(0,g-u),S=Math.min(l,g),A=s.slice(w,S);if(v.length!==A.length)throw new Error("There is a bug within whisper `decode_asr` function, please report it. Dropping to prevent bad inference.");const O=v.filter((I,N)=>I===A[N]).length,x=O/g+m;O>1&&x>h&&(h=x,f=[b,_,w,S])}const[o,t,e,r]=f,i=Math.floor((t+o)/2),d=Math.floor((r+e)/2);c.push(...a.slice(0,i)),a=s.slice(d),u=a.length}return c.push(...a),c}get_decoder_prompt_ids({language:n=null,task:a=null,no_timestamps:u=!0}={}){let c=[];if(n){n=n.toLowerCase();let p=WHISPER_TO_LANGUAGE_CODE_MAPPING.get(n);if(p===void 0)if(WHISPER_LANGUAGE_MAPPING.has(n))p=n;else{const f=n.length===2?WHISPER_LANGUAGE_MAPPING.keys():WHISPER_LANGUAGE_MAPPING.values();throw new Error(`Language "${n}" is not supported. Must be one of: ${JSON.stringify(f)}`)}let s=this.model.tokens_to_ids.get(`<|${p}|>`);if(s===void 0)throw new Error(`Unable to find language "${p}" in model vocabulary. Please report this issue at https://github.com/xenova/transformers.js/issues/new/choose.`);c.push(s)}else c.push(null);if(a){if(a=a.toLowerCase(),a!=="transcribe"&&a!=="translate")throw new Error(`Task "${a}" is not supported. Must be one of: ["transcribe", "translate"]`);let p=this.model.tokens_to_ids.get(`<|${a}|>`);if(p===void 0)throw new Error(`Unable to find task "${a}" in model vocabulary. Please report this issue at https://github.com/xenova/transformers.js/issues/new/choose.`);c.push(p)}else c.push(null);if(u){let p=this.model.tokens_to_ids.get("<|notimestamps|>");if(p===void 0)throw new Error('Unable to find "<|notimestamps|>" in model vocabulary. Please report this issue at https://github.com/xenova/transformers.js/issues/new/choose.');c.push(p)}return c.map((p,s)=>[s+1,p]).filter(p=>p[1]!==null)}}class CodeGenTokenizer extends PreTrainedTokenizer{}class CLIPTokenizer extends PreTrainedTokenizer{}class MarianTokenizer extends PreTrainedTokenizer{constructor(n,a){super(n,a),this.languageRegex=/^(>>\w+<<)\s*/g,this.supported_language_codes=this.model.vocab.filter(u=>this.languageRegex.test(u)),console.warn('WARNING: `MarianTokenizer` is not yet supported by Hugging Face\'s "fast" tokenizers library. Therefore, you may experience slightly inaccurate results.')}_encode_text(n){if(n===null)return null;let[a,...u]=n.trim().split(this.languageRegex);if(u.length===0)return super._encode_text(a);if(u.length===2){let[c,p]=u;return this.supported_language_codes.includes(c)||console.warn(`Unsupported language code "${c}" detected, which may lead to unexpected behavior. 
Should be one of: ${JSON.stringify(this.supported_language_codes)}`),mergeArrays([c],super._encode_text(p))}}}class CharTrie{constructor(){this.root=CharTrieNode.default()}extend(n){for(let a of n)this.push(a)}push(n){let a=this.root;for(let u of n){let c=a.children.get(u);c===void 0&&(c=CharTrieNode.default(),a.children.set(u,c)),a=c}a.isLeaf=!0}*commonPrefixSearch(n){let a=this.root,u="";for(let c=0;c<n.length&&a!==void 0;++c){const p=n[c];u+=p,a=a.children.get(p),a!==void 0&&a.isLeaf&&(yield u)}}}class CharTrieNode{constructor(n,a){this.isLeaf=n,this.children=a}static default(){return new CharTrieNode(!1,new Map)}}class TokenLattice{constructor(n,a,u){this.sentence=n,this.len=n.length,this.bosTokenId=a,this.eosTokenId=u,this.nodes=[],this.beginNodes=new Array(this.len+1),this.endNodes=new Array(this.len+1);for(let s=0;s<this.len+1;++s)this.beginNodes[s]=[],this.endNodes[s]=[];const c=new TokenLatticeNode(this.bosTokenId,0,0,0,0),p=new TokenLatticeNode(this.eosTokenId,1,this.len,0,0);this.nodes.push(c.clone()),this.nodes.push(p.clone()),this.beginNodes[this.len].push(p),this.endNodes[0].push(c)}insert(n,a,u,c){const p=this.nodes.length,s=new TokenLatticeNode(c,p,n,a,u);this.beginNodes[n].push(s),this.endNodes[n+a].push(s),this.nodes.push(s)}viterbi(){const n=this.len;let a=0;for(;a<=n;){if(this.beginNodes[a].length==0)return[];for(let h of this.beginNodes[a]){h.prev=null;let f=0,l=null;for(let o of this.endNodes[a]){const t=o.backtraceScore+h.score;(l===null||t>f)&&(l=o.clone(),f=t)}if(l!==null)h.prev=l,h.backtraceScore=f;else return[]}++a}const u=[],p=this.beginNodes[n][0].prev;if(p===null)return[];let s=p.clone();for(;s.prev!==null;)u.push(s.clone()),s=s.clone().prev.clone();return u.reverse(),u}piece(n){return this.sentence.slice(n.pos,n.pos+n.length)}tokens(){return this.viterbi().map(a=>this.piece(a))}tokenIds(){return this.viterbi().map(a=>a.tokenId)}}class TokenLatticeNode{constructor(n,a,u,c,p){this.tokenId=n,this.nodeId=a,this.pos=u,this.length=c,this.score=p,this.prev=null,this.backtraceScore=0}clone(){const n=new TokenLatticeNode(this.tokenId,this.nodeId,this.pos,this.length,this.score);return n.prev=this.prev,n.backtraceScore=this.backtraceScore,n}}class AutoTokenizer{static async from_pretrained(n,{quantized:a=!0,progress_callback:u=null,config:c=null,cache_dir:p=null,local_files_only:s=!1,revision:h="main"}={}){let[f,l]=await loadTokenizer(n,{quantized:a,progress_callback:u,config:c,cache_dir:p,local_files_only:s,revision:h}),o=l.tokenizer_class.replace(/Fast$/,""),t=this.TOKENIZER_CLASS_MAPPING[o];return t||(console.warn(`Unknown tokenizer class "${o}", attempting to construct from base class.`),t=PreTrainedTokenizer),new t(f,l)}}Se(AutoTokenizer,"TOKENIZER_CLASS_MAPPING",{T5Tokenizer,DistilBertTokenizer,BertTokenizer,MobileBertTokenizer,SqueezeBertTokenizer,AlbertTokenizer,GPT2Tokenizer,BartTokenizer,RobertaTokenizer,WhisperTokenizer,CodeGenTokenizer,CLIPTokenizer,MarianTokenizer,BloomTokenizer,NllbTokenizer,LlamaTokenizer,XLMRobertaTokenizer,MPNetTokenizer,FalconTokenizer,GPTNeoXTokenizer,PreTrainedTokenizer});async function loadConfig(y,n){return await getModelJSON(y,"config.json",!0,n)}class PretrainedConfig{constructor(n){this.model_type=null,this.is_encoder_decoder=!1,Object.assign(this,n)}static async from_pretrained(n,{progress_callback:a=null,config:u=null,cache_dir:c=null,local_files_only:p=!1,revision:s="main"}={}){let h=u??await loadConfig(n,{progress_callback:a,config:u,cache_dir:c,local_files_only:p,revision:s});return new this(h)}}class AutoConfig{static 
async from_pretrained(...n){return PretrainedConfig.from_pretrained(...n)}}class LogitsProcessorList extends Callable{constructor(){super(),this.processors=[]}push(n){this.processors.push(n)}extend(n){this.processors.push(...n)}_call(n,a){for(let u of a)this.processors.forEach(c=>c(n,u))}[Symbol.iterator](){return this.processors.values()}}class LogitsProcessor extends Callable{_call(n,a){throw Error("`_call` should be implemented in a subclass")}}class ForceTokensLogitsProcessor extends LogitsProcessor{constructor(n){super(),this.force_token_map=Object.fromEntries(n??[])}_call(n,a){let u=this.force_token_map[n.length];return exists(u)&&(a.data.fill(-1/0),a.data[u]=0),a}}class ForcedBOSTokenLogitsProcessor extends LogitsProcessor{constructor(n){super(),this.bos_token_id=n}_call(n,a){return n.length===1&&(a.data.fill(-1/0),a.data[this.bos_token_id]=0),a}}class ForcedEOSTokenLogitsProcessor extends LogitsProcessor{constructor(n,a){super(),this.max_length=n,this.forced_eos_token_id=a}_call(n,a){}}class SuppressTokensAtBeginLogitsProcessor extends LogitsProcessor{constructor(n,a){super(),this.begin_suppress_tokens=n,this.begin_index=a}_call(n,a){if(n.length===this.begin_index)for(let u of this.begin_suppress_tokens)a.data[u]=-1/0;return a}}class WhisperTimeStampLogitsProcessor extends LogitsProcessor{constructor(n){super(),this.eos_token_id=n.eos_token_id,this.no_timestamps_token_id=n.no_timestamps_token_id,this.timestamp_begin=this.no_timestamps_token_id+1,this.begin_index=(n.forced_decoder_ids||[]).length+2,n.forced_decoder_ids.slice(-1)[0][1]===this.no_timestamps_token_id&&(this.begin_index-=1),this.max_initial_timestamp_index=n.max_initial_timestamp_index}_call(n,a){if(a.data[this.no_timestamps_token_id]=-1/0,n.length===this.begin_index-1)return a.data.fill(-1/0),a.data[this.timestamp_begin]=0,a;const u=n.slice(this.begin_index),c=u.length>=1&&u[u.length-1]>=this.timestamp_begin,p=u.length<2||u[u.length-2]>=this.timestamp_begin;if(c&&(p?a.data.subarray(this.timestamp_begin).fill(-1/0):a.data.subarray(0,this.eos_token_id).fill(-1/0)),n.length===this.begin_index&&this.max_initial_timestamp_index!==null){const l=this.timestamp_begin+this.max_initial_timestamp_index;a.data.subarray(l+1).fill(-1/0)}const s=log_softmax(a.data),h=Math.log(s.subarray(this.timestamp_begin).map(Math.exp).reduce((l,o)=>l+o)),f=max(s.subarray(0,this.timestamp_begin))[0];return h>f&&a.data.subarray(0,this.timestamp_begin).fill(-1/0),a}}class NoRepeatNGramLogitsProcessor extends LogitsProcessor{constructor(n){super(),this.no_repeat_ngram_size=n}getNgrams(n){const a=n.length,u=[];for(let p=0;p<a+1-this.no_repeat_ngram_size;++p){const s=[];for(let h=0;h<this.no_repeat_ngram_size;++h)s.push(n[p+h]);u.push(s)}const c=new Map;for(const p of u){const s=p.slice(0,p.length-1),h=JSON.stringify(s),f=c.get(h)??[];f.push(p[p.length-1]),c.set(h,f)}return c}getGeneratedNgrams(n,a){const u=a.slice(a.length+1-this.no_repeat_ngram_size,a.length);return n.get(JSON.stringify(u))??[]}calcBannedNgramTokens(n){const a=[];if(n.length+1<this.no_repeat_ngram_size)return a;{const u=this.getNgrams(n);return this.getGeneratedNgrams(u,n)}}_call(n,a){const u=this.calcBannedNgramTokens(n);for(const c of u)a.data[c]=-1/0;return a}}class RepetitionPenaltyLogitsProcessor extends LogitsProcessor{constructor(n){super(),this.penalty=n}_call(n,a){for(const u of n)a.data[u]<0?a.data[u]*=this.penalty:a.data[u]/=this.penalty;return a}}class 
GenerationConfig{constructor(n={}){this.max_length=n.max_length??20,this.max_new_tokens=n.max_new_tokens??null,this.min_length=n.min_length??0,this.min_new_tokens=n.min_new_tokens??null,this.early_stopping=n.early_stopping??!1,this.max_time=n.max_time??null,this.do_sample=n.do_sample??!1,this.num_beams=n.num_beams??1,this.num_beam_groups=n.num_beam_groups??1,this.penalty_alpha=n.penalty_alpha??null,this.use_cache=n.use_cache??!0,this.temperature=n.temperature??1,this.top_k=n.top_k??50,this.top_p=n.top_p??1,this.typical_p=n.typical_p??1,this.epsilon_cutoff=n.epsilon_cutoff??0,this.eta_cutoff=n.eta_cutoff??0,this.diversity_penalty=n.diversity_penalty??0,this.repetition_penalty=n.repetition_penalty??1,this.encoder_repetition_penalty=n.encoder_repetition_penalty??1,this.length_penalty=n.length_penalty??1,this.no_repeat_ngram_size=n.no_repeat_ngram_size??0,this.bad_words_ids=n.bad_words_ids??null,this.force_words_ids=n.force_words_ids??null,this.renormalize_logits=n.renormalize_logits??!1,this.constraints=n.constraints??null,this.forced_bos_token_id=n.forced_bos_token_id??null,this.forced_eos_token_id=n.forced_eos_token_id??null,this.remove_invalid_values=n.remove_invalid_values??!1,this.exponential_decay_length_penalty=n.exponential_decay_length_penalty??null,this.suppress_tokens=n.suppress_tokens??null,this.begin_suppress_tokens=n.begin_suppress_tokens??null,this.forced_decoder_ids=n.forced_decoder_ids??null,this.num_return_sequences=n.num_return_sequences??1,this.output_attentions=n.output_attentions??!1,this.output_hidden_states=n.output_hidden_states??!1,this.output_scores=n.output_scores??!1,this.return_dict_in_generate=n.return_dict_in_generate??!1,this.pad_token_id=n.pad_token_id??null,this.bos_token_id=n.bos_token_id??null,this.eos_token_id=n.eos_token_id??null,this.encoder_no_repeat_ngram_size=n.encoder_no_repeat_ngram_size??0,this.decoder_start_token_id=n.decoder_start_token_id??null,this.generation_kwargs=n.generation_kwargs??{}}}class Sampler extends Callable{constructor(n){super(),this.generation_config=n}_call(n,a=-1){return this.sample(n,a)}sample(n,a){throw Error("sample should be implemented in subclasses.")}getLogits(n,a){let u=n.dims.at(-1),c=n.data;if(a===-1)c=c.slice(-u);else{let p=a*u;c=c.slice(p,p+u)}return this.generation_config.temperature>0&&(c=c.map(p=>p/this.generation_config.temperature)),c}randomSelect(n){let a=n.reduce((c,p)=>c+p,0),u=Math.random()*a;for(let c=0;c<n.length;++c)if(u-=n[c],u<=0)return c;return 0}static getSampler(n){if(n.do_sample)return new MultinomialSampler(n);if(n.num_beams>1)return new BeamSearchSampler(n);if(n.num_return_sequences>1)throw Error(`num_return_sequences has to be 1 when doing greedy search, but is ${n.num_return_sequences}.`);return new GreedySampler(n)}}class GreedySampler extends Sampler{sample(n,a=-1){let u=this.getLogits(n,a);return[[max(u)[1],0]]}}class MultinomialSampler extends Sampler{sample(n,a=-1){let u=n.dims.at(-1);this.generation_config.top_k>0&&(u=Math.min(this.generation_config.top_k,u));const c=this.getLogits(n,a),p=getTopItems(c,u),s=softmax(p.map(h=>h[1]));return Array.from({length:this.generation_config.num_beams},()=>{const h=this.randomSelect(s);return[p[h][0],Math.log(s[h])]})}}class BeamSearchSampler extends Sampler{sample(n,a=-1){let u=n.dims.at(-1);this.generation_config.top_k>0&&(u=Math.min(this.generation_config.top_k,u));const c=this.getLogits(n,a),p=getTopItems(c,u),s=softmax(p.map(h=>h[1]));return 
Array.from({length:this.generation_config.num_beams},(h,f)=>[p[f][0],Math.log(s[f])])}}const{InferenceSession,Tensor:ONNXTensor}=ONNX;class ModelType{}class EncoderOnlyModelType extends ModelType{}class EncoderDecoderModelType extends ModelType{}class Seq2SeqModelType extends EncoderDecoderModelType{}class DecoderOnlyModelType extends ModelType{}const MODEL_TYPE_MAPPING=new Map,MODEL_CLASS_MAPPING=new Map;async function forward(y,n){return MODEL_TYPE_MAPPING.get(y.constructor.name)===DecoderOnlyModelType?await decoderForward(y,n):await encoderForward(y,n)}async function constructSession(y,n,a){let u=`onnx/${n}${a.quantized?"_quantized":""}.onnx`,c=await getModelFile(y,u,!0,a);try{return await InferenceSession.create(c,{executionProviders})}catch(p){if(executionProviders.length===1&&executionProviders[0]==="wasm")throw p;return console.warn(p),console.warn("Something went wrong during model construction (most likely a missing operation). Using `wasm` as a fallback. "),await InferenceSession.create(c,{executionProviders:["wasm"]})}}async function validateInputs(y,n){const a={},u=[];for(let s of y.inputNames)n[s]===void 0?u.push(s):a[s]=n[s];if(u.length>0)throw new Error(`An error occurred during model execution: "Missing the following inputs: ${u.join(", ")}.`);const c=Object.keys(n).length,p=y.inputNames.length;if(c>p){let s=Object.keys(n).filter(h=>!y.inputNames.includes(h));console.warn(`WARNING: Too many inputs were provided (${c} > ${p}). The following inputs will be ignored: "${s.join(", ")}".`)}return a}async function sessionRun(y,n){const a=await validateInputs(y,n);try{let u=await y.run(a);return u=replaceTensors(u),u}catch(u){throw console.error(`An error occurred during model execution: "${u}".`),console.error("Inputs given to model:",a),u}}function replaceTensors(y){for(let n in y)y[n]instanceof ONNXTensor?y[n]=new Tensor(y[n]):typeof y[n]=="object"&&replaceTensors(y[n]);return y}function toI64Tensor(y){if(y instanceof Tensor)return y;if(y.length===0)throw Error("items must be non-empty");if(Array.isArray(y[0])){if(y.some(n=>n.length!==y[0].length))throw Error("Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' and/or 'truncation=True' to have batched tensors with the same length.");return new Tensor("int64",BigInt64Array.from(y.flat().map(n=>BigInt(n))),[y.length,y[0].length])}else return new Tensor("int64",BigInt64Array.from(y.map(n=>BigInt(n))),[1,y.length])}function prepareAttentionMask(y,n){let a=y.config.pad_token_id??null,u=y.config.eos_token_id??null;isIntegralNumber(u)&&(u=[u]);let c=n.indexOf(a)!==-1,p=u===null||!u.includes(a);if(c&&p){let s=BigInt64Array.from(n.data.map(h=>h!=a));return new Tensor("int64",s,n.dims)}else return new Tensor("int64",new BigInt64Array(n.data.length).fill(1n),n.dims)}function boolTensor(y){return new Tensor("bool",[y],[1])}async function seq2seqForward(y,n,{add_decoder_pkv:a=!0}={}){let{encoder_outputs:u,past_key_values:c}=n;u||(u=(await encoderForward(y,n)).last_hidden_state);let p={input_ids:n.decoder_input_ids,encoder_hidden_states:u,use_cache_branch:boolTensor(c!==null)};y.decoder_merged_session.inputNames.includes("encoder_attention_mask")&&(p.encoder_attention_mask=n.attention_mask),y.addPastKeyValues(p,c,a);const s=await sessionRun(y.decoder_merged_session,p);let h=s.logits;return c=y.getPastKeyValues(s,c),new Seq2SeqLMOutput({logits:h,past_key_values:c,encoder_outputs:u})}function seq2seqStartBeams(y,n,a,u=!0){let 
c=[],p=0,s=y.config.decoder_start_token_id;Array.isArray(s)||(s=[s]);for(let h of n){h.dims=[1,...h.dims];let f={inputs:h,encoder_outputs:null,past_key_values:null,output_token_ids:s,done:!1,score:0,id:p++};u&&(f.attention_mask=prepareAttentionMask(y,h)),c.push(f)}return c}async function seq2seqRunBeam(y,n,{input_name:a="input_ids"}={}){let u={[a]:n.inputs,decoder_input_ids:toI64Tensor(n.output_token_ids.slice(-1)),encoder_outputs:n.encoder_outputs,past_key_values:n.past_key_values};n.attention_mask&&(u.attention_mask=n.attention_mask);let c=await y.forward(u);return n.past_key_values=c.past_key_values,n.encoder_outputs=c.encoder_outputs,c}async function encoderForward(y,n){let a={};for(let u of y.session.inputNames)a[u]=n[u];return await sessionRun(y.session,a)}async function decoderForward(y,n){let a=n.past_key_values,u={input_ids:n.input_ids,attention_mask:n.attention_mask??prepareAttentionMask(y,n.input_ids),use_cache_branch:boolTensor(a!==null)};y.addPastKeyValues(u,a);let c=await sessionRun(y.session,u),p=c.logits;return a=y.getPastKeyValues(c,a),{logits:p,past_key_values:a}}function decoderStartBeams(y,n,a,u){let c=[],p=0;for(let s of n){let h=s.tolist().map(Number);s.dims=[1,...s.dims];let f;u?(f=u[p],f.dims=[1,...f.dims]):f=prepareAttentionMask(y,s);let l={input:s,model_input_ids:s,attention_mask:f,past_key_values:null,output_token_ids:h,num_output_tokens:a,done:!1,score:0,id:p++};c.push(l)}return c}async function decoderRunBeam(y,n){let a=new BigInt64Array(n.output_token_ids.length).fill(1n),u={input_ids:n.model_input_ids,attention_mask:new Tensor("int64",a,[1,a.length]),past_key_values:n.past_key_values},c=await y.forward(u);return n.past_key_values=c.past_key_values,c}function decoderUpdatebeam(y,n){y.output_token_ids=[...y.output_token_ids,n],y.model_input_ids=new Tensor("int64",[BigInt(n)],[1,1])}class PreTrainedModel extends Callable{constructor(n,a){super(),this.config=n,this.session=a}async dispose(){let n=[];for(let a of Object.keys(this)){let u=this[a];u instanceof InferenceSession&&n.push(u.handler.dispose())}return await Promise.all(n)}static async from_pretrained(n,{quantized:a=!0,progress_callback:u=null,config:c=null,cache_dir:p=null,local_files_only:s=!1,revision:h="main"}={}){let f={quantized:a,progress_callback:u,config:c,cache_dir:p,local_files_only:s,revision:h},l=MODEL_TYPE_MAPPING.get(this.name),o;if(l===DecoderOnlyModelType)o=await Promise.all([AutoConfig.from_pretrained(n,f),constructSession(n,"decoder_model_merged",f)]);else if(l===Seq2SeqModelType)o=await Promise.all([AutoConfig.from_pretrained(n,f),constructSession(n,"encoder_model",f),constructSession(n,"decoder_model_merged",f),getModelJSON(n,"generation_config.json",!1,f)]);else if(l===EncoderDecoderModelType)o=await Promise.all([AutoConfig.from_pretrained(n,f),constructSession(n,"encoder_model",f),constructSession(n,"decoder_model_merged",f)]);else if(l===EncoderOnlyModelType)o=await Promise.all([AutoConfig.from_pretrained(n,f),constructSession(n,"model",f)]);else throw console.warn("Malformed class definition.",this),Error(`Unable to load model: ${n}. 
Please report this bug at https://github.com/xenova/transformers.js/issues/new/choose.`);return new this(...o)}async _call(n){return await this.forward(n)}async forward(n){return await forward(this,n)}_get_logits_processor(n,a,u=null){const c=new LogitsProcessorList;if(n.repetition_penalty!==null&&n.repetition_penalty!==1&&c.push(new RepetitionPenaltyLogitsProcessor(n.repetition_penalty)),n.no_repeat_ngram_size!==null&&n.no_repeat_ngram_size>0&&c.push(new NoRepeatNGramLogitsProcessor(n.no_repeat_ngram_size)),n.forced_bos_token_id!==null&&c.push(new ForcedBOSTokenLogitsProcessor(n.forced_bos_token_id)),n.forced_eos_token_id!==null&&c.push(new ForcedEOSTokenLogitsProcessor(n.max_length,n.forced_eos_token_id)),n.begin_suppress_tokens!==null){let p=a>1||n.forced_bos_token_id===null?a:a+1;n.forced_decoder_ids!==null&&(p+=n.forced_decoder_ids[n.forced_decoder_ids.length-1][0]),c.push(new SuppressTokensAtBeginLogitsProcessor(n.begin_suppress_tokens,p))}return n.forced_decoder_ids!==null&&c.push(new ForceTokensLogitsProcessor(n.forced_decoder_ids)),u!==null&&c.extend(u),c}_get_generation_config(n){let a=new GenerationConfig;return"generation_config"in this&&Object.assign(a,this.generation_config),n!==null&&Object.assign(a,n),a}async generate(n,a=null,u=null,{inputs_attention_mask:c=null}={}){if(!(n instanceof Tensor)&&!isTypedArray(n)&&!Array.isArray(n))throw Error(`\`inputs\` must be a Tensor, TypedArray, or Array, but is "${n.constructor.name}".`);let p;if(this.config.is_encoder_decoder)p=0;else if(p=n instanceof Tensor?n.dims[0]:n.length,p===0)throw Error("Must supply a non-empty array of input token ids.");a=this._get_generation_config(a),u=u??new LogitsProcessorList,u=this._get_logits_processor(a,p,u);let s=1;const h=s+(a.max_new_tokens??1/0),f=Number.isInteger(a.max_length)&&(a.max_new_tokens??null)===null;let l=Sampler.getSampler(a),o=this.getStartBeams(n,s,c);for(;o.some(t=>!t.done)&&s<h;){let t=[];for(let e of o){if(e.done){t.push(e);continue}if(f&&e.output_token_ids.length>=a.max_length){e.done=!0,t.push(e);continue}let i=(await this.runBeam(e)).logits.slice(null,-1,null);u(e.output_token_ids,i);let d=l(i);for(let[g,m]of d){let b={...e};this.updateBeam(b,g),b.score+=m,g===this.config.eos_token_id&&(b.done=!0),t.push(b)}}++s,t=this.groupBeams(t).map(e=>e.sort((r,i)=>i.score-r.score).slice(0,a.num_beams)),o=t.flat(),a.callback_function&&a.callback_function(o)}return this.groupBeams(o).map(t=>a.num_return_sequences>1?t.slice(0,a.num_return_sequences).map(e=>e.output_token_ids):[t[0].output_token_ids]).flat()}groupBeams(n){const a=Object.create(null);for(const u of n)a[u.id]===void 0?a[u.id]=[u]:a[u.id].push(u);return Object.values(a)}getPastKeyValues(n,a){const u=Object.create(null);for(const c in n)if(c.startsWith("present")){let p=c.replace("present","past_key_values");a!==null&&c.includes("encoder")?u[p]=a[p]:u[p]=n[c]}return u}addPastKeyValues(n,a,u=!1){if(a)Object.assign(n,a);else if(u){let c=[1,this.num_encoder_heads,0,this.encoder_dim_kv];for(let s=0;s<this.num_encoder_layers;++s)n[`past_key_values.${s}.encoder.key`]=new Tensor("float32",[],c),n[`past_key_values.${s}.encoder.value`]=new Tensor("float32",[],c);let p=[1,this.num_decoder_heads,0,this.decoder_dim_kv];for(let s=0;s<this.num_decoder_layers;++s)n[`past_key_values.${s}.decoder.key`]=new Tensor("float32",[],p),n[`past_key_values.${s}.decoder.value`]=new Tensor("float32",[],p)}else{let c=[1,this.num_heads,0,this.dim_kv];for(let p=0;p<this.num_layers;++p)n[`past_key_values.${p}.key`]=new 
Tensor("float32",[],c),n[`past_key_values.${p}.value`]=new Tensor("float32",[],c)}}}class ModelOutput{}class BertPreTrainedModel extends PreTrainedModel{}class BertModel extends BertPreTrainedModel{}class BertForMaskedLM extends BertPreTrainedModel{async _call(n){return new MaskedLMOutput(await super._call(n))}}class BertForSequenceClassification extends BertPreTrainedModel{async _call(n){return new SequenceClassifierOutput(await super._call(n))}}class BertForTokenClassification extends BertPreTrainedModel{async _call(n){return new TokenClassifierOutput(await super._call(n))}}class BertForQuestionAnswering extends BertPreTrainedModel{async _call(n){return new QuestionAnsweringModelOutput(await super._call(n))}}class DistilBertPreTrainedModel extends PreTrainedModel{}class DistilBertModel extends DistilBertPreTrainedModel{}class DistilBertForSequenceClassification extends DistilBertPreTrainedModel{async _call(n){return new SequenceClassifierOutput(await super._call(n))}}class DistilBertForTokenClassification extends DistilBertPreTrainedModel{async _call(n){return new TokenClassifierOutput(await super._call(n))}}class DistilBertForQuestionAnswering extends DistilBertPreTrainedModel{async _call(n){return new QuestionAnsweringModelOutput(await super._call(n))}}class DistilBertForMaskedLM extends DistilBertPreTrainedModel{async _call(n){return new MaskedLMOutput(await super._call(n))}}class MobileBertPreTrainedModel extends PreTrainedModel{}class MobileBertModel extends MobileBertPreTrainedModel{}class MobileBertForMaskedLM extends MobileBertPreTrainedModel{async _call(n){return new MaskedLMOutput(await super._call(n))}}class MobileBertForSequenceClassification extends MobileBertPreTrainedModel{async _call(n){return new SequenceClassifierOutput(await super._call(n))}}class MobileBertForQuestionAnswering extends MobileBertPreTrainedModel{async _call(n){return new QuestionAnsweringModelOutput(await super._call(n))}}class SqueezeBertPreTrainedModel extends PreTrainedModel{}class SqueezeBertModel extends SqueezeBertPreTrainedModel{}class SqueezeBertForMaskedLM extends SqueezeBertPreTrainedModel{async _call(n){return new MaskedLMOutput(await super._call(n))}}class SqueezeBertForSequenceClassification extends SqueezeBertPreTrainedModel{async _call(n){return new SequenceClassifierOutput(await super._call(n))}}class SqueezeBertForQuestionAnswering extends SqueezeBertPreTrainedModel{async _call(n){return new QuestionAnsweringModelOutput(await super._call(n))}}class AlbertPreTrainedModel extends PreTrainedModel{}class AlbertModel extends AlbertPreTrainedModel{}class AlbertForSequenceClassification extends AlbertPreTrainedModel{async _call(n){return new SequenceClassifierOutput(await super._call(n))}}class AlbertForQuestionAnswering extends AlbertPreTrainedModel{async _call(n){return new QuestionAnsweringModelOutput(await super._call(n))}}class AlbertForMaskedLM extends AlbertPreTrainedModel{async _call(n){return new MaskedLMOutput(await super._call(n))}}class T5PreTrainedModel extends PreTrainedModel{}class T5Model extends T5PreTrainedModel{async generate(...n){throw Error("The current model class (T5Model) is not compatible with `.generate()`, as it doesn't have a language model head. 
Please use one of the following classes instead: {'T5ForConditionalGeneration'}")}}class T5ForConditionalGeneration extends T5PreTrainedModel{constructor(n,a,u,c){super(n,a),this.decoder_merged_session=u,this.generation_config=c,this.num_decoder_layers=this.config.num_decoder_layers,this.num_decoder_heads=this.config.num_heads,this.decoder_dim_kv=this.config.d_kv,this.num_encoder_layers=this.config.num_layers,this.num_encoder_heads=this.config.num_heads,this.encoder_dim_kv=this.config.d_kv}getStartBeams(n,a,...u){return seq2seqStartBeams(this,n)}async runBeam(n){return await seq2seqRunBeam(this,n)}updateBeam(n,a){n.output_token_ids=[...n.output_token_ids,a]}async forward(n){return await seq2seqForward(this,n)}}class MT5PreTrainedModel extends PreTrainedModel{}class MT5Model extends MT5PreTrainedModel{async generate(...n){throw Error("The current model class (MT5Model) is not compatible with `.generate()`, as it doesn't have a language model head. Please use one of the following classes instead: {'MT5ForConditionalGeneration'}")}}class MT5ForConditionalGeneration extends MT5PreTrainedModel{constructor(n,a,u,c){super(n,a),this.decoder_merged_session=u,this.generation_config=c,this.num_decoder_layers=this.config.num_decoder_layers,this.num_decoder_heads=this.config.num_heads,this.decoder_dim_kv=this.config.d_kv,this.num_encoder_layers=this.config.num_layers,this.num_encoder_heads=this.config.num_heads,this.encoder_dim_kv=this.config.d_kv}getStartBeams(n,a,...u){return seq2seqStartBeams(this,n)}async runBeam(n){return await seq2seqRunBeam(this,n)}updateBeam(n,a){n.output_token_ids=[...n.output_token_ids,a]}async forward(n){return await seq2seqForward(this,n)}}class BartPretrainedModel extends PreTrainedModel{}class BartModel extends BartPretrainedModel{async generate(...n){throw Error("The current model class (BartModel) is not compatible with `.generate()`, as it doesn't have a language model head. 
Please use one of the following classes instead: {'BartForConditionalGeneration'}")}}class BartForConditionalGeneration extends BartPretrainedModel{constructor(n,a,u,c){super(n,a),this.decoder_merged_session=u,this.generation_config=c,this.num_decoder_layers=this.config.decoder_layers,this.num_decoder_heads=this.config.decoder_attention_heads,this.decoder_dim_kv=this.config.d_model/this.num_decoder_heads,this.num_encoder_layers=this.config.encoder_layers,this.num_encoder_heads=this.config.encoder_attention_heads,this.encoder_dim_kv=this.config.d_model/this.num_encoder_heads}getStartBeams(n,a,...u){return seq2seqStartBeams(this,n)}async runBeam(n){return await seq2seqRunBeam(this,n)}updateBeam(n,a){n.output_token_ids=[...n.output_token_ids,a]}async forward(n){return await seq2seqForward(this,n)}}class BartForSequenceClassification extends BartPretrainedModel{async _call(n){return new SequenceClassifierOutput(await super._call(n))}}class RobertaPreTrainedModel extends PreTrainedModel{}class RobertaModel extends RobertaPreTrainedModel{}class RobertaForMaskedLM extends RobertaPreTrainedModel{async _call(n){return new MaskedLMOutput(await super._call(n))}}class RobertaForSequenceClassification extends RobertaPreTrainedModel{async _call(n){return new SequenceClassifierOutput(await super._call(n))}}class RobertaForTokenClassification extends RobertaPreTrainedModel{async _call(n){return new TokenClassifierOutput(await super._call(n))}}class RobertaForQuestionAnswering extends RobertaPreTrainedModel{async _call(n){return new QuestionAnsweringModelOutput(await super._call(n))}}class XLMRobertaPreTrainedModel extends PreTrainedModel{}class XLMRobertaModel extends XLMRobertaPreTrainedModel{}class XLMRobertaForMaskedLM extends XLMRobertaPreTrainedModel{async _call(n){return new MaskedLMOutput(await super._call(n))}}class XLMRobertaForSequenceClassification extends XLMRobertaPreTrainedModel{async _call(n){return new SequenceClassifierOutput(await super._call(n))}}class XLMRobertaForTokenClassification extends XLMRobertaPreTrainedModel{async _call(n){return new TokenClassifierOutput(await super._call(n))}}class XLMRobertaForQuestionAnswering extends XLMRobertaPreTrainedModel{async _call(n){return new QuestionAnsweringModelOutput(await super._call(n))}}class WhisperPreTrainedModel extends PreTrainedModel{}class WhisperModel extends WhisperPreTrainedModel{async generate(...n){throw Error("The current model class (WhisperModel) is not compatible with `.generate()`, as it doesn't have a language model head. 
Please use one of the following classes instead: {'WhisperForConditionalGeneration'}")}}class WhisperForConditionalGeneration extends WhisperPreTrainedModel{constructor(n,a,u,c){super(n,a),this.decoder_merged_session=u,this.generation_config=c,this.num_decoder_layers=this.config.decoder_layers,this.num_decoder_heads=this.config.decoder_attention_heads,this.decoder_dim_kv=this.config.d_model/this.num_decoder_heads,this.num_encoder_layers=this.config.encoder_layers,this.num_encoder_heads=this.config.encoder_attention_heads,this.encoder_dim_kv=this.config.d_model/this.num_encoder_heads}async generate(n,a=null,u=null){return a=this._get_generation_config(a),a.return_timestamps??(a.return_timestamps=!1),a.return_timestamps&&(u=[new WhisperTimeStampLogitsProcessor(a)]),super.generate(n,a,u)}getStartBeams(n,a,...u){return seq2seqStartBeams(this,n,a,!1)}async runBeam(n){return await seq2seqRunBeam(this,n,{input_name:"input_features"})}updateBeam(n,a){n.output_token_ids=[...n.output_token_ids,a]}async forward(n){return await seq2seqForward(this,n)}}class VisionEncoderDecoderModel extends PreTrainedModel{constructor(n,a,u){super(n,a),this.decoder_merged_session=u,this.num_layers=this.config.decoder.n_layer,this.num_heads=this.config.decoder.n_head,this.dim_kv=this.config.decoder.n_embd/this.num_heads}getStartBeams(n,a,...u){return seq2seqStartBeams(this,n)}async runBeam(n){return seq2seqRunBeam(this,n,{input_name:"pixel_values"})}updateBeam(n,a){n.output_token_ids=[...n.output_token_ids,a]}async forward(n){return await seq2seqForward(this,n,{add_decoder_pkv:!1})}}class CLIPPreTrainedModel extends PreTrainedModel{}class CLIPModel extends CLIPPreTrainedModel{}class GPT2PreTrainedModel extends PreTrainedModel{constructor(n,a){super(n,a),this.config.pad_token_id=this.config.eos_token_id,this.num_heads=this.config.n_head,this.num_layers=this.config.n_layer,this.dim_kv=this.config.n_embd/this.num_heads}}class GPT2Model extends GPT2PreTrainedModel{async generate(...n){throw Error("The current model class (GPT2Model) is not compatible with `.generate()`, as it doesn't have a language model head. Please use one of the following classes instead: {'GPT2LMHeadModel'}")}}class GPT2LMHeadModel extends GPT2PreTrainedModel{getStartBeams(n,a,u){return decoderStartBeams(this,n,a,u)}async runBeam(n){return await decoderRunBeam(this,n)}updateBeam(n,a){return decoderUpdatebeam(n,a)}async forward(n){return await decoderForward(this,n)}}class GPTNeoPreTrainedModel extends PreTrainedModel{constructor(n,a){super(n,a),this.config.pad_token_id=this.config.eos_token_id,this.num_heads=this.config.num_heads,this.num_layers=this.config.num_layers,this.dim_kv=this.config.hidden_size/this.num_heads}}class GPTNeoModel extends GPTNeoPreTrainedModel{async generate(...n){throw Error("The current model class (GPTNeoModel) is not compatible with `.generate()`, as it doesn't have a language model head. 
Please use one of the following classes instead: {'GPTNeoForCausalLM'}")}}class GPTNeoForCausalLM extends GPTNeoPreTrainedModel{getStartBeams(n,a,u){return decoderStartBeams(this,n,a,u)}async runBeam(n){return await decoderRunBeam(this,n)}updateBeam(n,a){return decoderUpdatebeam(n,a)}async forward(n){return await decoderForward(this,n)}}class CodeGenPreTrainedModel extends PreTrainedModel{constructor(n,a){super(n,a),this.config.pad_token_id=this.config.eos_token_id,this.num_heads=this.config.n_head,this.num_layers=this.config.n_layer,this.dim_kv=this.config.n_embd/this.num_heads}}class CodeGenModel extends CodeGenPreTrainedModel{async generate(...n){throw Error("The current model class (CodeGenModel) is not compatible with `.generate()`, as it doesn't have a language model head. Please use one of the following classes instead: {'CodeGenForCausalLM'}")}}class CodeGenForCausalLM extends CodeGenPreTrainedModel{getStartBeams(n,a,u){return decoderStartBeams(this,n,a,u)}async runBeam(n){return await decoderRunBeam(this,n)}updateBeam(n,a){return decoderUpdatebeam(n,a)}async forward(n){return await decoderForward(this,n)}}class ViTPreTrainedModel extends PreTrainedModel{}class ViTForImageClassification extends ViTPreTrainedModel{async _call(n){return new SequenceClassifierOutput(await super._call(n))}}class MobileViTPreTrainedModel extends PreTrainedModel{}class MobileViTForImageClassification extends MobileViTPreTrainedModel{async _call(n){return new SequenceClassifierOutput(await super._call(n))}}class DetrPreTrainedModel extends PreTrainedModel{}class DetrForObjectDetection extends DetrPreTrainedModel{async _call(n){return new DetrObjectDetectionOutput(await super._call(n))}}class DetrForSegmentation extends DetrPreTrainedModel{async _call(n){return new DetrSegmentationOutput(await super._call(n))}}class DetrObjectDetectionOutput extends ModelOutput{constructor({logits:n,pred_boxes:a}){super(),this.logits=n,this.pred_boxes=a}}class DetrSegmentationOutput extends ModelOutput{constructor({logits:n,pred_boxes:a,pred_masks:u}){super(),this.logits=n,this.pred_boxes=a,this.pred_masks=u}}class SamPreTrainedModel extends PreTrainedModel{}class SamModel extends SamPreTrainedModel{async _call(n){return new SamImageSegmentationOutput(await super._call(n))}}class SamImageSegmentationOutput extends ModelOutput{constructor({iou_scores:n,pred_masks:a}){super(),this.iou_scores=n,this.pred_masks=a}}class MarianPreTrainedModel extends PreTrainedModel{}class MarianModel extends MarianPreTrainedModel{async generate(...n){throw Error("The current model class (MarianModel) is not compatible with `.generate()`, as it doesn't have a language model head. 
Please use one of the following classes instead: {'MarianMTModel'}")}}class MarianMTModel extends MarianPreTrainedModel{constructor(n,a,u,c){super(n,a),this.decoder_merged_session=u,this.generation_config=c,this.num_decoder_layers=this.config.decoder_layers,this.num_decoder_heads=this.config.decoder_attention_heads,this.decoder_dim_kv=this.config.d_model/this.num_decoder_heads,this.num_encoder_layers=this.config.encoder_layers,this.num_encoder_heads=this.config.encoder_attention_heads,this.encoder_dim_kv=this.config.d_model/this.num_encoder_heads}getStartBeams(n,a,...u){return seq2seqStartBeams(this,n)}async runBeam(n){return await seq2seqRunBeam(this,n)}updateBeam(n,a){n.output_token_ids=[...n.output_token_ids,a]}async forward(n){return await seq2seqForward(this,n)}}class M2M100PreTrainedModel extends PreTrainedModel{}class M2M100Model extends M2M100PreTrainedModel{async generate(...n){throw Error("The current model class (M2M100Model) is not compatible with `.generate()`, as it doesn't have a language model head. Please use one of the following classes instead: {'M2M100ForConditionalGeneration'}")}}class M2M100ForConditionalGeneration extends M2M100PreTrainedModel{constructor(n,a,u,c){super(n,a),this.decoder_merged_session=u,this.generation_config=c,this.num_decoder_layers=this.config.decoder_layers,this.num_decoder_heads=this.config.decoder_attention_heads,this.decoder_dim_kv=this.config.d_model/this.num_decoder_heads,this.num_encoder_layers=this.config.encoder_layers,this.num_encoder_heads=this.config.encoder_attention_heads,this.encoder_dim_kv=this.config.d_model/this.num_encoder_heads}getStartBeams(n,a,...u){return seq2seqStartBeams(this,n)}async runBeam(n){return await seq2seqRunBeam(this,n)}updateBeam(n,a){n.output_token_ids=[...n.output_token_ids,a]}async forward(n){return await seq2seqForward(this,n)}}class PretrainedMixin{static async from_pretrained(n,{quantized:a=!0,progress_callback:u=null,config:c=null,cache_dir:p=null,local_files_only:s=!1,revision:h="main"}={}){let f={quantized:a,progress_callback:u,config:c,cache_dir:p,local_files_only:s,revision:h};if(c=await AutoConfig.from_pretrained(n,f),!this.MODEL_CLASS_MAPPINGS)throw new Error("`MODEL_CLASS_MAPPINGS` not implemented for this type of `AutoClass`: "+this.name);let l;for(let o of this.MODEL_CLASS_MAPPINGS)if(l=o.get(c.model_type),!!l)return await l.from_pretrained(n,f);if(this.BASE_IF_FAIL)return console.warn(`Unknown model class "${c.model_type}", attempting to construct from base class.`),await PreTrainedModel.from_pretrained(n,f);throw Error(`Unsupported model type: ${c.model_type}`)}}Se(PretrainedMixin,"MODEL_CLASS_MAPPINGS",null),Se(PretrainedMixin,"BASE_IF_FAIL",!1);const MODEL_MAPPING_NAMES_ENCODER_ONLY=new Map([["bert",BertModel],["albert",AlbertModel],["distilbert",DistilBertModel],["roberta",RobertaModel],["xlm-roberta",XLMRobertaModel],["clip",CLIPModel],["mobilebert",MobileBertModel],["squeezebert",SqueezeBertModel],["sam",SamModel]]),MODEL_MAPPING_NAMES_ENCODER_DECODER=new Map([["t5",T5Model],["mt5",MT5Model],["bart",BartModel],["marian",MarianModel],["whisper",WhisperModel],["m2m_100",M2M100Model]]),MODEL_MAPPING_NAMES_DECODER_ONLY=new Map([["gpt2",GPT2Model],["gpt_neo",GPTNeoModel],["codegen",CodeGenModel]]),MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING_NAMES=new 
Map([["bert",BertForSequenceClassification],["albert",AlbertForSequenceClassification],["distilbert",DistilBertForSequenceClassification],["roberta",RobertaForSequenceClassification],["xlm-roberta",XLMRobertaForSequenceClassification],["bart",BartForSequenceClassification],["mobilebert",MobileBertForSequenceClassification],["squeezebert",SqueezeBertForSequenceClassification]]),MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING_NAMES=new Map([["bert",BertForTokenClassification],["distilbert",DistilBertForTokenClassification],["roberta",RobertaForTokenClassification],["xlm-roberta",XLMRobertaForTokenClassification]]),MODEL_FOR_SEQ_2_SEQ_MAPPING_NAMES=new Map([["t5",T5ForConditionalGeneration],["mt5",MT5ForConditionalGeneration],["bart",BartForConditionalGeneration],["whisper",WhisperForConditionalGeneration],["marian",MarianMTModel],["m2m_100",M2M100ForConditionalGeneration]]),MODEL_WITH_LM_HEAD_MAPPING_NAMES=new Map([["gpt2",GPT2LMHeadModel],["gpt_neo",GPTNeoForCausalLM],["codegen",CodeGenForCausalLM]]),MODEL_FOR_MASKED_LM_MAPPING_NAMES=new Map([["bert",BertForMaskedLM],["albert",AlbertForMaskedLM],["distilbert",DistilBertForMaskedLM],["roberta",RobertaForMaskedLM],["xlm-roberta",XLMRobertaForMaskedLM],["mobilebert",MobileBertForMaskedLM],["squeezebert",SqueezeBertForMaskedLM]]),MODEL_FOR_QUESTION_ANSWERING_MAPPING_NAMES=new Map([["bert",BertForQuestionAnswering],["albert",AlbertForQuestionAnswering],["distilbert",DistilBertForQuestionAnswering],["roberta",RobertaForQuestionAnswering],["xlm-roberta",XLMRobertaForQuestionAnswering],["mobilebert",MobileBertForQuestionAnswering],["squeezebert",SqueezeBertForQuestionAnswering]]),MODEL_FOR_VISION_2_SEQ_MAPPING_NAMES=new Map([["vision-encoder-decoder",VisionEncoderDecoderModel]]),MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING_NAMES=new Map([["vit",ViTForImageClassification],["mobilevit",MobileViTForImageClassification]]),MODEL_FOR_OBJECT_DETECTION_MAPPING_NAMES=new Map([["detr",DetrForObjectDetection]]),MODEL_FOR_IMAGE_SEGMENTATION_MAPPING_NAMES=new Map([["detr",DetrForSegmentation]]),MODEL_FOR_MASK_GENERATION_MAPPING_NAMES=new Map([["sam",SamModel]]),MODEL_CLASS_TYPE_MAPPING=[[MODEL_MAPPING_NAMES_ENCODER_ONLY,EncoderOnlyModelType],[MODEL_MAPPING_NAMES_ENCODER_DECODER,EncoderDecoderModelType],[MODEL_MAPPING_NAMES_DECODER_ONLY,DecoderOnlyModelType],[MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING_NAMES,EncoderOnlyModelType],[MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING_NAMES,EncoderOnlyModelType],[MODEL_FOR_SEQ_2_SEQ_MAPPING_NAMES,Seq2SeqModelType],[MODEL_WITH_LM_HEAD_MAPPING_NAMES,DecoderOnlyModelType],[MODEL_FOR_MASKED_LM_MAPPING_NAMES,EncoderOnlyModelType],[MODEL_FOR_QUESTION_ANSWERING_MAPPING_NAMES,EncoderOnlyModelType],[MODEL_FOR_VISION_2_SEQ_MAPPING_NAMES,EncoderDecoderModelType],[MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING_NAMES,EncoderOnlyModelType],[MODEL_FOR_IMAGE_SEGMENTATION_MAPPING_NAMES,EncoderOnlyModelType],[MODEL_FOR_OBJECT_DETECTION_MAPPING_NAMES,EncoderOnlyModelType],[MODEL_FOR_MASK_GENERATION_MAPPING_NAMES,EncoderOnlyModelType]];for(let[y,n]of MODEL_CLASS_TYPE_MAPPING)for(let[a,u]of y.entries())MODEL_TYPE_MAPPING.set(u.name,n),MODEL_CLASS_MAPPING.set(u.name,a);class AutoModel extends PretrainedMixin{}Se(AutoModel,"MODEL_CLASS_MAPPINGS",[MODEL_MAPPING_NAMES_ENCODER_ONLY,MODEL_MAPPING_NAMES_ENCODER_DECODER,MODEL_MAPPING_NAMES_DECODER_ONLY]),Se(AutoModel,"BASE_IF_FAIL",!0);class AutoModelForSequenceClassification extends PretrainedMixin{}Se(AutoModelForSequenceClassification,"MODEL_CLASS_MAPPINGS",[MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING_NAMES]);class 
AutoModelForTokenClassification extends PretrainedMixin{}Se(AutoModelForTokenClassification,"MODEL_CLASS_MAPPINGS",[MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING_NAMES]);class AutoModelForSeq2SeqLM extends PretrainedMixin{}Se(AutoModelForSeq2SeqLM,"MODEL_CLASS_MAPPINGS",[MODEL_FOR_SEQ_2_SEQ_MAPPING_NAMES]);class AutoModelForCausalLM extends PretrainedMixin{}Se(AutoModelForCausalLM,"MODEL_CLASS_MAPPINGS",[MODEL_WITH_LM_HEAD_MAPPING_NAMES]);class AutoModelForMaskedLM extends PretrainedMixin{}Se(AutoModelForMaskedLM,"MODEL_CLASS_MAPPINGS",[MODEL_FOR_MASKED_LM_MAPPING_NAMES]);class AutoModelForQuestionAnswering extends PretrainedMixin{}Se(AutoModelForQuestionAnswering,"MODEL_CLASS_MAPPINGS",[MODEL_FOR_QUESTION_ANSWERING_MAPPING_NAMES]);class AutoModelForVision2Seq extends PretrainedMixin{}Se(AutoModelForVision2Seq,"MODEL_CLASS_MAPPINGS",[MODEL_FOR_VISION_2_SEQ_MAPPING_NAMES]);class AutoModelForImageClassification extends PretrainedMixin{}Se(AutoModelForImageClassification,"MODEL_CLASS_MAPPINGS",[MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING_NAMES]);class AutoModelForImageSegmentation extends PretrainedMixin{}Se(AutoModelForImageSegmentation,"MODEL_CLASS_MAPPINGS",[MODEL_FOR_IMAGE_SEGMENTATION_MAPPING_NAMES]);class AutoModelForObjectDetection extends PretrainedMixin{}Se(AutoModelForObjectDetection,"MODEL_CLASS_MAPPINGS",[MODEL_FOR_OBJECT_DETECTION_MAPPING_NAMES]);class Seq2SeqLMOutput extends ModelOutput{constructor({logits:n,past_key_values:a,encoder_outputs:u}){super(),this.logits=n,this.past_key_values=a,this.encoder_outputs=u}}class SequenceClassifierOutput extends ModelOutput{constructor({logits:n}){super(),this.logits=n}}class TokenClassifierOutput extends ModelOutput{constructor({logits:n}){super(),this.logits=n}}class MaskedLMOutput extends ModelOutput{constructor({logits:n}){super(),this.logits=n}}class QuestionAnsweringModelOutput extends ModelOutput{constructor({start_logits:n,end_logits:a}){super(),this.start_logits=n,this.end_logits=a}}const BROWSER_ENV=typeof self<"u";let createCanvasFunction,ImageDataClass,loadImageFunction;if(BROWSER_ENV)createCanvasFunction=(y,n)=>{if(!self.OffscreenCanvas)throw new Error("OffscreenCanvas not supported by this browser.");return new self.OffscreenCanvas(y,n)},loadImageFunction=self.createImageBitmap,ImageDataClass=self.ImageData;else if(fs)loadImageFunction=async y=>{const a=(await y.metadata()).channels;let{data:u,info:c}=await y.raw().toBuffer({resolveWithObject:!0});const p=new RawImage(new Uint8ClampedArray(u),c.width,c.height,c.channels);return a!==void 0&&a!==c.channels&&p.convert(a),p};else throw new Error("Unable to load image processing library.");const RESAMPLING_MAPPING={0:"nearest",1:"lanczos",2:"bilinear",3:"bicubic",4:"box",5:"hamming"};class RawImage{constructor(n,a,u,c){this._update(n,a,u,c)}static async read(n){if(n instanceof RawImage)return n;if(isString(n)||n instanceof URL)return await this.fromURL(n);throw new Error(`Unsupported input type: ${typeof n}`)}static async fromURL(n){let u=await(await getFile(n)).blob();return this.fromBlob(u)}static async fromBlob(n){if(BROWSER_ENV){let a=await loadImageFunction(n);const u=createCanvasFunction(a.width,a.height).getContext("2d");return u.drawImage(a,0,0),new this(u.getImageData(0,0,a.width,a.height).data,a.width,a.height,4)}else{let a=fs(await n.arrayBuffer());return await loadImageFunction(a)}}grayscale(){if(this.channels===1)return this;let n=new Uint8ClampedArray(this.width*this.height*1);switch(this.channels){case 3:case 4:for(let a=0,u=0;a<this.data.length;a+=this.channels){const 
c=this.data[a],p=this.data[a+1],s=this.data[a+2];n[u++]=Math.round(.2989*c+.587*p+.114*s)}break;default:throw new Error(`Conversion failed due to unsupported number of channels: ${this.channels}`)}return this._update(n,this.width,this.height,1)}rgb(){if(this.channels===3)return this;let n=new Uint8ClampedArray(this.width*this.height*3);switch(this.channels){case 1:for(let a=0,u=0;a<this.data.length;++a)n[u++]=this.data[a],n[u++]=this.data[a],n[u++]=this.data[a];break;case 4:for(let a=0,u=0;a<this.data.length;a+=4)n[u++]=this.data[a],n[u++]=this.data[a+1],n[u++]=this.data[a+2];break;default:throw new Error(`Conversion failed due to unsupported number of channels: ${this.channels}`)}return this._update(n,this.width,this.height,3)}rgba(){if(this.channels===4)return this;let n=new Uint8ClampedArray(this.width*this.height*4);switch(this.channels){case 1:for(let a=0,u=0;a<this.data.length;++a)n[u++]=this.data[a],n[u++]=this.data[a],n[u++]=this.data[a],n[u++]=255;break;case 3:for(let a=0,u=0;a<this.data.length;a+=3)n[u++]=this.data[a],n[u++]=this.data[a+1],n[u++]=this.data[a+2],n[u++]=255;break;default:throw new Error(`Conversion failed due to unsupported number of channels: ${this.channels}`)}return this._update(n,this.width,this.height,4)}async resize(n,a,{resample:u=2}={}){let c=RESAMPLING_MAPPING[u]??u;if(BROWSER_ENV){let p=this.channels,s=this.toCanvas();const h=createCanvasFunction(n,a).getContext("2d");return h.drawImage(s,0,0,n,a),new RawImage(h.getImageData(0,0,n,a).data,n,a,4).convert(p)}else{let p=fs(this.data,{raw:{width:this.width,height:this.height,channels:this.channels}});switch(c){case"box":case"hamming":(c==="box"||c==="hamming")&&(console.warn(`Resampling method ${c} is not yet supported. Using bilinear instead.`),c="bilinear");case"nearest":case"bilinear":case"bicubic":p=p.affine([n/this.width,0,0,a/this.height],{interpolator:c});break;case"lanczos":p=p.resize({width:n,height:a,fit:"fill",kernel:"lanczos3"});break;default:throw new Error(`Resampling method ${c} is not supported.`)}return await loadImageFunction(p)}}async pad([n,a,u,c]){if(n=Math.max(n,0),a=Math.max(a,0),u=Math.max(u,0),c=Math.max(c,0),n===0&&a===0&&u===0&&c===0)return this;if(BROWSER_ENV){let p=this.channels,s=this.toCanvas(),h=this.width+n+a,f=this.height+u+c;const l=createCanvasFunction(h,f).getContext("2d");return l.drawImage(s,0,0,this.width,this.height,n,u,h,f),new RawImage(l.getImageData(0,0,h,f).data,h,f,4).convert(p)}else{let p=fs(this.data,{raw:{width:this.width,height:this.height,channels:this.channels}}).extend({left:n,right:a,top:u,bottom:c});return await loadImageFunction(p)}}async center_crop(n,a){if(this.width===n&&this.height===a)return this;let u=(this.width-n)/2,c=(this.height-a)/2;if(BROWSER_ENV){let p=this.channels,s=this.toCanvas();const h=createCanvasFunction(n,a).getContext("2d");let f=0,l=0,o=0,t=0;return u>=0?f=u:o=-u,c>=0?l=c:t=-c,h.drawImage(s,f,l,n,a,o,t,n,a),new RawImage(h.getImageData(0,0,n,a).data,n,a,4).convert(p)}else{let p=fs(this.data,{raw:{width:this.width,height:this.height,channels:this.channels}});if(u>=0&&c>=0)p=p.extract({left:Math.floor(u),top:Math.floor(c),width:n,height:a});else if(u<=0&&c<=0){let s=Math.floor(-c),h=Math.floor(-u);p=p.extend({top:s,left:h,right:n-this.width-h,bottom:a-this.height-s})}else{let s=[0,0],h=0;c<0?(s[0]=Math.floor(-c),s[1]=a-this.height-s[0]):h=Math.floor(c);let f=[0,0],l=0;u<0?(f[0]=Math.floor(-u),f[1]=n-this.width-f[0]):l=Math.floor(u),p=p.extend({top:s[0],bottom:s[1],left:f[0],right:f[1]}).extract({left:l,top:h,width:n,height:a})}return 
await loadImageFunction(p)}}toCanvas(){let n=this.clone().rgba(),a=createCanvasFunction(n.width,n.height),u=new ImageDataClass(n.data,n.width,n.height);return a.getContext("2d").putImageData(u,0,0),a}_update(n,a,u,c=null){return this.data=n,this.width=a,this.height=u,c!==null&&(this.channels=c),this}clone(){return new RawImage(this.data.slice(),this.width,this.height,this.channels)}convert(n){if(this.channels===n)return this;switch(n){case 1:this.grayscale();break;case 3:this.rgb();break;case 4:this.rgba();break;default:throw new Error(`Conversion failed due to unsupported number of channels: ${this.channels}`)}return this}save(n,a="image/png"){if(!env.useFS)throw new Error("Unable to save the image because filesystem is disabled in this environment.");const c=this.toCanvas().toBuffer(a);fs.writeFileSync(n,c)}}async function read_audio(y,n){if(typeof AudioContext>"u")throw Error("Unable to load audio from path/URL since `AudioContext` is not available in your environment. Instead, audio data should be passed directly to the pipeline/processor. For more information and some example code, see https://huggingface.co/docs/transformers.js/tutorials/node-audio-processing.");const a=await(await getFile(y)).arrayBuffer(),u=new AudioContext({sampleRate:n});typeof n>"u"&&console.warn(`No sampling rate provided, using default of ${u.sampleRate}Hz.`);const c=await u.decodeAudioData(a);let p;if(c.numberOfChannels===2){const s=Math.sqrt(2);let h=c.getChannelData(0),f=c.getChannelData(1);p=new Float32Array(h.length);for(let l=0;l<c.length;++l)p[l]=s*(h[l]+f[l])/2}else p=c.getChannelData(0);return p}function getMelFilters(y,n,a=128){a=Math.floor(a);const u=Math.floor(1+n/2),c=new Array(a),p=rfftfreq(n,1/y),s=0,l=(45.245640471924965-s)/(a+1),o=0,t=200/3,e=new Array(a+2),r=1e3,i=(r-o)/t,d=Math.log(6.4)/27,g=new Array(e.length);for(let b=0;b<e.length;++b){const _=b*l+s;_>=i?e[b]=r*Math.exp(d*(_-i)):e[b]=o+t*_,g[b]=p.map(v=>e[b]-v)}const m=e.slice(1).map((b,_)=>1/(b-e[_]));for(let b=0;b<c.length;++b){c[b]=new Array(u);const _=m[b],v=m[b+1],w=g[b],S=g[b+2],A=2/(e[b+2]-e[b]);for(let O=0;O<c[b].length;++O){const x=-w[O]*_,I=S[O]*v;c[b][O]=Math.max(0,Math.min(x,I))*A}}return c}class FeatureExtractor extends Callable{constructor(n){super(),this.config=n}}class ImageFeatureExtractor extends FeatureExtractor{constructor(n){super(n),this.image_mean=this.config.image_mean,this.image_std=this.config.image_std,this.resample=this.config.resample??2,this.do_rescale=this.config.do_rescale??!0,this.rescale_factor=this.config.rescale_factor??1/255,this.do_normalize=this.config.do_normalize,this.do_resize=this.config.do_resize,this.size=this.config.size,this.do_center_crop=this.config.do_center_crop,this.crop_size=this.config.crop_size,this.do_convert_rgb=this.config.do_convert_rgb??!0,this.pad_size=this.config.pad_size,this.do_pad=(this.config.do_pad??!1)&&this.pad_size}async preprocess(n){this.do_convert_rgb&&(n=n.rgb());const a=n.width,u=n.height;if(this.do_resize){let l,o;if(Number.isInteger(this.size)?(l=this.size,o=this.config.max_size??l):(l=this.size.shortest_edge,o=this.size.longest_edge),l!==void 0||o!==void 0){const t=l===void 0?1:Math.max(l/a,l/u),e=a*t,r=u*t,i=o===void 0?1:Math.min(o/e,o/r);n=await n.resize(Math.floor(e*i),Math.floor(r*i),{resample:this.resample})}else if(this.size.width!==void 0&&this.size.height!==void 0)n=await n.resize(this.size.width,this.size.height,{resample:this.resample});else throw new Error(`Could not resize image due to unsupported \`this.size\` option in config: 
${JSON.stringify(this.size)}`)}if(this.do_center_crop){let l,o;Number.isInteger(this.crop_size)?(l=this.crop_size,o=this.crop_size):(l=this.crop_size.width,o=this.crop_size.height),n=await n.center_crop(l,o)}let c=[n.height,n.width];if(this.do_pad){let l=0,o=this.pad_size.width-n.width,t=0,e=this.pad_size.height-n.height;n=await n.pad([l,o,t,e])}const p=Float32Array.from(n.data);if(this.do_rescale)for(let l=0;l<p.length;++l)p[l]=this.rescale_factor*p[l];if(this.do_normalize){let l=this.image_mean;Array.isArray(this.image_mean)||(l=new Array(n.channels).fill(l));let o=this.image_std;if(Array.isArray(this.image_std)||(o=new Array(n.channels).fill(l)),l.length!==n.channels||o.length!==n.channels)throw new Error(`When set to arrays, the length of \`image_mean\` (${l.length}) and \`image_std\` (${o.length}) must match the number of channels in the image (${n.channels}).`);for(let t=0;t<p.length;t+=n.channels)for(let e=0;e<n.channels;++e)p[t+e]=(p[t+e]-this.image_mean[e])/this.image_std[e]}let s=[n.height,n.width,n.channels],h=new Tensor("float32",p,s),f=transpose(h,[2,0,1]);return{original_size:[u,a],reshaped_input_size:c,pixel_values:f}}async _call(n){Array.isArray(n)||(n=[n]);let a=await Promise.all(n.map(c=>this.preprocess(c)));return a.forEach(c=>c.pixel_values.dims=[1,...c.pixel_values.dims]),{pixel_values:cat(a.map(c=>c.pixel_values)),original_sizes:a.map(c=>c.original_size),reshaped_input_sizes:a.map(c=>c.reshaped_input_size)}}}class ViTFeatureExtractor extends ImageFeatureExtractor{}class MobileViTFeatureExtractor extends ImageFeatureExtractor{}class DetrFeatureExtractor extends ImageFeatureExtractor{async _call(n){let a=await super._call(n),u=[a.pixel_values.dims[0],64,64];return a.pixel_mask=new Tensor("int64",new BigInt64Array(u.reduce((c,p)=>c*p)).fill(1n),u),a}center_to_corners_format([n,a,u,c]){return[n-u/2,a-c/2,n+u/2,a+c/2]}post_process_object_detection(n,a=.5,u=null){const c=n.logits,p=n.pred_boxes,[s,h,f]=c.dims;if(u!==null&&u.length!==s)throw Error("Make sure that you pass in as many target sizes as the batch dimension of the logits");let l=[];for(let o=0;o<s;++o){let t=u!==null?u[o]:null,e={boxes:[],classes:[],scores:[]},r=c[o],i=p[o];for(let d=0;d<h;++d){let g=r[d],m=max(g.data)[1];if(m===f-1)continue;let _=softmax(g.data)[m];if(_>a){let v=i[d].data;v=this.center_to_corners_format(v),t!==null&&(v=v.map((w,S)=>w*t[(S+1)%2])),e.boxes.push(v),e.classes.push(m),e.scores.push(_)}}l.push(e)}return l}remove_low_and_no_objects(n,a,u,c){let p=[],s=[],h=[];for(let f=0;f<n.dims[0];++f){let l=n[f],o=a[f],t=max(l.data)[1];if(t===c)continue;let r=softmax(l.data)[t];r>u&&(p.push(o),s.push(r),h.push(t))}return[p,s,h]}check_segment_validity(n,a,u,c=.5,p=.8){let s=[],h=0,f=0;for(let o=0;o<n.length;++o)n[o]===u&&(s.push(o),++h),a[u].data[o]>=c&&++f;let l=h>0&&f>0;return l&&(l=h/f>p),[l,s]}compute_segments(n,a,u,c,p,s=null,h=null){let[f,l]=h??n[0].dims,o=new Tensor("int32",new Int32Array(f*l),[f,l]),t=[];if(h!==null)for(let d=0;d<n.length;++d)n[d]=interpolate(n[d],h,"bilinear",!1);let e=new Int32Array(n[0].data.length),r=new Float32Array(n[0].data.length);for(let d=0;d<n.length;++d){let g=a[d];for(let m=0;m<n[d].data.length;++m)n[d].data[m]*=g,n[d].data[m]>r[m]&&(e[m]=d,r[m]=n[d].data[m])}let i=0;for(let d=0;d<u.length;++d){let g=u[d],[m,b]=this.check_segment_validity(e,n,d,c,p);if(m){++i;for(let _ of b)o.data[_]=i;t.push({id:i,label_id:g,score:a[d]})}}return[o,t]}post_process_panoptic_segmentation(n,a=.5,u=.5,c=.8,p=null,s=null){p===null&&(console.warn("`label_ids_to_fuse` unset. 
No instance will be fused."),p=new Set);const h=n.logits,l=n.pred_masks.sigmoid();let[o,t,e]=h.dims;if(e-=1,s!==null&&s.length!==o)throw Error("Make sure that you pass in as many target sizes as the batch dimension of the logits");let r=[];for(let i=0;i<o;++i){let d=s!==null?s[i]:null,g=h[i],m=l[i],[b,_,v]=this.remove_low_and_no_objects(g,m,a,e);if(v.length===0){let[A,O]=d??m.dims.slice(-2),x=new Tensor("int32",new Int32Array(A*O).fill(-1),[A,O]);r.push({segmentation:x,segments_info:[]});continue}let[w,S]=this.compute_segments(b,_,v,u,c,p,d);r.push({segmentation:w,segments_info:S})}return r}post_process_instance_segmentation(){throw Error("Not implemented yet")}}class SamImageProcessor extends ImageFeatureExtractor{async _call(n,a){let{pixel_values:u,original_sizes:c,reshaped_input_sizes:p}=await super._call(n),s=calculateDimensions(a);if(s.length===3)s=[1,...s],a=[a];else if(s.length!==4)throw Error("The input_points must be a 4D tensor of shape `batch_size`, `point_batch_size`, `nb_points_per_image`, `2`.");for(let f=0;f<a.length;++f){let l=c[f],o=p[f],t=[o[0]/l[0],o[1]/l[1]];for(let e=0;e<a[f].length;++e)for(let r=0;r<a[f][e].length;++r)for(let i=0;i<a[f][e][r].length;++i)a[f][e][r][i]*=t[i]}let h=new Tensor("int64",BigInt64Array.from(a.flat(1/0).map(f=>BigInt(Math.round(f)))),s);return{pixel_values:u,original_sizes:c,reshaped_input_sizes:p,input_points:h}}post_process_masks(n,a,u,{mask_threshold:c=0,binarize:p=!0,pad_size:s=null}={}){let h=[];s=s??this.pad_size;let f=[s.height,s.width];for(let l=0;l<a.length;++l){let o=a[l],t=u[l],e=n[l],r=[];for(let d=0;d<e.dims[0];++d){let g=e[d],m=interpolate(g,f,"bilinear",!1);m=m.slice(null,[0,t[0]],[0,t[1]]),m=interpolate(e,o,"bilinear",!1),p&&(m=new Tensor("bool",Array.from(m.data).map(b=>b>c),m.dims)),m.dims=[1,...m.dims],r.push(m)}let i=cat(r);h.push(i)}return h}}class WhisperFeatureExtractor extends FeatureExtractor{constructor(n){var a;super(n),(a=this.config).mel_filters??(a.mel_filters=getMelFilters(this.config.sampling_rate,this.config.n_fft,this.config.feature_size))}calcOffset(n,a){return Math.abs((n+a)%(2*a)-a)}padReflect(n,a,u){const c=new Float32Array(n.length+a+u),p=n.length-1;for(let s=0;s<n.length;++s)c[a+s]=n[s];for(let s=1;s<=a;++s)c[a-s]=n[this.calcOffset(s,p)];for(let s=1;s<=u;++s)c[p+a+s]=n[this.calcOffset(p-s,p)];return c}stft(n,a){const u=this.config.n_fft,c=2*(u-1),p=2*(2*u-1),s=2**Math.ceil(Math.log2(p)),h=u+2,f=new Float32Array(h*n.length),l=new Float32Array(p),o=new Float32Array(s),t=new Float32Array(s),e=new Float32Array(s),r=new Float32Array(s),i=new Float32Array(s),d=new Float32Array(s),g=-2*Math.PI/u,m=Math.cos(g),b=Math.sin(g);for(let w=0;w<p>>1;++w){const S=(w+1-u)**2/2,A=Math.sqrt(m**2+b**2)**S,O=S*Math.atan2(b,m);let x=2*w;l[x]=A*Math.cos(O),l[x+1]=A*Math.sin(O),o[x]=l[x],o[x+1]=-l[x+1]}const _=l.subarray(c,p),v=new FFT(s>>1);v.transform(r,o);for(let w=0;w<n.length;++w){const S=n[w];for(let O=0;O<_.length;O+=2){const x=O+1,I=O>>1,N=S[I]*a[I];t[O]=N*_[O],t[x]=N*_[x]}v.transform(i,t);for(let O=0;O<r.length;O+=2){const x=O+1;e[O]=i[O]*r[O]-i[x]*r[x],e[x]=i[O]*r[x]+i[x]*r[O]}v.inverseTransform(d,e);const A=w*h;for(let O=0;O<h;O+=2){const x=d[O+c],I=d[O+c+1],N=_[O],B=_[O+1],L=A+O;f[L]=x*N-I*B,f[L+1]=x*B+I*N}}return{data:f,dims:[n.length,h]}}fram_wave(n,a=!0){const u=[],c=Math.floor((this.config.n_fft-1)/2)+1,p=n.length;for(let s=0;s<p+1;s+=this.config.hop_length){let h;if(a){let f=s>c?s-c:0,l=s<p-c?s+c:p;h=n.subarray(f,l),f===0?h=this.padReflect(h,-s+c,0):l===p&&(h=this.padReflect(h,0,s-p+c))}else{h=new 
Float32Array(this.config.n_fft);const f=n.subarray(s,s+this.config.n_fft);f.length<this.config.n_fft?(h.set(f),h.fill(0,f.length,this.config.n_fft)):h=f}u.push(h)}return u}hanning(n){if(n<1)return[];if(n===1)return[1];const a=n-1,u=new Float32Array(a);for(let c=0;c<a;++c){const p=2*c-n+1;u[c]=.5+.5*Math.cos(Math.PI*p/a)}return u}_extract_fbank_features(n){const a=new Float32Array(this.config.n_samples);a.set(n);const u=this.hanning(this.config.n_fft+1),c=this.fram_wave(a),p=this.stft(c,u),s=p.data,h=p.dims[0]-1,f=p.dims[1]>>1,l=new Float32Array(h*f);for(let m=0;m<h;++m)for(let b=0;b<f;++b){let _=m*f+b,v=_<<1,w=s[v]**2+s[v+1]**2;l[_]=w}const o=this.config.mel_filters,t=o.length,e=new Float32Array(t*h);let r=0;for(let m=0;m<t;++m){const b=o[m];for(let _=0;_<h;++_){let v=0;for(let w=0;w<f;++w)v+=b[w]*l[_*f+w];e[r++]=v}}const i=1e-10,d=new Float32Array(e.length);let g=0;for(let m=0;m<e.length;++m){const b=Math.max(i,e[m]),_=Math.log10(b);d[m]=_,g=Math.max(_,g)}for(let m=0;m<d.length;++m)d[m]=Math.max(d[m],g-8),d[m]=(d[m]+4)/4;return{data:d,dims:[t,h]}}async _call(n){n.length>this.config.n_samples&&console.warn("Attempting to extract features for audio longer than 30 seconds. If using a pipeline to extract transcript from a long audio clip, remember to specify `chunk_length_s` and/or `stride_length_s`.");let a=n.slice(0,this.config.n_samples),u=this._extract_fbank_features(a);return{input_features:new Tensor("float32",u.data,[1,...u.dims])}}}class Processor extends Callable{constructor(n){super(),this.feature_extractor=n}async _call(n){return await this.feature_extractor(n)}}class SamProcessor extends Processor{async _call(n,a){return await this.feature_extractor(n,a)}post_process_masks(...n){return this.feature_extractor.post_process_masks(...n)}}class WhisperProcessor extends Processor{async _call(n){return await this.feature_extractor(n)}}class AutoProcessor{static async from_pretrained(n,{progress_callback:a=null,config:u=null,cache_dir:c=null,local_files_only:p=!1,revision:s="main"}={}){let h=u??await getModelJSON(n,"preprocessor_config.json",!0,{progress_callback:a,config:u,cache_dir:c,local_files_only:p,revision:s}),f=h.feature_extractor_type??h.image_processor_type,l=this.FEATURE_EXTRACTOR_CLASS_MAPPING[f];if(!l)if(h.size!==void 0)console.warn("Feature extractor type not specified, assuming ImageFeatureExtractor due to size parameter in config."),l=ImageFeatureExtractor;else throw new Error(`Unknown Feature Extractor type: ${h.feature_extractor_type}`);let o=this.PROCESSOR_CLASS_MAPPING[h.processor_class]??Processor,t=new l(h);return new o(t)}}Se(AutoProcessor,"FEATURE_EXTRACTOR_CLASS_MAPPING",{WhisperFeatureExtractor,ViTFeatureExtractor,MobileViTFeatureExtractor,DetrFeatureExtractor,SamImageProcessor}),Se(AutoProcessor,"PROCESSOR_CLASS_MAPPING",{WhisperProcessor,SamProcessor});async function prepareImages(y){return Array.isArray(y)||(y=[y]),y=await Promise.all(y.map(n=>RawImage.read(n))),y}class Pipeline extends Callable{constructor(n,a,u){super(),this.task=n,this.tokenizer=a,this.model=u}async dispose(){await this.model.dispose()}async _call(n){let a=this.tokenizer(n,{padding:!0,truncation:!0}),u=await this.model(a);return[a,u]}}class TextClassificationPipeline extends Pipeline{async _call(n,{topk:a=1}={}){let[u,c]=await super._call(n),p=this.model.config.id2label,s=[];for(let h of c.logits){let l=getTopItems(softmax(h.data),a).map(function(o){return{label:p[o[0]],score:o[1]}});a===1?s.push(...l):s.push(l)}return Array.isArray(n)||a===1?s:s[0]}}class TokenClassificationPipeline 
extends Pipeline{async _call(n,{ignore_labels:a=["O"]}={}){let u=Array.isArray(n);u||(n=[n]);let c=this.tokenizer,[p,s]=await super._call(n),h=s.logits,f=this.model.config.id2label,l=[];for(let o=0;o<h.dims[0];++o){let t=p.input_ids[o],e=h[o],r=[];for(let i=0;i<e.dims[0];++i){let d=e[i],g=max(d.data)[1],m=f[g];if(a.includes(m))continue;let b=c.decode([t[i].item()],{skip_special_tokens:!0});if(b==="")continue;let _=softmax(d.data);r.push({entity:m,score:_[g],index:i,word:b,start:null,end:null})}l.push(r)}return u?l:l[0]}}class QuestionAnsweringPipeline extends Pipeline{async _call(n,a,{topk:u=1}={}){let c=this.tokenizer(n,{text_pair:a}),p=await this.model(c),s=[];for(let h=0;h<p.start_logits.dims[0];++h){let f=c.input_ids[h],l=f.indexOf(this.tokenizer.sep_token_id),o=Array.from(softmax(p.start_logits[h].data)).map((r,i)=>[r,i]).filter(r=>r[1]>l),t=Array.from(softmax(p.end_logits[h].data)).map((r,i)=>[r,i]).filter(r=>r[1]>l),e=product(o,t).filter(r=>r[0][1]<=r[1][1]).map(r=>[r[0][1],r[1][1],r[0][0]*r[1][0]]).sort((r,i)=>i[2]-r[2]);for(let r=0;r<Math.min(e.length,u);++r){let[i,d,g]=e[r],m=[...f].slice(i,d+1),b=this.tokenizer.decode(m,{skip_special_tokens:!0});s.push({answer:b,score:g})}}return u===1?s[0]:s}}class FillMaskPipeline extends Pipeline{async _call(n,{topk:a=5}={}){let[u,c]=await super._call(n),p=this.tokenizer,s=[];for(let h=0;h<u.input_ids.dims[0];++h){let f=u.input_ids[h],l=f.indexOf(this.tokenizer.mask_token_id);if(l===-1)throw Error(`Mask token (${p.mask_token}) not found in text.`);let t=c.logits[h][l],e=getTopItems(softmax(t.data),a);s.push(e.map(r=>{let i=[...f];return i[l]=r[0],{score:r[1],token:r[0],token_str:p.model.vocab[r[0]],sequence:p.decode(i,{skip_special_tokens:!0})}}))}return Array.isArray(n)?s:s[0]}}class Text2TextGenerationPipeline extends Pipeline{constructor(){super(...arguments);Se(this,"_key",null)}async _call(a,u={}){Array.isArray(a)||(a=[a]),this.model.config.prefix&&(a=a.map(l=>this.model.config.prefix+l));let c=this.model.config.task_specific_params;c&&c[this.task]&&c[this.task].prefix&&(a=a.map(l=>c[this.task].prefix+l));let p={padding:!0,truncation:!0},s;this instanceof TranslationPipeline&&"_build_translation_inputs"in this.tokenizer?s=this.tokenizer._build_translation_inputs(a,p,u).input_ids:s=this.tokenizer(a,p).input_ids;let h=await this.model.generate(s,u),f=this.tokenizer.batch_decode(h,{skip_special_tokens:!0});return this._key!==null&&(f=f.map(l=>this._key===null?l:{[this._key]:l})),f}}class SummarizationPipeline extends Text2TextGenerationPipeline{constructor(){super(...arguments);Se(this,"_key","summary_text")}}class TranslationPipeline extends Text2TextGenerationPipeline{constructor(){super(...arguments);Se(this,"_key","translation_text")}}class TextGenerationPipeline extends Pipeline{async _call(n,a={}){let u=typeof n=="string"||n instanceof String;u&&(n=[n]),this.tokenizer.padding_side="left";let c=this.tokenizer(n,{padding:!0,truncation:!0}),p=c.input_ids,s=c.attention_mask,h=await this.model.generate(p,a,null,{inputs_attention_mask:s});const f=this.tokenizer.batch_decode(h,{skip_special_tokens:!0}),l=Array.from({length:n.length},o=>[]);for(let o=0;o<f.length;++o){const t=Math.floor(o/h.length*n.length);l[t].push({generated_text:f[o]})}return u&&l.length===1?l[0]:l}}class ZeroShotClassificationPipeline extends Pipeline{constructor(n,a,u){super(n,a,u),this.label2id=Object.fromEntries(Object.entries(this.model.config.label2id).map(([c,p])=>[c.toLowerCase(),p])),this.entailment_id=this.label2id.entailment,this.entailment_id===void 
0&&(console.warn("Could not find 'entailment' in label2id mapping. Using 2 as entailment_id."),this.entailment_id=2),this.contradiction_id=this.label2id.contradiction,this.contradiction_id===void 0&&(console.warn("Could not find 'contradiction' in label2id mapping. Using 0 as contradiction_id."),this.contradiction_id=0)}async _call(n,a,{hypothesis_template:u="This example is {}.",multi_label:c=!1}={}){let p=Array.isArray(n);p||(n=[n]),Array.isArray(a)||(a=[a]);let s=a.map(l=>u.replace("{}",l)),h=c||a.length===1,f=[];for(let l of n){let o=[];for(let r of s){let i=this.tokenizer(l,{text_pair:r}),d=await this.model(i);h?o.push([d.logits.data[this.contradiction_id],d.logits.data[this.entailment_id]]):o.push(d.logits.data[this.entailment_id])}let t;h?t=o.map(r=>softmax(r)[1]):t=softmax(o);let e=t.map((r,i)=>[r,i]).sort((r,i)=>i[0]-r[0]);f.push({sequence:l,labels:e.map(r=>a[r[1]]),scores:e.map(r=>r[0])})}return p?f:f[0]}}class FeatureExtractionPipeline extends Pipeline{async _call(n,{pooling:a="none",normalize:u=!1}={}){let[c,p]=await super._call(n),s=p.last_hidden_state??p.logits;if(a!=="none")if(a==="mean")s=mean_pooling(s,c.attention_mask);else throw Error(`Pooling method '${a}' not supported.`);return u&&(s=s.normalize(2,-1)),s}}class AutomaticSpeechRecognitionPipeline extends Pipeline{constructor(n,a,u,c){super(n,a,u),this.processor=c}async _preprocess(n,a){return isString(n)&&(n=await read_audio(n,a)),n}async _call(n,a={}){let u=a.return_timestamps??!1,c=a.chunk_length_s??0,p=a.stride_length_s??null,s=a.chunk_callback??null,h=a.force_full_sequences??!1,f=pop(a,"language",null),l=pop(a,"task",null);if(f||l||u){if(a.forced_decoder_ids)throw new Error("Cannot specify `language`/`task`/`return_timestamps` and `forced_decoder_ids` at the same time.");let i=this.tokenizer.get_decoder_prompt_ids({language:f,task:l,no_timestamps:!u});i.length>0&&(a.forced_decoder_ids=i)}let o=!Array.isArray(n);o&&(n=[n]);const t=this.processor.feature_extractor.config.sampling_rate,e=this.processor.feature_extractor.config.chunk_length/this.model.config.max_source_positions;let r=[];for(let i of n){i=await this._preprocess(i,t);let d=[];if(c>0){if(p===null)p=c/6;else if(c<=p)throw Error("`chunk_length_s` must be larger than `stride_length_s`.");const b=t*c,_=t*p,v=b-2*_;let w=0;for(;w<i.length;){let S=i.subarray(w,w+b),A=await this.processor(S),O=w===0,x=w+v>=i.length;d.push({stride:[S.length,O?0:_,x?0:_],input_features:A.input_features,is_last:x}),w+=v}}else d=[{stride:[i.length,0,0],input_features:(await this.processor(i)).input_features,is_last:!0}];for(let b of d){let _=await this.model.generate(b.input_features,a);b.tokens=_[0],b.stride=b.stride.map(v=>v/t),s!==null&&s(b)}let[g,m]=this.tokenizer._decode_asr(d,{time_precision:e,return_timestamps:u,force_full_sequences:h});r.push({text:g,...m})}return o?r[0]:r}}class ImageToTextPipeline extends Pipeline{constructor(n,a,u,c){super(n,a,u),this.processor=c}async _call(n,a={}){let u=Array.isArray(n);n=await prepareImages(n);let{pixel_values:c}=await this.processor(n),p=[];for(let s of c){s.dims=[1,...s.dims];let h=await this.model.generate(s,a),f=this.tokenizer.batch_decode(h,{skip_special_tokens:!0}).map(l=>({generated_text:l.trim()}));p.push(f)}return u?p:p[0]}}class ImageClassificationPipeline extends Pipeline{constructor(n,a,u){super(n,null,a),this.processor=u}async _call(n,{topk:a=1}={}){let u=Array.isArray(n);n=await prepareImages(n);let{pixel_values:c}=await this.processor(n),p=await this.model({pixel_values:c}),s=this.model.config.id2label,h=[];for(let f of 
p.logits){let o=getTopItems(softmax(f.data),a).map(function(t){return{label:s[t[0]],score:t[1]}});a===1?h.push(...o):h.push(o)}return u||a===1?h:h[0]}}class ImageSegmentationPipeline extends Pipeline{constructor(n,a,u){super(n,null,a),this.processor=u,this.subtasks_mapping={panoptic:"post_process_panoptic_segmentation",instance:"post_process_instance_segmentation",semantic:"post_process_semantic_segmentation"}}async _call(n,{threshold:a=.5,mask_threshold:u=.5,overlap_mask_area_threshold:c=.8,label_ids_to_fuse:p=null,target_sizes:s=null,subtask:h=null}={}){if(Array.isArray(n)&&n.length!==1)throw Error("Image segmentation pipeline currently only supports a batch size of 1.");n=await prepareImages(n);let l=n.map(d=>[d.height,d.width]),{pixel_values:o,pixel_mask:t}=await this.processor(n),e=await this.model({pixel_values:o,pixel_mask:t}),r=null;if(h!==null)r=this.subtasks_mapping[h];else for(let[d,g]of Object.entries(this.subtasks_mapping))if(g in this.processor.feature_extractor){r=this.processor.feature_extractor[g].bind(this.processor.feature_extractor),h=d;break}let i=[];if(h==="panoptic"||h==="instance"){let d=r(e,a,u,c,p,s??l)[0],g=d.segmentation,m=this.model.config.id2label;for(let b of d.segments_info){let _=new Uint8ClampedArray(g.data.length);for(let w=0;w<g.data.length;++w)g.data[w]===b.id&&(_[w]=255);let v=new RawImage(_,g.dims[1],g.dims[0],1);i.push({score:b.score,label:m[b.label_id],mask:v})}}else throw Error(h==="semantic"?"semantic segmentation not yet supported.":`Subtask ${h} not supported.`);return i}}class ZeroShotImageClassificationPipeline extends Pipeline{constructor(n,a,u,c){super(n,a,u),this.processor=c}async _call(n,a,{hypothesis_template:u="This is a photo of {}"}={}){let c=Array.isArray(n);n=await prepareImages(n);let p=a.map(o=>u.replace("{}",o)),s=this.tokenizer(p,{padding:!0,truncation:!0}),{pixel_values:h}=await this.processor(n),f=await this.model({...s,pixel_values:h}),l=[];for(let o of f.logits_per_image){let t=softmax(o.data);l.push([...t].map((e,r)=>({score:e,label:a[r]})))}return c?l:l[0]}}class ObjectDetectionPipeline extends Pipeline{constructor(n,a,u){super(n,null,a),this.processor=u}async _call(n,{threshold:a=.9,percentage:u=!1}={}){let c=Array.isArray(n);if(c&&n.length!==1)throw Error("Object detection pipeline currently only supports a batch size of 1.");n=await prepareImages(n);let p=u?null:n.map(t=>[t.height,t.width]),{pixel_values:s,pixel_mask:h}=await this.processor(n),f=await this.model({pixel_values:s,pixel_mask:h}),l=this.processor.feature_extractor.post_process_object_detection(f,a,p),o=this.model.config.id2label;return l.forEach(t=>t.labels=t.classes.map(e=>o[e])),c?l:l[0]}}const 
SUPPORTED_TASKS={"text-classification":{tokenizer:AutoTokenizer,pipeline:TextClassificationPipeline,model:AutoModelForSequenceClassification,default:{model:"Xenova/distilbert-base-uncased-finetuned-sst-2-english"},type:"text"},"token-classification":{tokenizer:AutoTokenizer,pipeline:TokenClassificationPipeline,model:AutoModelForTokenClassification,default:{model:"Xenova/bert-base-multilingual-cased-ner-hrl"},type:"text"},"question-answering":{tokenizer:AutoTokenizer,pipeline:QuestionAnsweringPipeline,model:AutoModelForQuestionAnswering,default:{model:"Xenova/distilbert-base-cased-distilled-squad"},type:"text"},"fill-mask":{tokenizer:AutoTokenizer,pipeline:FillMaskPipeline,model:AutoModelForMaskedLM,default:{model:"Xenova/bert-base-uncased"},type:"text"},summarization:{tokenizer:AutoTokenizer,pipeline:SummarizationPipeline,model:AutoModelForSeq2SeqLM,default:{model:"Xenova/distilbart-cnn-6-6"},type:"text"},translation:{tokenizer:AutoTokenizer,pipeline:TranslationPipeline,model:AutoModelForSeq2SeqLM,default:{model:"Xenova/t5-small"},type:"text"},"text2text-generation":{tokenizer:AutoTokenizer,pipeline:Text2TextGenerationPipeline,model:AutoModelForSeq2SeqLM,default:{model:"Xenova/flan-t5-small"},type:"text"},"text-generation":{tokenizer:AutoTokenizer,pipeline:TextGenerationPipeline,model:AutoModelForCausalLM,default:{model:"Xenova/gpt2"},type:"text"},"zero-shot-classification":{tokenizer:AutoTokenizer,pipeline:ZeroShotClassificationPipeline,model:AutoModelForSequenceClassification,default:{model:"Xenova/distilbert-base-uncased-mnli"},type:"text"},"automatic-speech-recognition":{tokenizer:AutoTokenizer,pipeline:AutomaticSpeechRecognitionPipeline,model:AutoModelForSeq2SeqLM,processor:AutoProcessor,default:{model:"Xenova/whisper-tiny.en"},type:"multimodal"},"image-to-text":{tokenizer:AutoTokenizer,pipeline:ImageToTextPipeline,model:AutoModelForVision2Seq,processor:AutoProcessor,default:{model:"Xenova/vit-gpt2-image-captioning"},type:"multimodal"},"image-classification":{pipeline:ImageClassificationPipeline,model:AutoModelForImageClassification,processor:AutoProcessor,default:{model:"Xenova/vit-base-patch16-224"},type:"multimodal"},"image-segmentation":{pipeline:ImageSegmentationPipeline,model:AutoModelForImageSegmentation,processor:AutoProcessor,default:{model:"Xenova/detr-resnet-50-panoptic"},type:"multimodal"},"zero-shot-image-classification":{tokenizer:AutoTokenizer,pipeline:ZeroShotImageClassificationPipeline,model:AutoModel,processor:AutoProcessor,default:{model:"Xenova/clip-vit-base-patch32"},type:"multimodal"},"object-detection":{pipeline:ObjectDetectionPipeline,model:AutoModelForObjectDetection,processor:AutoProcessor,default:{model:"Xenova/detr-resnet-50"},type:"multimodal"},"feature-extraction":{tokenizer:AutoTokenizer,pipeline:FeatureExtractionPipeline,model:AutoModel,default:{model:"Xenova/all-MiniLM-L6-v2"},type:"text"}},TASK_ALIASES={"sentiment-analysis":"text-classification",ner:"token-classification",vqa:"visual-question-answering",asr:"automatic-speech-recognition",embeddings:"feature-extraction"};async function pipeline(y,n=null,{quantized:a=!0,progress_callback:u=null,config:c=null,cache_dir:p=null,local_files_only:s=!1,revision:h="main"}={}){y=TASK_ALIASES[y]??y;let f=SUPPORTED_TASKS[y.split("_",1)[0]];if(!f)throw Error(`Unsupported pipeline: ${y}. Must be one of [${Object.keys(SUPPORTED_TASKS)}]`);n||(n=f.default.model,console.log(`No model specified. 
Using default model: "${n}".`));let l=f.tokenizer,o=f.model,t=f.pipeline,e=f.processor,r=[],i={quantized:a,progress_callback:u,config:c,cache_dir:p,local_files_only:s,revision:h};l&&r.push(l.from_pretrained(n,i)),o&&r.push(o.from_pretrained(n,i)),e&&r.push(e.from_pretrained(n,i));let d=await Promise.all(r);return dispatchCallback(u,{status:"ready",task:y,model:n}),new t(y,...d)}function product(...y){return y.reduce((n,a)=>n.flatMap(u=>a.map(c=>[u,c])))}env.allowLocalModels=!1;class Singleton{constructor(n,a,u){this.tokenizer=n,this.model=a,this.quantized=u}static async getInstance(n=null){return this.instance===null&&(this.instance=pipeline(this.task,this.model,{quantized:this.quantized,progress_callback:n})),this.instance}}Se(Singleton,"task",null),Se(Singleton,"model",null),Se(Singleton,"quantized",null),Se(Singleton,"instance",null),self.addEventListener("message",async y=>{const n=y.data;if(n.action==="load"){await ImageClassificationPipelineSingleton.getInstance(),self.postMessage({status:"ready"});return}const a=new Uint8ClampedArray(n.image.data.length/4);for(let p=0;p<a.length;++p)a[p]=n.image.data[p*4+3];const u=new RawImage(a,n.image.width,n.image.height,1);let c=await classify(u);c!==null&&self.postMessage({status:"result",task:"image-classification",data:c})});class ImageClassificationPipelineSingleton extends Singleton{}Se(ImageClassificationPipelineSingleton,"task","image-classification"),Se(ImageClassificationPipelineSingleton,"model",`Xenova/${constants.DEFAULT_MODEL}`),Se(ImageClassificationPipelineSingleton,"quantized",constants.DEFAULT_QUANTIZED);const classify=async y=>await(await ImageClassificationPipelineSingleton.getInstance())(y,{topk:0}).catch(u=>(self.postMessage({status:"error",task:"image-classification",data:u}),null))})(); diff --git a/spaces/XzJosh/Carol-Bert-VITS2/text/symbols.py b/spaces/XzJosh/Carol-Bert-VITS2/text/symbols.py deleted file mode 100644 index 9dfae4e633829f20c4fd767b1c7a9198911ed801..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Carol-Bert-VITS2/text/symbols.py +++ /dev/null @@ -1,51 +0,0 @@ -punctuation = ['!', '?', '…', ",", ".", "'", '-'] -pu_symbols = punctuation + ["SP", "UNK"] -pad = '_' - -# chinese -zh_symbols = ['E', 'En', 'a', 'ai', 'an', 'ang', 'ao', 'b', 'c', 'ch', 'd', 'e', 'ei', 'en', 'eng', 'er', 'f', 'g', 'h', - 'i', 'i0', 'ia', 'ian', 'iang', 'iao', 'ie', 'in', 'ing', 'iong', 'ir', 'iu', 'j', 'k', 'l', 'm', 'n', 'o', - 'ong', - 'ou', 'p', 'q', 'r', 's', 'sh', 't', 'u', 'ua', 'uai', 'uan', 'uang', 'ui', 'un', 'uo', 'v', 'van', 've', 'vn', - 'w', 'x', 'y', 'z', 'zh', - "AA", "EE", "OO"] -num_zh_tones = 6 - -# japanese -ja_symbols = ['I', 'N', 'U', 'a', 'b', 'by', 'ch', 'cl', 'd', 'dy', 'e', 'f', 'g', 'gy', 'h', 'hy', 'i', 'j', 'k', 'ky', - 'm', 'my', 'n', 'ny', 'o', 'p', 'py', 'r', 'ry', 's', 'sh', 't', 'ts', 'u', 'V', 'w', 'y', 'z'] -num_ja_tones = 1 - -# English -en_symbols = ['aa', 'ae', 'ah', 'ao', 'aw', 'ay', 'b', 'ch', 'd', 'dh', 'eh', 'er', 'ey', 'f', 'g', 'hh', 'ih', 'iy', - 'jh', 'k', 'l', 'm', 'n', 'ng', 'ow', 'oy', 'p', 'r', 's', - 'sh', 't', 'th', 'uh', 'uw', 'V', 'w', 'y', 'z', 'zh'] -num_en_tones = 4 - -# combine all symbols -normal_symbols = sorted(set(zh_symbols + ja_symbols + en_symbols)) -symbols = [pad] + normal_symbols + pu_symbols -sil_phonemes_ids = [symbols.index(i) for i in pu_symbols] - -# combine all tones -num_tones = num_zh_tones + num_ja_tones + num_en_tones - -# language maps -language_id_map = { - 'ZH': 0, - "JA": 1, - "EN": 2 -} -num_languages = len(language_id_map.keys()) - 
-language_tone_start_map = { - 'ZH': 0, - "JA": num_zh_tones, - "EN": num_zh_tones + num_ja_tones -} - -if __name__ == '__main__': - a = set(zh_symbols) - b = set(en_symbols) - print(sorted(a&b)) - diff --git a/spaces/XzJosh/Echo-Bert-VITS2/app.py b/spaces/XzJosh/Echo-Bert-VITS2/app.py deleted file mode 100644 index 2c9b40f3e4a5a2341eee5e6cf3c2cec349652389..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Echo-Bert-VITS2/app.py +++ /dev/null @@ -1,160 +0,0 @@ -import sys, os - -if sys.platform == "darwin": - os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1" - -import logging - -logging.getLogger("numba").setLevel(logging.WARNING) -logging.getLogger("markdown_it").setLevel(logging.WARNING) -logging.getLogger("urllib3").setLevel(logging.WARNING) -logging.getLogger("matplotlib").setLevel(logging.WARNING) - -logging.basicConfig(level=logging.INFO, format="| %(name)s | %(levelname)s | %(message)s") - -logger = logging.getLogger(__name__) - -import torch -import argparse -import commons -import utils -from models import SynthesizerTrn -from text.symbols import symbols -from text import cleaned_text_to_sequence, get_bert -from text.cleaner import clean_text -import gradio as gr -import webbrowser - - -net_g = None - - -def get_text(text, language_str, hps): - norm_text, phone, tone, word2ph = clean_text(text, language_str) - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - - if hps.data.add_blank: - phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - bert = get_bert(norm_text, word2ph, language_str) - del word2ph - - assert bert.shape[-1] == len(phone) - - phone = torch.LongTensor(phone) - tone = torch.LongTensor(tone) - language = torch.LongTensor(language) - - return bert, phone, tone, language - -def infer(text, sdp_ratio, noise_scale, noise_scale_w, length_scale, sid): - global net_g - bert, phones, tones, lang_ids = get_text(text, "ZH", hps) - with torch.no_grad(): - x_tst=phones.to(device).unsqueeze(0) - tones=tones.to(device).unsqueeze(0) - lang_ids=lang_ids.to(device).unsqueeze(0) - bert = bert.to(device).unsqueeze(0) - x_tst_lengths = torch.LongTensor([phones.size(0)]).to(device) - del phones - speakers = torch.LongTensor([hps.data.spk2id[sid]]).to(device) - audio = net_g.infer(x_tst, x_tst_lengths, speakers, tones, lang_ids, bert, sdp_ratio=sdp_ratio - , noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale)[0][0,0].data.cpu().float().numpy() - del x_tst, tones, lang_ids, bert, x_tst_lengths, speakers - return audio - -def tts_fn(text, speaker, sdp_ratio, noise_scale, noise_scale_w, length_scale): - with torch.no_grad(): - audio = infer(text, sdp_ratio=sdp_ratio, noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale, sid=speaker) - return "Success", (hps.data.sampling_rate, audio) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--model_dir", default="./logs/Echo/G_3100.pth", help="path of your model") - parser.add_argument("--config_dir", default="./configs/config.json", help="path of your config file") - parser.add_argument("--share", default=False, help="make link public") - parser.add_argument("-d", "--debug", action="store_true", help="enable DEBUG-LEVEL log") - - args = parser.parse_args() - if args.debug: - logger.info("Enable DEBUG-LEVEL log") - logging.basicConfig(level=logging.DEBUG) - hps = 
utils.get_hparams_from_file(args.config_dir) - device = "cuda:0" if torch.cuda.is_available() else "cpu" - ''' - device = ( - "cuda:0" - if torch.cuda.is_available() - else ( - "mps" - if sys.platform == "darwin" and torch.backends.mps.is_available() - else "cpu" - ) - ) - ''' - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model).to(device) - _ = net_g.eval() - - _ = utils.load_checkpoint(args.model_dir, net_g, None, skip_optimizer=True) - - speaker_ids = hps.data.spk2id - speakers = list(speaker_ids.keys()) - with gr.Blocks() as app: - with gr.Row(): - with gr.Column(): - gr.Markdown(value=""" - 【AI黑桃影①】在线语音合成(Bert-Vits2)\n - 作者:Xz乔希 https://space.bilibili.com/5859321\n - 声音归属:黑桃影 https://space.bilibili.com/456368455\n - 【AI黑桃影②】https://huggingface.co/spaces/XzJosh/Spade-Bert-VITS2\n - Bert-VITS2项目:https://github.com/Stardust-minus/Bert-VITS2\n - 使用本模型请严格遵守法律法规!\n - 发布二创作品请标注本项目作者及链接、作品使用Bert-VITS2 AI生成!\n - """) - text = gr.TextArea(label="Text", placeholder="Input Text Here", - value="星空下的白色幻影,怪盗斯倍的埃叩参上!") - speaker = gr.Dropdown(choices=speakers, value=speakers[0], label='Speaker') - sdp_ratio = gr.Slider(minimum=0.1, maximum=1, value=0.2, step=0.01, label='SDP/DP混合比') - noise_scale = gr.Slider(minimum=0.1, maximum=1, value=0.5, step=0.01, label='感情调节') - noise_scale_w = gr.Slider(minimum=0.1, maximum=1, value=0.9, step=0.01, label='音素长度') - length_scale = gr.Slider(minimum=0.1, maximum=2, value=1, step=0.01, label='生成长度') - btn = gr.Button("点击生成", variant="primary") - with gr.Column(): - text_output = gr.Textbox(label="Message") - audio_output = gr.Audio(label="Output Audio") - gr.Markdown(value=""" - 【AI塔菲】https://huggingface.co/spaces/XzJosh/Taffy-Bert-VITS2\n - 【AI东雪莲】https://huggingface.co/spaces/XzJosh/Azuma-Bert-VITS2\n - 【AI奶绿】https://huggingface.co/spaces/XzJosh/LAPLACE-Bert-VITS2\n - 【AI七海】https://huggingface.co/spaces/XzJosh/Nana7mi-Bert-VITS2\n - 【AI阿梓】https://huggingface.co/spaces/XzJosh/Azusa-Bert-VITS2\n - 【AI嘉然】https://huggingface.co/spaces/XzJosh/Diana-Bert-VITS2\n - 【AI向晚】https://huggingface.co/spaces/XzJosh/Ava-Bert-VITS2\n - 【AI乃琳】https://huggingface.co/spaces/XzJosh/Eileen-Bert-VITS2\n - 【AI贝拉】https://huggingface.co/spaces/XzJosh/Bella-Bert-VITS2\n - 【AI珈乐】https://huggingface.co/spaces/XzJosh/Carol-Bert-VITS2\n - 【AI电棍】https://huggingface.co/spaces/XzJosh/otto-Bert-VITS2\n - 【AI星瞳】https://huggingface.co/spaces/XzJosh/XingTong-Bert-VITS2\n - 【AI尼奈】https://huggingface.co/spaces/XzJosh/nine1-Bert-VITS2\n - 【AI扇宝】https://huggingface.co/spaces/XzJosh/ShanBao-Bert-VITS2\n - 【AI剑魔】https://huggingface.co/spaces/XzJosh/Aatrox-Bert-VITS2\n - 【AI恬豆】https://huggingface.co/spaces/XzJosh/Bekki-Bert-VITS2\n - """) - btn.click(tts_fn, - inputs=[text, speaker, sdp_ratio, noise_scale, noise_scale_w, length_scale], - outputs=[text_output, audio_output]) - -# webbrowser.open("http://127.0.0.1:6006") -# app.launch(server_port=6006, show_error=True) - - app.launch(show_error=True) diff --git a/spaces/XzJosh/ShanBao-Bert-VITS2/text/chinese_bert.py b/spaces/XzJosh/ShanBao-Bert-VITS2/text/chinese_bert.py deleted file mode 100644 index cb84ce0b426cd0a1c7954ddcdf41322c10ed14fa..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/ShanBao-Bert-VITS2/text/chinese_bert.py +++ /dev/null @@ -1,50 +0,0 @@ -import torch -from transformers import AutoTokenizer, AutoModelForMaskedLM - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -tokenizer = 
AutoTokenizer.from_pretrained("./bert/chinese-roberta-wwm-ext-large") -model = AutoModelForMaskedLM.from_pretrained("./bert/chinese-roberta-wwm-ext-large").to(device) - -def get_bert_feature(text, word2ph): - with torch.no_grad(): - inputs = tokenizer(text, return_tensors='pt') - for i in inputs: - inputs[i] = inputs[i].to(device) - res = model(**inputs, output_hidden_states=True) - res = torch.cat(res['hidden_states'][-3:-2], -1)[0].cpu() - - assert len(word2ph) == len(text)+2 - word2phone = word2ph - phone_level_feature = [] - for i in range(len(word2phone)): - repeat_feature = res[i].repeat(word2phone[i], 1) - phone_level_feature.append(repeat_feature) - - phone_level_feature = torch.cat(phone_level_feature, dim=0) - - - return phone_level_feature.T - -if __name__ == '__main__': - # feature = get_bert_feature('你好,我是说的道理。') - import torch - - word_level_feature = torch.rand(38, 1024) # 12个词,每个词1024维特征 - word2phone = [1, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 2, 2, 2, 1, 1, 2, 2, 1, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 1] - - # 计算总帧数 - total_frames = sum(word2phone) - print(word_level_feature.shape) - print(word2phone) - phone_level_feature = [] - for i in range(len(word2phone)): - print(word_level_feature[i].shape) - - # 对每个词重复word2phone[i]次 - repeat_feature = word_level_feature[i].repeat(word2phone[i], 1) - phone_level_feature.append(repeat_feature) - - phone_level_feature = torch.cat(phone_level_feature, dim=0) - print(phone_level_feature.shape) # torch.Size([36, 1024]) - diff --git a/spaces/XzJosh/Wenjing-Bert-VITS2/text/english.py b/spaces/XzJosh/Wenjing-Bert-VITS2/text/english.py deleted file mode 100644 index 781d0a56cef71f66fc67db51d76538be90d3ddd2..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Wenjing-Bert-VITS2/text/english.py +++ /dev/null @@ -1,138 +0,0 @@ -import pickle -import os -import re -from g2p_en import G2p -from string import punctuation - -from text import symbols - -current_file_path = os.path.dirname(__file__) -CMU_DICT_PATH = os.path.join(current_file_path, 'cmudict.rep') -CACHE_PATH = os.path.join(current_file_path, 'cmudict_cache.pickle') -_g2p = G2p() - -arpa = {'AH0', 'S', 'AH1', 'EY2', 'AE2', 'EH0', 'OW2', 'UH0', 'NG', 'B', 'G', 'AY0', 'M', 'AA0', 'F', 'AO0', 'ER2', 'UH1', 'IY1', 'AH2', 'DH', 'IY0', 'EY1', 'IH0', 'K', 'N', 'W', 'IY2', 'T', 'AA1', 'ER1', 'EH2', 'OY0', 'UH2', 'UW1', 'Z', 'AW2', 'AW1', 'V', 'UW2', 'AA2', 'ER', 'AW0', 'UW0', 'R', 'OW1', 'EH1', 'ZH', 'AE0', 'IH2', 'IH', 'Y', 'JH', 'P', 'AY1', 'EY0', 'OY2', 'TH', 'HH', 'D', 'ER0', 'CH', 'AO1', 'AE1', 'AO2', 'OY1', 'AY2', 'IH1', 'OW0', 'L', 'SH'} - - -def post_replace_ph(ph): - rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - 'v': "V" - } - if ph in rep_map.keys(): - ph = rep_map[ph] - if ph in symbols: - return ph - if ph not in symbols: - ph = 'UNK' - return ph - -def read_dict(): - g2p_dict = {} - start_line = 49 - with open(CMU_DICT_PATH) as f: - line = f.readline() - line_index = 1 - while line: - if line_index >= start_line: - line = line.strip() - word_split = line.split(' ') - word = word_split[0] - - syllable_split = word_split[1].split(' - ') - g2p_dict[word] = [] - for syllable in syllable_split: - phone_split = syllable.split(' ') - g2p_dict[word].append(phone_split) - - line_index = line_index + 1 - line = f.readline() - - return g2p_dict - - -def cache_dict(g2p_dict, file_path): - with open(file_path, 'wb') as pickle_file: - pickle.dump(g2p_dict, pickle_file) - - -def 
get_dict(): - if os.path.exists(CACHE_PATH): - with open(CACHE_PATH, 'rb') as pickle_file: - g2p_dict = pickle.load(pickle_file) - else: - g2p_dict = read_dict() - cache_dict(g2p_dict, CACHE_PATH) - - return g2p_dict - -eng_dict = get_dict() - -def refine_ph(phn): - tone = 0 - if re.search(r'\d$', phn): - tone = int(phn[-1]) + 1 - phn = phn[:-1] - return phn.lower(), tone - -def refine_syllables(syllables): - tones = [] - phonemes = [] - for phn_list in syllables: - for i in range(len(phn_list)): - phn = phn_list[i] - phn, tone = refine_ph(phn) - phonemes.append(phn) - tones.append(tone) - return phonemes, tones - - -def text_normalize(text): - # todo: eng text normalize - return text - -def g2p(text): - - phones = [] - tones = [] - words = re.split(r"([,;.\-\?\!\s+])", text) - for w in words: - if w.upper() in eng_dict: - phns, tns = refine_syllables(eng_dict[w.upper()]) - phones += phns - tones += tns - else: - phone_list = list(filter(lambda p: p != " ", _g2p(w))) - for ph in phone_list: - if ph in arpa: - ph, tn = refine_ph(ph) - phones.append(ph) - tones.append(tn) - else: - phones.append(ph) - tones.append(0) - # todo: implement word2ph - word2ph = [1 for i in phones] - - phones = [post_replace_ph(i) for i in phones] - return phones, tones, word2ph - -if __name__ == "__main__": - # print(get_dict()) - # print(eng_word_to_phoneme("hello")) - print(g2p("In this paper, we propose 1 DSPGAN, a GAN-based universal vocoder.")) - # all_phones = set() - # for k, syllables in eng_dict.items(): - # for group in syllables: - # for ph in group: - # all_phones.add(ph) - # print(all_phones) \ No newline at end of file diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/.github/workflows/levenshtein.js b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/.github/workflows/levenshtein.js deleted file mode 100644 index 67a5e3613c0072d124035ee8933a23de2105cfe3..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/.github/workflows/levenshtein.js +++ /dev/null @@ -1,44 +0,0 @@ -/* -Copyright (c) 2011 Andrei Mackenzie - -Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
-*/ - -// Compute the edit distance between the two given strings -exports.getEditDistance = function(a, b){ - if(a.length == 0) return b.length; - if(b.length == 0) return a.length; - - var matrix = []; - - // increment along the first column of each row - var i; - for(i = 0; i <= b.length; i++){ - matrix[i] = [i]; - } - - // increment each column in the first row - var j; - for(j = 0; j <= a.length; j++){ - matrix[0][j] = j; - } - - // Fill in the rest of the matrix - for(i = 1; i <= b.length; i++){ - for(j = 1; j <= a.length; j++){ - if(b.charAt(i-1) == a.charAt(j-1)){ - matrix[i][j] = matrix[i-1][j-1]; - } else { - matrix[i][j] = Math.min(matrix[i-1][j-1] + 1, // substitution - Math.min(matrix[i][j-1] + 1, // insertion - matrix[i-1][j] + 1)); // deletion - } - } - } - - return matrix[b.length][a.length]; -}; diff --git a/spaces/Yudha515/Rvc-Models/audiocraft/utils/notebook.py b/spaces/Yudha515/Rvc-Models/audiocraft/utils/notebook.py deleted file mode 100644 index 019b9d19e5bef976bedddf428fd25da42a8a9726..0000000000000000000000000000000000000000 --- a/spaces/Yudha515/Rvc-Models/audiocraft/utils/notebook.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -try: - import IPython.display as ipd # type: ignore -except ImportError: - # Note in a notebook... - pass - - -import torch - - -def display_audio(samples: torch.Tensor, sample_rate: int): - """Renders an audio player for the given audio samples. - - Args: - samples (torch.Tensor): a Tensor of decoded audio samples - with shapes [B, C, T] or [C, T] - sample_rate (int): sample rate audio should be displayed with. - """ - assert samples.dim() == 2 or samples.dim() == 3 - - samples = samples.detach().cpu() - if samples.dim() == 2: - samples = samples[None, ...] - - for audio in samples: - ipd.display(ipd.Audio(audio, rate=sample_rate)) diff --git a/spaces/Yuliang/ICON/lib/net/BasePIFuNet.py b/spaces/Yuliang/ICON/lib/net/BasePIFuNet.py deleted file mode 100644 index d1e5986298a38fc993b697169ec95b1007b58755..0000000000000000000000000000000000000000 --- a/spaces/Yuliang/ICON/lib/net/BasePIFuNet.py +++ /dev/null @@ -1,84 +0,0 @@ - -# -*- coding: utf-8 -*- - -# Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. (MPG) is -# holder of all proprietary rights on this computer program. -# You can only use this computer program if you have closed -# a license agreement with MPG or you get the right to use the computer -# program from someone who is authorized to grant you that right. -# Any use of the computer program without a valid license is prohibited and -# liable to prosecution. -# -# Copyright©2019 Max-Planck-Gesellschaft zur Förderung -# der Wissenschaften e.V. (MPG). acting on behalf of its Max Planck Institute -# for Intelligent Systems. All rights reserved. -# -# Contact: ps-license@tuebingen.mpg.de - -import torch.nn as nn -import pytorch_lightning as pl - -from .geometry import index, orthogonal, perspective - - -class BasePIFuNet(pl.LightningModule): - def __init__( - self, - projection_mode='orthogonal', - error_term=nn.MSELoss(), - ): - """ - :param projection_mode: - Either orthogonal or perspective. - It will call the corresponding function for projection. 
- :param error_term: - nn Loss between the predicted [B, Res, N] and the label [B, Res, N] - """ - super(BasePIFuNet, self).__init__() - self.name = 'base' - - self.error_term = error_term - - self.index = index - self.projection = orthogonal if projection_mode == 'orthogonal' else perspective - - def forward(self, points, images, calibs, transforms=None): - ''' - :param points: [B, 3, N] world space coordinates of points - :param images: [B, C, H, W] input images - :param calibs: [B, 3, 4] calibration matrices for each image - :param transforms: Optional [B, 2, 3] image space coordinate transforms - :return: [B, Res, N] predictions for each point - ''' - features = self.filter(images) - preds = self.query(features, points, calibs, transforms) - return preds - - def filter(self, images): - ''' - Filter the input images - store all intermediate features. - :param images: [B, C, H, W] input images - ''' - return None - - def query(self, features, points, calibs, transforms=None): - ''' - Given 3D points, query the network predictions for each point. - Image features should be pre-computed before this call. - store all intermediate features. - query() function may behave differently during training/testing. - :param points: [B, 3, N] world space coordinates of points - :param calibs: [B, 3, 4] calibration matrices for each image - :param transforms: Optional [B, 2, 3] image space coordinate transforms - :param labels: Optional [B, Res, N] gt labeling - :return: [B, Res, N] predictions for each point - ''' - return None - - def get_error(self, preds, labels): - ''' - Get the network loss from the last query - :return: loss term - ''' - return self.error_term(preds, labels) diff --git a/spaces/Yusin/Speech-ChatGPT-Speech/app.py b/spaces/Yusin/Speech-ChatGPT-Speech/app.py deleted file mode 100644 index 5cbfe278ded49478a296a84d7c3e2dea51bfbaa4..0000000000000000000000000000000000000000 --- a/spaces/Yusin/Speech-ChatGPT-Speech/app.py +++ /dev/null @@ -1,132 +0,0 @@ -import tempfile -import gradio as gr -from neon_tts_plugin_coqui import CoquiTTS -LANGUAGES = list(CoquiTTS.langs.keys()) -LANGUAGES = LANGUAGES + ['cn', 'jp'] -default_lang = "en" -#import whisper -#whisper_model = whisper.load_model("small") -#whisper = gr.Interface.load(name="spaces/abidlabs/whisper-large-v2") -whisper = gr.Interface.load(name="spaces/sanchit-gandhi/whisper-large-v2") -#cn_a_jp = gr.Blocks.load(name="spaces/Yusin/anime-tts_yusin") -#chatgpt = gr.Blocks.load(name="spaces/fffiloni/whisper-to-chatGPT") -#chatgpt = gr.Blocks.load(name="spaces/seawolf2357/chatgptclone") -import os -import json -import openai -#session_token = os.environ.get('SessionToken') -api_key = os.environ.get('api_key') -#if you have OpenAI API key as a string, enable the below -openai.api_key = api_key - -title = "Speech to ChatGPT to Speech" -#info = "more info at [Neon Coqui TTS Plugin](https://github.com/NeonGeckoCom/neon-tts-plugin-coqui), [Coqui TTS](https://github.com/coqui-ai/TTS)" -#badge = "https://visitor-badge-reloaded.herokuapp.com/badge?page_id=neongeckocom.neon-tts-plugin-coqui" -coquiTTS = CoquiTTS() - - -# ChatGPT -def chat_hf(audio, custom_token, language): - try: - whisper_text = translate(audio) - if whisper_text == "ERROR: You have to either use the microphone or upload an audio file": - gpt_response = "MISSING AUDIO: Record your voice by clicking the microphone button, do not forget to stop recording before sending your message ;)" - else: - #gpt_response = chatgpt(whisper_text, [], fn_index=0) - #print(gpt_response) - 
#gpt_response = gpt_response[0] - gpt_response = openai_create(whisper_text) - - except: - whisper_text = translate(audio) - gpt_response = """Sorry, I'm quite busy right now, but please try again later :)""" - - # to voice - with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as fp: - coquiTTS.get_tts(gpt_response, fp, speaker = {"language" : language}) - - return whisper_text, gpt_response, fp.name - -# whisper -#def translate(audio): -# print(""" -# — -# Sending audio to Whisper ... -# — -# """) -# -# audio = whisper.load_audio(audio) -# audio = whisper.pad_or_trim(audio) -# -# mel = whisper.log_mel_spectrogram(audio).to(whisper_model.device) -# -# _, probs = whisper_model.detect_language(mel) -# -# transcript_options = whisper.DecodingOptions(task="transcribe", fp16 = False) -# -# transcription = whisper.decode(whisper_model, mel, transcript_options) -# -# print("language spoken: " + transcription.language) -# print("transcript: " + transcription.text) -# print("———————————————————————————————————————————") -# -# return transcription.text - -def translate(audio): - print(""" - — - Sending audio to Whisper ... - — - """) - #_, text_result = whisper(audio, "", fn_index=0) - text_result = whisper(audio, None, "transcribe", fn_index=0) - print(text_result) - return text_result - - -def openai_create(prompt): - - response = openai.Completion.create( - model="text-chat-davinci-002-20221122", - prompt=prompt, - temperature=0.9, - max_tokens=150, - top_p=1, - frequency_penalty=0, - presence_penalty=0.6, - stop=[" Human:", " AI:"] - ) - print(response.choices[0].text) - return response.choices[0].text - -with gr.Blocks() as blocks: - gr.Markdown("<h1 style='text-align: center; margin-bottom: 1rem'>" - + title - + "</h1>") - #gr.Markdown(description) - radio = gr.Radio(label="Language", choices=LANGUAGES, value=default_lang) - with gr.Row(equal_height=True):# equal_height=False - with gr.Column():# variant="panel" - audio_file = gr.Audio(source="microphone", type="filepath") - custom_token = gr.Textbox(label='If it fails, use your own session token', placeholder="your own session token") - with gr.Row():# mobile_collapse=False - submit = gr.Button("Submit", variant="primary") - with gr.Column(): - text1 = gr.Textbox(label="Speech to Text") - text2 = gr.Textbox(label="ChatGPT Response") - audio = gr.Audio(label="Output", interactive=False) - #gr.Markdown(info) - #gr.Markdown("<center>" - # +f'<img src={badge} alt="visitors badge"/>' - # +"</center>") - - # actions - submit.click( - chat_hf, - [audio_file, custom_token, radio], - [text1, text2, audio], - ) - #radio.change(lambda lang: CoquiTTS.langs[lang]["sentence"], radio, text2) - - -blocks.launch(debug=True) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/roi_heads/mask_heads/mask_point_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/roi_heads/mask_heads/mask_point_head.py deleted file mode 100644 index fb92903a9488a44b984a489a354d838cc88f8ad4..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/roi_heads/mask_heads/mask_point_head.py +++ /dev/null @@ -1,300 +0,0 @@ -# Modified from https://github.com/facebookresearch/detectron2/tree/master/projects/PointRend/point_head/point_head.py # noqa - -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, normal_init -from mmcv.ops import point_sample, rel_roi_point_to_rel_img_point - -from mmdet.models.builder import HEADS, build_loss - - 
-@HEADS.register_module() -class MaskPointHead(nn.Module): - """A mask point head use in PointRend. - - ``MaskPointHead`` use shared multi-layer perceptron (equivalent to - nn.Conv1d) to predict the logit of input points. The fine-grained feature - and coarse feature will be concatenate together for predication. - - Args: - num_fcs (int): Number of fc layers in the head. Default: 3. - in_channels (int): Number of input channels. Default: 256. - fc_channels (int): Number of fc channels. Default: 256. - num_classes (int): Number of classes for logits. Default: 80. - class_agnostic (bool): Whether use class agnostic classification. - If so, the output channels of logits will be 1. Default: False. - coarse_pred_each_layer (bool): Whether concatenate coarse feature with - the output of each fc layer. Default: True. - conv_cfg (dict | None): Dictionary to construct and config conv layer. - Default: dict(type='Conv1d')) - norm_cfg (dict | None): Dictionary to construct and config norm layer. - Default: None. - loss_point (dict): Dictionary to construct and config loss layer of - point head. Default: dict(type='CrossEntropyLoss', use_mask=True, - loss_weight=1.0). - """ - - def __init__(self, - num_classes, - num_fcs=3, - in_channels=256, - fc_channels=256, - class_agnostic=False, - coarse_pred_each_layer=True, - conv_cfg=dict(type='Conv1d'), - norm_cfg=None, - act_cfg=dict(type='ReLU'), - loss_point=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0)): - super().__init__() - self.num_fcs = num_fcs - self.in_channels = in_channels - self.fc_channels = fc_channels - self.num_classes = num_classes - self.class_agnostic = class_agnostic - self.coarse_pred_each_layer = coarse_pred_each_layer - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.loss_point = build_loss(loss_point) - - fc_in_channels = in_channels + num_classes - self.fcs = nn.ModuleList() - for _ in range(num_fcs): - fc = ConvModule( - fc_in_channels, - fc_channels, - kernel_size=1, - stride=1, - padding=0, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg) - self.fcs.append(fc) - fc_in_channels = fc_channels - fc_in_channels += num_classes if self.coarse_pred_each_layer else 0 - - out_channels = 1 if self.class_agnostic else self.num_classes - self.fc_logits = nn.Conv1d( - fc_in_channels, out_channels, kernel_size=1, stride=1, padding=0) - - def init_weights(self): - """Initialize last classification layer of MaskPointHead, conv layers - are already initialized by ConvModule.""" - normal_init(self.fc_logits, std=0.001) - - def forward(self, fine_grained_feats, coarse_feats): - """Classify each point base on fine grained and coarse feats. - - Args: - fine_grained_feats (Tensor): Fine grained feature sampled from FPN, - shape (num_rois, in_channels, num_points). - coarse_feats (Tensor): Coarse feature sampled from CoarseMaskHead, - shape (num_rois, num_classes, num_points). - - Returns: - Tensor: Point classification results, - shape (num_rois, num_class, num_points). - """ - - x = torch.cat([fine_grained_feats, coarse_feats], dim=1) - for fc in self.fcs: - x = fc(x) - if self.coarse_pred_each_layer: - x = torch.cat((x, coarse_feats), dim=1) - return self.fc_logits(x) - - def get_targets(self, rois, rel_roi_points, sampling_results, gt_masks, - cfg): - """Get training targets of MaskPointHead for all images. - - Args: - rois (Tensor): Region of Interest, shape (num_rois, 5). - rel_roi_points: Points coordinates relative to RoI, shape - (num_rois, num_points, 2). 
- sampling_results (:obj:`SamplingResult`): Sampling result after - sampling and assignment. - gt_masks (Tensor) : Ground truth segmentation masks of - corresponding boxes, shape (num_rois, height, width). - cfg (dict): Training cfg. - - Returns: - Tensor: Point target, shape (num_rois, num_points). - """ - - num_imgs = len(sampling_results) - rois_list = [] - rel_roi_points_list = [] - for batch_ind in range(num_imgs): - inds = (rois[:, 0] == batch_ind) - rois_list.append(rois[inds]) - rel_roi_points_list.append(rel_roi_points[inds]) - pos_assigned_gt_inds_list = [ - res.pos_assigned_gt_inds for res in sampling_results - ] - cfg_list = [cfg for _ in range(num_imgs)] - - point_targets = map(self._get_target_single, rois_list, - rel_roi_points_list, pos_assigned_gt_inds_list, - gt_masks, cfg_list) - point_targets = list(point_targets) - - if len(point_targets) > 0: - point_targets = torch.cat(point_targets) - - return point_targets - - def _get_target_single(self, rois, rel_roi_points, pos_assigned_gt_inds, - gt_masks, cfg): - """Get training target of MaskPointHead for each image.""" - num_pos = rois.size(0) - num_points = cfg.num_points - if num_pos > 0: - gt_masks_th = ( - gt_masks.to_tensor(rois.dtype, rois.device).index_select( - 0, pos_assigned_gt_inds)) - gt_masks_th = gt_masks_th.unsqueeze(1) - rel_img_points = rel_roi_point_to_rel_img_point( - rois, rel_roi_points, gt_masks_th.shape[2:]) - point_targets = point_sample(gt_masks_th, - rel_img_points).squeeze(1) - else: - point_targets = rois.new_zeros((0, num_points)) - return point_targets - - def loss(self, point_pred, point_targets, labels): - """Calculate loss for MaskPointHead. - - Args: - point_pred (Tensor): Point predication result, shape - (num_rois, num_classes, num_points). - point_targets (Tensor): Point targets, shape (num_roi, num_points). - labels (Tensor): Class label of corresponding boxes, - shape (num_rois, ) - - Returns: - dict[str, Tensor]: a dictionary of point loss components - """ - - loss = dict() - if self.class_agnostic: - loss_point = self.loss_point(point_pred, point_targets, - torch.zeros_like(labels)) - else: - loss_point = self.loss_point(point_pred, point_targets, labels) - loss['loss_point'] = loss_point - return loss - - def _get_uncertainty(self, mask_pred, labels): - """Estimate uncertainty based on pred logits. - - We estimate uncertainty as L1 distance between 0.0 and the logits - prediction in 'mask_pred' for the foreground class in `classes`. - - Args: - mask_pred (Tensor): mask predication logits, shape (num_rois, - num_classes, mask_height, mask_width). - - labels (list[Tensor]): Either predicted or ground truth label for - each predicted mask, of length num_rois. - - Returns: - scores (Tensor): Uncertainty scores with the most uncertain - locations having the highest uncertainty score, - shape (num_rois, 1, mask_height, mask_width) - """ - if mask_pred.shape[1] == 1: - gt_class_logits = mask_pred.clone() - else: - inds = torch.arange(mask_pred.shape[0], device=mask_pred.device) - gt_class_logits = mask_pred[inds, labels].unsqueeze(1) - return -torch.abs(gt_class_logits) - - def get_roi_rel_points_train(self, mask_pred, labels, cfg): - """Get ``num_points`` most uncertain points with random points during - train. - - Sample points in [0, 1] x [0, 1] coordinate space based on their - uncertainty. The uncertainties are calculated for each point using - '_get_uncertainty()' function that takes point's logit prediction as - input. 
- - Args: - mask_pred (Tensor): A tensor of shape (num_rois, num_classes, - mask_height, mask_width) for class-specific or class-agnostic - prediction. - labels (list): The ground truth class for each instance. - cfg (dict): Training config of point head. - - Returns: - point_coords (Tensor): A tensor of shape (num_rois, num_points, 2) - that contains the coordinates sampled points. - """ - num_points = cfg.num_points - oversample_ratio = cfg.oversample_ratio - importance_sample_ratio = cfg.importance_sample_ratio - assert oversample_ratio >= 1 - assert 0 <= importance_sample_ratio <= 1 - batch_size = mask_pred.shape[0] - num_sampled = int(num_points * oversample_ratio) - point_coords = torch.rand( - batch_size, num_sampled, 2, device=mask_pred.device) - point_logits = point_sample(mask_pred, point_coords) - # It is crucial to calculate uncertainty based on the sampled - # prediction value for the points. Calculating uncertainties of the - # coarse predictions first and sampling them for points leads to - # incorrect results. To illustrate this: assume uncertainty func( - # logits)=-abs(logits), a sampled point between two coarse - # predictions with -1 and 1 logits has 0 logits, and therefore 0 - # uncertainty value. However, if we calculate uncertainties for the - # coarse predictions first, both will have -1 uncertainty, - # and sampled point will get -1 uncertainty. - point_uncertainties = self._get_uncertainty(point_logits, labels) - num_uncertain_points = int(importance_sample_ratio * num_points) - num_random_points = num_points - num_uncertain_points - idx = torch.topk( - point_uncertainties[:, 0, :], k=num_uncertain_points, dim=1)[1] - shift = num_sampled * torch.arange( - batch_size, dtype=torch.long, device=mask_pred.device) - idx += shift[:, None] - point_coords = point_coords.view(-1, 2)[idx.view(-1), :].view( - batch_size, num_uncertain_points, 2) - if num_random_points > 0: - rand_roi_coords = torch.rand( - batch_size, num_random_points, 2, device=mask_pred.device) - point_coords = torch.cat((point_coords, rand_roi_coords), dim=1) - return point_coords - - def get_roi_rel_points_test(self, mask_pred, pred_label, cfg): - """Get ``num_points`` most uncertain points during test. - - Args: - mask_pred (Tensor): A tensor of shape (num_rois, num_classes, - mask_height, mask_width) for class-specific or class-agnostic - prediction. - pred_label (list): The predication class for each instance. - cfg (dict): Testing config of point head. - - Returns: - point_indices (Tensor): A tensor of shape (num_rois, num_points) - that contains indices from [0, mask_height x mask_width) of the - most uncertain points. - point_coords (Tensor): A tensor of shape (num_rois, num_points, 2) - that contains [0, 1] x [0, 1] normalized coordinates of the - most uncertain points from the [mask_height, mask_width] grid . 
- """ - num_points = cfg.subdivision_num_points - uncertainty_map = self._get_uncertainty(mask_pred, pred_label) - num_rois, _, mask_height, mask_width = uncertainty_map.shape - h_step = 1.0 / mask_height - w_step = 1.0 / mask_width - - uncertainty_map = uncertainty_map.view(num_rois, - mask_height * mask_width) - num_points = min(mask_height * mask_width, num_points) - point_indices = uncertainty_map.topk(num_points, dim=1)[1] - point_coords = uncertainty_map.new_zeros(num_rois, num_points, 2) - point_coords[:, :, 0] = w_step / 2.0 + (point_indices % - mask_width).float() * w_step - point_coords[:, :, 1] = h_step / 2.0 + (point_indices // - mask_width).float() * h_step - return point_indices, point_coords diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/core/seg/builder.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/core/seg/builder.py deleted file mode 100644 index db61f03d4abb2072f2532ce4429c0842495e015b..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/core/seg/builder.py +++ /dev/null @@ -1,8 +0,0 @@ -from annotator.uniformer.mmcv.utils import Registry, build_from_cfg - -PIXEL_SAMPLERS = Registry('pixel sampler') - - -def build_pixel_sampler(cfg, **default_args): - """Build pixel sampler for segmentation map.""" - return build_from_cfg(cfg, PIXEL_SAMPLERS, default_args) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/fileio/handlers/yaml_handler.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/fileio/handlers/yaml_handler.py deleted file mode 100644 index c5aa2eea1e8c76f8baf753d1c8c959dee665e543..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/fileio/handlers/yaml_handler.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import yaml - -try: - from yaml import CLoader as Loader, CDumper as Dumper -except ImportError: - from yaml import Loader, Dumper - -from .base import BaseFileHandler # isort:skip - - -class YamlHandler(BaseFileHandler): - - def load_from_fileobj(self, file, **kwargs): - kwargs.setdefault('Loader', Loader) - return yaml.load(file, **kwargs) - - def dump_to_fileobj(self, obj, file, **kwargs): - kwargs.setdefault('Dumper', Dumper) - yaml.dump(obj, file, **kwargs) - - def dump_to_str(self, obj, **kwargs): - kwargs.setdefault('Dumper', Dumper) - return yaml.dump(obj, **kwargs) diff --git a/spaces/abyildirim/inst-inpaint/ldm/modules/distributions/distributions.py b/spaces/abyildirim/inst-inpaint/ldm/modules/distributions/distributions.py deleted file mode 100644 index f2b8ef901130efc171aa69742ca0244d94d3f2e9..0000000000000000000000000000000000000000 --- a/spaces/abyildirim/inst-inpaint/ldm/modules/distributions/distributions.py +++ /dev/null @@ -1,92 +0,0 @@ -import torch -import numpy as np - - -class AbstractDistribution: - def sample(self): - raise NotImplementedError() - - def mode(self): - raise NotImplementedError() - - -class DiracDistribution(AbstractDistribution): - def __init__(self, value): - self.value = value - - def sample(self): - return self.value - - def mode(self): - return self.value - - -class DiagonalGaussianDistribution(object): - def __init__(self, parameters, deterministic=False): - self.parameters = parameters - self.mean, self.logvar = torch.chunk(parameters, 2, dim=1) - self.logvar = torch.clamp(self.logvar, -30.0, 20.0) - self.deterministic = deterministic - self.std = torch.exp(0.5 * self.logvar) - self.var = torch.exp(self.logvar) - if self.deterministic: - self.var = self.std = torch.zeros_like(self.mean).to(device=self.parameters.device) - - def sample(self): - x = self.mean + self.std * torch.randn(self.mean.shape).to(device=self.parameters.device) - return x - - def kl(self, other=None): - if self.deterministic: - return torch.Tensor([0.]) - else: - if other is None: - return 0.5 * torch.sum(torch.pow(self.mean, 2) - + self.var - 1.0 - self.logvar, - dim=[1, 2, 3]) - else: - return 0.5 * torch.sum( - torch.pow(self.mean - other.mean, 2) / other.var - + self.var / other.var - 1.0 - self.logvar + other.logvar, - dim=[1, 2, 3]) - - def nll(self, sample, dims=[1,2,3]): - if self.deterministic: - return torch.Tensor([0.]) - logtwopi = np.log(2.0 * np.pi) - return 0.5 * torch.sum( - logtwopi + self.logvar + torch.pow(sample - self.mean, 2) / self.var, - dim=dims) - - def mode(self): - return self.mean - - -def normal_kl(mean1, logvar1, mean2, logvar2): - """ - source: https://github.com/openai/guided-diffusion/blob/27c20a8fab9cb472df5d6bdd6c8d11c8f430b924/guided_diffusion/losses.py#L12 - Compute the KL divergence between two gaussians. - Shapes are automatically broadcasted, so batches can be compared to - scalars, among other use cases. - """ - tensor = None - for obj in (mean1, logvar1, mean2, logvar2): - if isinstance(obj, torch.Tensor): - tensor = obj - break - assert tensor is not None, "at least one argument must be a Tensor" - - # Force variances to be Tensors. Broadcasting helps convert scalars to - # Tensors, but it does not work for torch.exp(). 
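    # Editorial note: the expression returned below is the standard closed-form
    # KL divergence between two diagonal Gaussians, written in log-variance form:
    #   KL(N(mean1, var1) || N(mean2, var2))
    #     = 0.5 * (logvar2 - logvar1 + var1 / var2 + (mean1 - mean2) ** 2 / var2 - 1)
    # where var1 / var2 = exp(logvar1 - logvar2) and 1 / var2 = exp(-logvar2).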
- logvar1, logvar2 = [ - x if isinstance(x, torch.Tensor) else torch.tensor(x).to(tensor) - for x in (logvar1, logvar2) - ] - - return 0.5 * ( - -1.0 - + logvar2 - - logvar1 - + torch.exp(logvar1 - logvar2) - + ((mean1 - mean2) ** 2) * torch.exp(-logvar2) - ) diff --git a/spaces/afmck/stable-diffusion-inpainting-segmentation/app.py b/spaces/afmck/stable-diffusion-inpainting-segmentation/app.py deleted file mode 100644 index 4f7a194c77a3bf54ed8734df0b2f16b387dc1b21..0000000000000000000000000000000000000000 --- a/spaces/afmck/stable-diffusion-inpainting-segmentation/app.py +++ /dev/null @@ -1,239 +0,0 @@ -import io -import requests -import numpy as np -import torch -import os -from PIL import Image -from typing import List, Optional -from functools import reduce -from argparse import ArgumentParser - -import gradio as gr - -from transformers import DetrFeatureExtractor, DetrForSegmentation, DetrConfig -from transformers.models.detr.feature_extraction_detr import rgb_to_id - -from diffusers import StableDiffusionInpaintPipeline, DPMSolverMultistepScheduler - -parser = ArgumentParser() -parser.add_argument('--disable-cuda', action='store_true') -parser.add_argument('--attention-slicing', action='store_true') -args = parser.parse_args() - -auth_token = os.environ.get("READ_TOKEN") -try_cuda = not args.disable_cuda - -torch.inference_mode() -torch.no_grad() - -# Device helper -def get_device(try_cuda=True): - return torch.device('cuda' if try_cuda and torch.cuda.is_available() else 'cpu') - -device = get_device(try_cuda=try_cuda) - -# Load segmentation models -def load_segmentation_models(model_name: str = 'facebook/detr-resnet-50-panoptic'): - feature_extractor = DetrFeatureExtractor.from_pretrained(model_name) - model = DetrForSegmentation.from_pretrained(model_name) - cfg = DetrConfig.from_pretrained(model_name) - - return feature_extractor, model, cfg - -# Load diffusion pipeline -def load_diffusion_pipeline(model_name: str = 'stabilityai/stable-diffusion-2-inpainting'): - return StableDiffusionInpaintPipeline.from_pretrained( - model_name, - revision='fp16', - torch_dtype=torch.float16 if try_cuda and torch.cuda.is_available() else torch.float32, - use_auth_token=auth_token - ) - -def min_pool(x: torch.Tensor, kernel_size: int): - pad_size = (kernel_size - 1) // 2 - return -torch.nn.functional.max_pool2d(-x, kernel_size, (1, 1), padding=pad_size) - -def max_pool(x: torch.Tensor, kernel_size: int): - pad_size = (kernel_size - 1) // 2 - return torch.nn.functional.max_pool2d(x, kernel_size, (1, 1), padding=pad_size) - -# Apply min-max pooling to clean up mask -def clean_mask(mask, max_kernel: int = 23, min_kernel: int = 5): - mask = torch.Tensor(mask[None, None]).float().to(device) - mask = min_pool(mask, min_kernel) - mask = max_pool(mask, max_kernel) - mask = mask.bool().squeeze().cpu().numpy() - return mask - - -feature_extractor, segmentation_model, segmentation_cfg = load_segmentation_models() -pipe = load_diffusion_pipeline() -pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) - -segmentation_model = segmentation_model.to(device) -pipe = pipe.to(device) -if args.attention_slicing: - pipe.enable_attention_slicing() - -# Callback function that runs segmentation and updates CheckboxGroup -def fn_segmentation(image, max_kernel, min_kernel): - inputs = feature_extractor(images=image, return_tensors="pt").to(device) - outputs = segmentation_model(**inputs) - - processed_sizes = torch.as_tensor(inputs["pixel_values"].shape[-2:]).unsqueeze(0) - result = 
feature_extractor.post_process_panoptic(outputs, processed_sizes)[0] - - panoptic_seg = Image.open(io.BytesIO(result["png_string"])).resize((image.width, image.height)) - panoptic_seg = np.array(panoptic_seg, dtype=np.uint8) - - panoptic_seg_id = rgb_to_id(panoptic_seg) - - raw_masks = [] - for s in result['segments_info']: - m = panoptic_seg_id == s['id'] - raw_masks.append(m.astype(np.uint8) * 255) - - checkbox_choices = [f"{s['id']}:{segmentation_cfg.id2label[s['category_id']]}" for s in result['segments_info']] - - checkbox_group = gr.CheckboxGroup.update( - choices=checkbox_choices - ) - - return raw_masks, checkbox_group, gr.Image.update(value=np.zeros((image.height, image.width))), gr.Image.update(value=image) - -# Callback function that updates the displayed mask based on selected checkboxes -def fn_update_mask( - image: Image, - masks: List[np.array], - masks_enabled: List[int], - max_kernel: int, - min_kernel: int, - invert_mask: bool - ): - masks_enabled = [int(m.split(':')[0]) for m in masks_enabled] - combined_mask = reduce(lambda x, y: x | y, [masks[i] for i in masks_enabled], np.zeros_like(masks[0], dtype=bool)) - - if invert_mask: - combined_mask = ~combined_mask - - combined_mask = clean_mask(combined_mask, max_kernel, min_kernel) - - masked_image = np.array(image).copy() - masked_image[combined_mask] = 0.0 - - return combined_mask.astype(np.uint8) * 255, Image.fromarray(masked_image) - -# Callback function that runs diffusion given the current image, mask and prompt. -def fn_diffusion( - prompt: str, - masked_image: Image, - mask: Image, - num_diffusion_steps: int, - guidance_scale: float, - negative_prompt: Optional[str] = None, - ): - if len(negative_prompt) == 0: - negative_prompt = None - - # Resize image to a more stable diffusion friendly format. - # TODO: remove magic number - STABLE_DIFFUSION_SMALL_EDGE = 512 - - w, h = masked_image.size - is_width_larger = w > h - resize_ratio = STABLE_DIFFUSION_SMALL_EDGE / (h if is_width_larger else w) - - new_width = int(w * resize_ratio) if is_width_larger else STABLE_DIFFUSION_SMALL_EDGE - new_height = STABLE_DIFFUSION_SMALL_EDGE if is_width_larger else int(h * resize_ratio) - - new_width += 8 - (new_width % 8) if is_width_larger else 0 - new_height += 0 if is_width_larger else 8 - (new_height % 8) - - mask = Image.fromarray(mask).convert("RGB").resize((new_width, new_height)) - masked_image = masked_image.convert("RGB").resize((new_width, new_height)) - - # Run diffusion - inpainted_image = pipe( - height=new_height, - width=new_width, - prompt=prompt, - image=masked_image, - mask_image=mask, - num_inference_steps=num_diffusion_steps, - guidance_scale=guidance_scale, - negative_prompt=negative_prompt - ).images[0] - - # Resize back to the original size - inpainted_image = inpainted_image.resize((w, h)) - - return inpainted_image - -demo = gr.Blocks(css=open('app.css').read()) - -with demo: - gr.HTML(open('app_header.html').read()) - - if not try_cuda or not torch.cuda.is_available(): - gr.HTML('<div class="alert alert-warning" role="alert" style="color:red"><b>Warning: GPU not available! 
Diffusion will be slow.</b></div>') - - # Input image control - input_image = gr.Image(value="example.png", type='pil', label="Input Image") - # Combined mask controls - bt_masks = gr.Button("Compute Masks") - with gr.Row(): - mask_image = gr.Image(type='numpy', label="Diffusion Mask") - masked_image = gr.Image(type='pil', label="Masked Image") - mask_storage = gr.State() - - # Mask editing controls - with gr.Row(): - max_slider = gr.Slider(minimum=1, maximum=99, value=23, step=2, label="Mask Overflow") - min_slider = gr.Slider(minimum=1, maximum=99, value=5, step=2, label="Mask Denoising") - - with gr.Row(): - invert_mask = gr.Checkbox(label="Invert Mask") - mask_checkboxes = gr.CheckboxGroup(interactive=True, label="Mask Selection") - - # Diffusion controls and output - with gr.Row(): - with gr.Column(): - prompt = gr.Textbox("An angry dog floating in outer deep space. Twinkling stars in the background. High definition.", label="Prompt") - negative_prompt = gr.Textbox(label="Negative Prompt") - with gr.Column(): - steps_slider = gr.Slider(minimum=1, maximum=100, value=50, label="Inference Steps") - guidance_slider = gr.Slider(minimum=0.0, maximum=50.0, value=7.5, step=0.1, label="Guidance Scale") - bt_diffusion = gr.Button("Run Diffusion") - - inpainted_image = gr.Image(type='pil', label="Inpainted Image") - - # TODO: saw a better way of handling many inputs online.. - # forgot where though - update_mask_inputs = [input_image, mask_storage, mask_checkboxes, max_slider, min_slider, invert_mask] - update_mask_outputs = [mask_image, masked_image] - - # Clear checkbox group on input image change - input_image.change(lambda: gr.CheckboxGroup.update(choices=[], value=[]), outputs=mask_checkboxes) - input_image.change(lambda: gr.Checkbox.update(value=False), outputs=invert_mask) - - # Segmentation button callback - bt_masks.click(fn_segmentation, inputs=[input_image, max_slider, min_slider], outputs=[mask_storage, mask_checkboxes, mask_image, masked_image]) - - # Update mask callbacks - max_slider.change(fn_update_mask, inputs=update_mask_inputs, outputs=update_mask_outputs, show_progress=False) - min_slider.change(fn_update_mask, inputs=update_mask_inputs, outputs=update_mask_outputs, show_progress=False) - mask_checkboxes.change(fn_update_mask, inputs=update_mask_inputs, outputs=update_mask_outputs, show_progress=False) - invert_mask.change(fn_update_mask, inputs=update_mask_inputs, outputs=update_mask_outputs, show_progress=False) - - # Diffusion button callback - bt_diffusion.click(fn_diffusion, inputs=[ - prompt, - masked_image, - mask_image, - steps_slider, - guidance_slider, - negative_prompt - ], outputs=inpainted_image) - gr.HTML(open('app_license.html').read()) - -demo.launch() diff --git a/spaces/agueroooooooooo/Transport_Mode_Detector/README.md b/spaces/agueroooooooooo/Transport_Mode_Detector/README.md deleted file mode 100644 index 4f52d56cc49de29a02f653519ab264b8affabf17..0000000000000000000000000000000000000000 --- a/spaces/agueroooooooooo/Transport_Mode_Detector/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Transport_Mode_Detector -emoji: 🚀 -colorFrom: purple -colorTo: yellow -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) 
- -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/akhaliq/GFPGAN/app.py b/spaces/akhaliq/GFPGAN/app.py deleted file mode 100644 index d9d96de955246af5338eb7187dc55c46732d45af..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/GFPGAN/app.py +++ /dev/null @@ -1,116 +0,0 @@ -import os - -import cv2 -import gradio as gr -import torch -from basicsr.archs.srvgg_arch import SRVGGNetCompact -from gfpgan.utils import GFPGANer -from realesrgan.utils import RealESRGANer - -os.system("pip freeze") -os.system("wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-x4v3.pth -P .") -os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.2.pth -P .") -os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.3.pth -P .") - -torch.hub.download_url_to_file( - 'https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', - 'lincoln.jpg') -torch.hub.download_url_to_file( - 'https://user-images.githubusercontent.com/17445847/187400315-87a90ac9-d231-45d6-b377-38702bd1838f.jpg', - 'AI-generate.jpg') -torch.hub.download_url_to_file( - 'https://user-images.githubusercontent.com/17445847/187400981-8a58f7a4-ef61-42d9-af80-bc6234cef860.jpg', - 'Blake_Lively.jpg') -torch.hub.download_url_to_file( - 'https://user-images.githubusercontent.com/17445847/187401133-8a3bf269-5b4d-4432-b2f0-6d26ee1d3307.png', - '10045.png') - -# background enhancer with RealESRGAN -model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=32, upscale=4, act_type='prelu') -model_path = 'realesr-general-x4v3.pth' -half = True if torch.cuda.is_available() else False -upsampler = RealESRGANer(scale=4, model_path=model_path, model=model, tile=0, tile_pad=10, pre_pad=0, half=half) - -# Use GFPGAN for face enhancement -face_enhancer_v3 = GFPGANer( - model_path='GFPGANv1.3.pth', upscale=2, arch='clean', channel_multiplier=2, bg_upsampler=upsampler) -face_enhancer_v2 = GFPGANer( - model_path='GFPGANv1.2.pth', upscale=2, arch='clean', channel_multiplier=2, bg_upsampler=upsampler) -os.makedirs('output', exist_ok=True) - - -def inference(img, version, scale): - print(img, version, scale) - try: - img = cv2.imread(img, cv2.IMREAD_UNCHANGED) - if len(img.shape) == 3 and img.shape[2] == 4: - img_mode = 'RGBA' - else: - img_mode = None - - h, w = img.shape[0:2] - if h < 300: - img = cv2.resize(img, (w * 2, h * 2), interpolation=cv2.INTER_LANCZOS4) - - if version == 'v1.2': - face_enhancer = face_enhancer_v2 - else: - face_enhancer = face_enhancer_v3 - try: - _, _, output = face_enhancer.enhance(img, has_aligned=False, only_center_face=False, paste_back=True) - except RuntimeError as error: - print('Error', error) - else: - extension = 'png' - - try: - if scale != 2: - interpolation = cv2.INTER_AREA if scale < 2 else cv2.INTER_LANCZOS4 - h, w = img.shape[0:2] - output = cv2.resize(output, (int(w * scale / 2), int(h * scale / 2)), interpolation=interpolation) - except Exception as error: - print('wrong scale input.', error) - if img_mode == 
'RGBA': # RGBA images should be saved in png format - extension = 'png' - else: - extension = 'jpg' - save_path = f'output/out.{extension}' - cv2.imwrite(save_path, output) - - output = cv2.cvtColor(output, cv2.COLOR_BGR2RGB) - return output, save_path - except Exception as error: - print('global exception', error) - return None, None - - -title = "GFPGAN: Practical Face Restoration Algorithm UPDATE: Space for GFPGAN is now at <a href='https://huggingface.co/spaces/Xintao/GFPGAN' target='_blank'>https://huggingface.co/spaces/Xintao/GFPGAN</a>" -description = r"""Gradio demo for <a href='https://github.com/TencentARC/GFPGAN' target='_blank'><b>GFPGAN: Towards Real-World Blind Face Restoration with Generative Facial Prior</b></a>.<br> -It can be used to restore your **old photos** or improve **AI-generated faces**.<br> -To use it, simply upload your image.<br> -If GFPGAN is helpful, please help to ⭐ the <a href='https://github.com/TencentARC/GFPGAN' target='_blank'>Github Repo</a> and recommend it to your friends 😊 -""" -article = r""" - -[![download](https://img.shields.io/github/downloads/TencentARC/GFPGAN/total.svg)](https://github.com/TencentARC/GFPGAN/releases) -[![GitHub Stars](https://img.shields.io/github/stars/TencentARC/GFPGAN?style=social)](https://github.com/TencentARC/GFPGAN) -[![arXiv](https://img.shields.io/badge/arXiv-Paper-<COLOR>.svg)](https://arxiv.org/abs/2101.04061) - -If you have any question, please email 📧 `xintao.wang@outlook.com` or `xintaowang@tencent.com`. - -<center><img src='https://visitor-badge.glitch.me/badge?page_id=akhaliq_GFPGAN' alt='visitor badge'></center> -""" -gr.Interface( - inference, [ - gr.inputs.Image(type="filepath", label="Input"), - gr.inputs.Radio(['v1.2', 'v1.3'], type="value", default='v1.3', label='GFPGAN version'), - gr.inputs.Number(label="Rescaling factor", default=2) - ], [ - gr.outputs.Image(type="numpy", label="Output (The whole image)"), - gr.outputs.File(label="Download the output image") - ], - title=title, - description=description, - article=article, - examples=[['AI-generate.jpg', 'v1.3', 2], ['lincoln.jpg', 'v1.3', 2], ['Blake_Lively.jpg', 'v1.3', 2], - ['10045.png', 'v1.3', 2]]).launch() diff --git a/spaces/akhaliq/Kapao/utils/loggers/wandb/__init__.py b/spaces/akhaliq/Kapao/utils/loggers/wandb/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/akhaliq/pedalboard/README.md b/spaces/akhaliq/pedalboard/README.md deleted file mode 100644 index 60b43a984055a5c28a49ef2aac10d9374583e8ea..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/pedalboard/README.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -title: Pedalboard -emoji: 🐠 -colorFrom: blue -colorTo: pink -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/alaka/tinder-data-explorer/tinder_plots.py b/spaces/alaka/tinder-data-explorer/tinder_plots.py deleted file mode 100644 index 710ed7fe194b4954f6f86239ea5c55d41668e8a4..0000000000000000000000000000000000000000 --- a/spaces/alaka/tinder-data-explorer/tinder_plots.py +++ /dev/null @@ -1,110 +0,0 @@ -import json -from lib2to3.pgen2.pgen import DFAState -import plotly.express as px -import pandas as pd -from tqdm.auto import tqdm -import plotly.graph_objects as go -import tempfile -import copy -import zipfile -import os - -TEMPLATE_PLOT = None # "ggplot2" - - -def get_usage_df(file): - if isinstance(file, tempfile._TemporaryFileWrapper): - file = file.name - - if file.endswith(".zip"): - with tempfile.TemporaryDirectory() as tmp_dir: - with zipfile.ZipFile(file, "r") as zip_ref: - zip_ref.extractall(tmp_dir) - file = os.path.join(tmp_dir, "data.json") - with open(file) as f: - data = json.load(f) - else: - with open(file) as f: - data = json.load(f) - - new_data = {} - new_data["date"] = [] - new_data["value"] = [] - new_data["raw_value"] = [] - new_data["kind"] = [] - for k in data["Usage"]: - if k in ["idfa", "advertising_id"]: - continue - new_value = True - for date, v in data["Usage"][k].items(): - new_data["date"].append(date) - new_data["kind"].append(k) - new_data["raw_value"].append(v) - if len(new_data["value"]) > 0 and not new_value: - v += new_data["value"][-1] - new_data["value"].append(v) - new_value = False - df = pd.DataFrame(new_data) - return df - - -def tinder_usage_overview(df): - fig = px.scatter( - df, - "date", - "value", - color="kind", - symbol="id", - opacity=0.8, - template=TEMPLATE_PLOT, - labels={"x": "date", "y": "Cumulative quantity"}, - ) - return fig - - -def tinder_seasonality(df): - df = _separate_df_columns(df) - tmp = pd.to_datetime(df["date"]) - - # create season from date - season = (tmp.dt.month - 1) // 3 - df["season"] = season - df["season"][df["season"] == 0] = "winter" - df["season"][df["season"] == 1] = "spring" - df["season"][df["season"] == 2] = "summer" - df["season"][df["season"] == 3] = "fall" - - fig = px.histogram( - df, - x="date", - y="matches", - color="season", - pattern_shape="id", - template=TEMPLATE_PLOT, - labels={"x": "date", "y": "Matches"}, - ) - return fig - - -def _separate_df_columns(df): - df_orig = copy.deepcopy(df) - df = copy.deepcopy(df) - df = df[df["kind"] == "matches"] - df["matches"] = df["raw_value"].values - df["swipes_likes"] = df_orig["raw_value"][df_orig["kind"] == "swipes_likes"].values - return df - - -def tinder_colored_matches(df): - df = _separate_df_columns(df) - fig = px.scatter( - df, - x="date", - y="matches", - symbol="id", - hover_data=["swipes_likes"], - color="swipes_likes", - template=TEMPLATE_PLOT, - labels={"x": "date", "y": "Matches"}, - ) - return fig diff --git a/spaces/albarji/mixture-of-diffusers/app.py b/spaces/albarji/mixture-of-diffusers/app.py deleted file mode 100644 index 241369ee977f2339e8947556ae6c74d1dfa19214..0000000000000000000000000000000000000000 --- a/spaces/albarji/mixture-of-diffusers/app.py +++ /dev/null @@ -1,196 +0,0 @@ -import gradio as gr -import torch - -from diffusers import LMSDiscreteScheduler -from mixdiff import StableDiffusionCanvasPipeline, Text2ImageRegion - -article = """ -## Usage - -In this demo you can use Mixture of Diffusers to configure a canvas made up of 3 diffusion regions. Play around with the prompts and guidance values in each region! You can also change increment the overlap between regions if seams appear in the image. 
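Under the hood the demo simply describes the canvas as a list of `Text2ImageRegion` objects handed to a `StableDiffusionCanvasPipeline`, exactly as in the `generate()` function further down in this file. A rough, illustrative sketch of the same three-region setup (the prompts, guidance values, step count and seed here are placeholders, not the demo's defaults):

```python
from diffusers import LMSDiscreteScheduler
from mixdiff import StableDiffusionCanvasPipeline, Text2ImageRegion

# Build the canvas pipeline (same model and scheduler this demo uses).
scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012,
                                 beta_schedule="scaled_linear", num_train_timesteps=1000)
pipeline = StableDiffusionCanvasPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", scheduler=scheduler)

# Three 640x640 tiles side by side, overlapping by 256 pixels.
tile, overlap = 640, 256
image = pipeline(
    canvas_height=tile,
    canvas_width=tile + (tile - overlap) * 2,
    regions=[
        Text2ImageRegion(0, tile, 0, tile,
                         prompt="a charming house in the countryside", guidance_scale=8),
        Text2ImageRegion(0, tile, tile - overlap, 2 * tile - overlap,
                         prompt="a dirt road crossing pastures", guidance_scale=8),
        Text2ImageRegion(0, tile, 2 * (tile - overlap), 2 * (tile - overlap) + tile,
                         prompt="an old rusty giant robot", guidance_scale=8),
    ],
    num_inference_steps=50,
    seed=12345,
)["sample"][0]
```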
- -In the full version of Mixture of Diffusers you will find further freedom to configure the regions in the canvas. Check the [github repo](https://github.com/albarji/mixture-of-diffusers)! - -## Motivation - -Current image generation methods, such as Stable Diffusion, struggle to position objects at specific locations. While the content of the generated image (somewhat) reflects the objects present in the prompt, it is difficult to frame the prompt in a way that creates an specific composition. For instance, take a prompt expressing a complex composition such as - -> A charming house in the countryside on the left, -> in the center a dirt road in the countryside crossing pastures, -> on the right an old and rusty giant robot lying on a dirt road, -> by jakub rozalski, -> sunset lighting on the left and center, dark sunset lighting on the right -> elegant, highly detailed, smooth, sharp focus, artstation, stunning masterpiece - -Out of a sample of 20 Stable Diffusion generations with different seeds, the generated images that align best with the prompt are the following: - -<table> - <tr> - <td><img src="https://user-images.githubusercontent.com/9654655/195373001-ad23b7c4-f5b1-4e5b-9aa1-294441ed19ed.png" width="300"></td> - <td><img src="https://user-images.githubusercontent.com/9654655/195373174-8d85dd96-310e-48fa-b112-d9902685f22e.png" width="300"></td> - <td><img src="https://user-images.githubusercontent.com/9654655/195373200-59eeec1e-e1b8-464d-b72e-e28a9004d269.png" width="300"></td> - </tr> -</table> - -The method proposed here strives to provide a better tool for image composition by using several diffusion processes in parallel, each configured with a specific prompt and settings, and focused on a particular region of the image. You can try it out in the example above! The mixture of diffusion processes is done in a way that harmonizes the generation process, preventing "seam" effects in the generated image. - -Using several diffusion processes in parallel has also practical advantages when generating very large images, as the GPU memory requirements are similar to that of generating an image of the size of a single tile. - -## Responsible use - -The same recommendations as in Stable Diffusion apply, so please check the corresponding [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4). - -More broadly speaking, always bear this in mind: YOU are responsible for the content you create using this tool. Do not fully blame, credit, or place the responsibility on the software. - -## Gallery - -Here are some relevant illustrations created using this software (and putting quite a few hours into them!). 
- -### Darkness Dawning - -![Darkness Dawning](https://images-wixmp-ed30a86b8c4ca887773594c2.wixmp.com/f/cd1358aa-80d5-4c59-b95b-cdfde5dcc4f5/dfidq8n-6da9a886-9f1c-40ae-8341-d77af9552395.png?token=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ1cm46YXBwOjdlMGQxODg5ODIyNjQzNzNhNWYwZDQxNWVhMGQyNmUwIiwiaXNzIjoidXJuOmFwcDo3ZTBkMTg4OTgyMjY0MzczYTVmMGQ0MTVlYTBkMjZlMCIsIm9iaiI6W1t7InBhdGgiOiJcL2ZcL2NkMTM1OGFhLTgwZDUtNGM1OS1iOTViLWNkZmRlNWRjYzRmNVwvZGZpZHE4bi02ZGE5YTg4Ni05ZjFjLTQwYWUtODM0MS1kNzdhZjk1NTIzOTUucG5nIn1dXSwiYXVkIjpbInVybjpzZXJ2aWNlOmZpbGUuZG93bmxvYWQiXX0.ff6XoVBPdUbcTLcuHUpQMPrD2TaXBM_s6HfRhsARDw0) - -### Yog-Sothoth - -![Yog-Sothoth](https://images-wixmp-ed30a86b8c4ca887773594c2.wixmp.com/f/cd1358aa-80d5-4c59-b95b-cdfde5dcc4f5/dfidsq4-174dd428-2c5a-48f6-a78f-9441fb3cffea.png?token=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ1cm46YXBwOjdlMGQxODg5ODIyNjQzNzNhNWYwZDQxNWVhMGQyNmUwIiwiaXNzIjoidXJuOmFwcDo3ZTBkMTg4OTgyMjY0MzczYTVmMGQ0MTVlYTBkMjZlMCIsIm9iaiI6W1t7InBhdGgiOiJcL2ZcL2NkMTM1OGFhLTgwZDUtNGM1OS1iOTViLWNkZmRlNWRjYzRmNVwvZGZpZHNxNC0xNzRkZDQyOC0yYzVhLTQ4ZjYtYTc4Zi05NDQxZmIzY2ZmZWEucG5nIn1dXSwiYXVkIjpbInVybjpzZXJ2aWNlOmZpbGUuZG93bmxvYWQiXX0.X42zWgsk3lYnYwuEgkifRFRH2km-npHvrdleDN3m6bA) - -### Looking through the eyes of giants - -![Looking through the eyes of giants](https://user-images.githubusercontent.com/9654655/218307148-95ce88b6-b2a3-458d-b469-daf5bd56e3a7.jpg) - -[Follow me on DeviantArt for more!](https://www.deviantart.com/albarji) - -## Acknowledgements - -First and foremost, my most sincere appreciation for the [Stable Diffusion team](https://stability.ai/blog/stable-diffusion-public-release) for releasing such an awesome model, and for letting me take part of the closed beta. Kudos also to the Hugging Face community and developers for implementing the [Diffusers library](https://github.com/huggingface/diffusers). - -Thanks to Hugging Face for providing support and a GPU spaces for running this demo. Thanks also to Instituto de Ingeniería del Conocimiento and Grupo de Aprendizaje Automático (Universidad Autónoma de Madrid) for providing GPU resources for testing and experimenting this library. - -Thanks also to the vibrant communities of the Stable Diffusion discord channel and [Lexica](https://lexica.art/), where I have learned about many amazing artists and styles. And to my friend Abril for sharing many tips on cool artists! 
-""" - - -# Creater scheduler and model (similar to StableDiffusionPipeline) -scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000) -pipeline = StableDiffusionCanvasPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=scheduler).to("cuda" if torch.cuda.is_available() else "cpu") - -def generate(prompt1, prompt2, prompt3, gc1, gc2, gc3, overlap, steps, seed): - """Mixture of Diffusers generation""" - tile_width = 640 - tile_height = 640 - return pipeline( - canvas_height=tile_height, - canvas_width=tile_width + (tile_width - overlap) * 2, - regions=[ - Text2ImageRegion(0, tile_height, 0, tile_width, guidance_scale=gc1, - prompt=prompt1), - Text2ImageRegion(0, tile_height, tile_width - overlap, tile_width - overlap + tile_width, guidance_scale=gc2, - prompt=prompt2), - Text2ImageRegion(0, tile_height, (tile_width - overlap) * 2, (tile_width - overlap) * 2 + tile_width, guidance_scale=gc3, - prompt=prompt3), - ], - num_inference_steps=steps, - seed=seed, - )["sample"][0] - -with gr.Blocks(title="Mixture of Diffusers") as demo: - gr.HTML( - """ - <div style="text-align: center; max-width: 700px; margin: 0 auto;"> - <div - style=" - display: inline-flex; - align-items: center; - gap: 0.8rem; - font-size: 1.75rem; - " - > - <h1 style="font-weight: 1000; margin-bottom: 7px; line-height: normal;"> - Mixture of Diffusers - </h1> - </div> - <p style="margin-bottom: 10px; font-size: 94%"> - <a href="https://arxiv.org/abs/2302.02412">[Paper]</a> <a href="https://github.com/albarji/mixture-of-diffusers">[Code in Github]</a> <a href="https://huggingface.co/spaces/albarji/mixture-of-diffusers?duplicate=true"> - </p> - </div> - """ - ) - gr.HTML(""" - <p>For faster inference without waiting in queue, you may duplicate the space and upgrade to GPU in settings. 
- <br/> - <a href="https://huggingface.co/spaces/albarji/mixture-of-diffusers?duplicate=true"> - <img style="margin-top: 0em; margin-bottom: 0em" src="https://bit.ly/3gLdBN6" alt="Duplicate Space"></a> - <p/> - """) - with gr.Row(): - with gr.Column(scale=1): - gr.Markdown("### Left region") - left_prompt = gr.Textbox(lines=2, label="Prompt (what do you want to see in the left side of the image?)") - left_gs = gr.Slider(minimum=0, maximum=15, value=8, step=1, label="Guidance scale") - with gr.Column(scale=1): - gr.Markdown("### Center region") - center_prompt = gr.Textbox(lines=2, label="Prompt (what do you want to see in the center of the image?)") - center_gs = gr.Slider(minimum=0, maximum=15, value=8, step=1, label="Guidance scale") - with gr.Column(scale=1): - gr.Markdown("### Right region") - right_prompt = gr.Textbox(lines=2, label="Prompt (what do you want to see in the right side of the image?)") - right_gs = gr.Slider(minimum=0, maximum=15, value=8, step=1, label="Guidance scale") - gr.Markdown("### General parameters") - with gr.Row(): - with gr.Column(scale=1): - overlap = gr.Slider(minimum=128, maximum=320, value=256, step=8, label="Overlap between diffusion regions") - with gr.Column(scale=1): - steps = gr.Slider(minimum=1, maximum=50, value=15, step=1, label="Number of diffusion steps") - with gr.Column(scale=1): - seed = gr.Number(value=12345, precision=0, label="Random seed") - with gr.Row(): - button = gr.Button(value="Generate") - with gr.Row(): - output = gr.Image(label="Generated image") - with gr.Row(): - gr.Examples( - examples=[ - [ - "A charming house in the countryside, by jakub rozalski, sunset lighting, elegant, highly detailed, smooth, sharp focus, artstation, stunning masterpiece", - "A dirt road in the countryside crossing pastures, by jakub rozalski, sunset lighting, elegant, highly detailed, smooth, sharp focus, artstation, stunning masterpiece", - "An old and rusty giant robot lying on a dirt road, by jakub rozalski, dark sunset lighting, elegant, highly detailed, smooth, sharp focus, artstation, stunning masterpiece", - 8, 8, 8, - 256, - 50, - 7178915308 - ], - [ - "Abstract decorative illustration, by joan miro and gustav klimt and marlina vera and loish, elegant, intricate, highly detailed, smooth, sharp focus, vibrant colors, artstation, stunning masterpiece", - "Abstract decorative illustration, by joan miro and gustav klimt and marlina vera and loish, elegant, intricate, highly detailed, smooth, sharp focus, vibrant colors, artstation, stunning masterpiece", - "Abstract decorative illustration, by joan miro and gustav klimt and marlina vera and loish, elegant, intricate, highly detailed, smooth, sharp focus, vibrant colors, artstation, stunning masterpiece", - 8, 8, 8, - 256, - 35, - 21156517 - ], - [ - "Magical diagrams and runes written with chalk on a blackboard, elegant, intricate, highly detailed, smooth, sharp focus, artstation, stunning masterpiece", - "Magical diagrams and runes written with chalk on a blackboard, elegant, intricate, highly detailed, smooth, sharp focus, artstation, stunning masterpiece", - "Magical diagrams and runes written with chalk on a blackboard, elegant, intricate, highly detailed, smooth, sharp focus, artstation, stunning masterpiece", - 12, 12, 12, - 256, - 35, - 12591765619 - ] - ], - inputs=[left_prompt, center_prompt, right_prompt, left_gs, center_gs, right_gs, overlap, steps, seed], - outputs=output, - fn=generate, - cache_examples=True - ) - - button.click( - generate, - inputs=[left_prompt, center_prompt, 
right_prompt, left_gs, center_gs, right_gs, overlap, steps, seed], - outputs=output - ) - - with gr.Row(): - gr.Markdown(article) - -demo.launch(server_name="0.0.0.0") diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/metadata/pkg_resources.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/metadata/pkg_resources.py deleted file mode 100644 index d39f0ba31da5342b7d067ba2e1e92f2d6e79afdb..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/metadata/pkg_resources.py +++ /dev/null @@ -1,256 +0,0 @@ -import email.message -import email.parser -import logging -import os -import pathlib -import zipfile -from typing import Collection, Iterable, Iterator, List, Mapping, NamedTuple, Optional - -from pip._vendor import pkg_resources -from pip._vendor.packaging.requirements import Requirement -from pip._vendor.packaging.utils import NormalizedName, canonicalize_name -from pip._vendor.packaging.version import parse as parse_version - -from pip._internal.exceptions import InvalidWheel, NoneMetadataError, UnsupportedWheel -from pip._internal.utils.misc import display_path -from pip._internal.utils.wheel import parse_wheel, read_wheel_metadata_file - -from .base import ( - BaseDistribution, - BaseEntryPoint, - BaseEnvironment, - DistributionVersion, - InfoPath, - Wheel, -) - -logger = logging.getLogger(__name__) - - -class EntryPoint(NamedTuple): - name: str - value: str - group: str - - -class WheelMetadata: - """IMetadataProvider that reads metadata files from a dictionary. - - This also maps metadata decoding exceptions to our internal exception type. - """ - - def __init__(self, metadata: Mapping[str, bytes], wheel_name: str) -> None: - self._metadata = metadata - self._wheel_name = wheel_name - - def has_metadata(self, name: str) -> bool: - return name in self._metadata - - def get_metadata(self, name: str) -> str: - try: - return self._metadata[name].decode() - except UnicodeDecodeError as e: - # Augment the default error with the origin of the file. - raise UnsupportedWheel( - f"Error decoding metadata for {self._wheel_name}: {e} in {name} file" - ) - - def get_metadata_lines(self, name: str) -> Iterable[str]: - return pkg_resources.yield_lines(self.get_metadata(name)) - - def metadata_isdir(self, name: str) -> bool: - return False - - def metadata_listdir(self, name: str) -> List[str]: - return [] - - def run_script(self, script_name: str, namespace: str) -> None: - pass - - -class Distribution(BaseDistribution): - def __init__(self, dist: pkg_resources.Distribution) -> None: - self._dist = dist - - @classmethod - def from_directory(cls, directory: str) -> "Distribution": - dist_dir = directory.rstrip(os.sep) - - # Build a PathMetadata object, from path to metadata. :wink: - base_dir, dist_dir_name = os.path.split(dist_dir) - metadata = pkg_resources.PathMetadata(base_dir, dist_dir) - - # Determine the correct Distribution object type. - if dist_dir.endswith(".egg-info"): - dist_cls = pkg_resources.Distribution - dist_name = os.path.splitext(dist_dir_name)[0] - else: - assert dist_dir.endswith(".dist-info") - dist_cls = pkg_resources.DistInfoDistribution - dist_name = os.path.splitext(dist_dir_name)[0].split("-")[0] - - dist = dist_cls(base_dir, project_name=dist_name, metadata=metadata) - return cls(dist) - - @classmethod - def from_wheel(cls, wheel: Wheel, name: str) -> "Distribution": - """Load the distribution from a given wheel. 
- - :raises InvalidWheel: Whenever loading of the wheel causes a - :py:exc:`zipfile.BadZipFile` exception to be thrown. - :raises UnsupportedWheel: If the wheel is a valid zip, but malformed - internally. - """ - try: - with wheel.as_zipfile() as zf: - info_dir, _ = parse_wheel(zf, name) - metadata_text = { - path.split("/", 1)[-1]: read_wheel_metadata_file(zf, path) - for path in zf.namelist() - if path.startswith(f"{info_dir}/") - } - except zipfile.BadZipFile as e: - raise InvalidWheel(wheel.location, name) from e - except UnsupportedWheel as e: - raise UnsupportedWheel(f"{name} has an invalid wheel, {e}") - dist = pkg_resources.DistInfoDistribution( - location=wheel.location, - metadata=WheelMetadata(metadata_text, wheel.location), - project_name=name, - ) - return cls(dist) - - @property - def location(self) -> Optional[str]: - return self._dist.location - - @property - def info_location(self) -> Optional[str]: - return self._dist.egg_info - - @property - def installed_by_distutils(self) -> bool: - # A distutils-installed distribution is provided by FileMetadata. This - # provider has a "path" attribute not present anywhere else. Not the - # best introspection logic, but pip has been doing this for a long time. - try: - return bool(self._dist._provider.path) - except AttributeError: - return False - - @property - def canonical_name(self) -> NormalizedName: - return canonicalize_name(self._dist.project_name) - - @property - def version(self) -> DistributionVersion: - return parse_version(self._dist.version) - - def is_file(self, path: InfoPath) -> bool: - return self._dist.has_metadata(str(path)) - - def iterdir(self, path: InfoPath) -> Iterator[pathlib.PurePosixPath]: - name = str(path) - if not self._dist.has_metadata(name): - raise FileNotFoundError(name) - if not self._dist.isdir(name): - raise NotADirectoryError(name) - for child in self._dist.metadata_listdir(name): - yield pathlib.PurePosixPath(path, child) - - def read_text(self, path: InfoPath) -> str: - name = str(path) - if not self._dist.has_metadata(name): - raise FileNotFoundError(name) - content = self._dist.get_metadata(name) - if content is None: - raise NoneMetadataError(self, name) - return content - - def iter_entry_points(self) -> Iterable[BaseEntryPoint]: - for group, entries in self._dist.get_entry_map().items(): - for name, entry_point in entries.items(): - name, _, value = str(entry_point).partition("=") - yield EntryPoint(name=name.strip(), value=value.strip(), group=group) - - @property - def metadata(self) -> email.message.Message: - """ - :raises NoneMetadataError: if the distribution reports `has_metadata()` - True but `get_metadata()` returns None. - """ - if isinstance(self._dist, pkg_resources.DistInfoDistribution): - metadata_name = "METADATA" - else: - metadata_name = "PKG-INFO" - try: - metadata = self.read_text(metadata_name) - except FileNotFoundError: - if self.location: - displaying_path = display_path(self.location) - else: - displaying_path = repr(self.location) - logger.warning("No metadata found in %s", displaying_path) - metadata = "" - feed_parser = email.parser.FeedParser() - feed_parser.feed(metadata) - return feed_parser.close() - - def iter_dependencies(self, extras: Collection[str] = ()) -> Iterable[Requirement]: - if extras: # pkg_resources raises on invalid extras, so we sanitize. 
- extras = frozenset(extras).intersection(self._dist.extras) - return self._dist.requires(extras) - - def iter_provided_extras(self) -> Iterable[str]: - return self._dist.extras - - -class Environment(BaseEnvironment): - def __init__(self, ws: pkg_resources.WorkingSet) -> None: - self._ws = ws - - @classmethod - def default(cls) -> BaseEnvironment: - return cls(pkg_resources.working_set) - - @classmethod - def from_paths(cls, paths: Optional[List[str]]) -> BaseEnvironment: - return cls(pkg_resources.WorkingSet(paths)) - - def _search_distribution(self, name: str) -> Optional[BaseDistribution]: - """Find a distribution matching the ``name`` in the environment. - - This searches from *all* distributions available in the environment, to - match the behavior of ``pkg_resources.get_distribution()``. - """ - canonical_name = canonicalize_name(name) - for dist in self.iter_distributions(): - if dist.canonical_name == canonical_name: - return dist - return None - - def get_distribution(self, name: str) -> Optional[BaseDistribution]: - # Search the distribution by looking through the working set. - dist = self._search_distribution(name) - if dist: - return dist - - # If distribution could not be found, call working_set.require to - # update the working set, and try to find the distribution again. - # This might happen for e.g. when you install a package twice, once - # using setup.py develop and again using setup.py install. Now when - # running pip uninstall twice, the package gets removed from the - # working set in the first uninstall, so we have to populate the - # working set again so that pip knows about it and the packages gets - # picked up and is successfully uninstalled the second time too. - try: - # We didn't pass in any version specifiers, so this can never - # raise pkg_resources.VersionConflict. 
- self._ws.require(name) - except pkg_resources.DistributionNotFound: - return None - return self._search_distribution(name) - - def _iter_distributions(self) -> Iterator[BaseDistribution]: - for dist in self._ws: - yield Distribution(dist) diff --git a/spaces/allknowingroger/Image-Models-Test165/README.md b/spaces/allknowingroger/Image-Models-Test165/README.md deleted file mode 100644 index f91e4b31ab345f987b425de029c057bfb69d9e1b..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test165/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: More Image Models -emoji: 😻 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: true -duplicated_from: allknowingroger/Image-Models-Test ---- - -<!--Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference--> \ No newline at end of file diff --git a/spaces/allknowingroger/huggingface/assets/index-d15ae4ce.js b/spaces/allknowingroger/huggingface/assets/index-d15ae4ce.js deleted file mode 100644 index 20ce5614269312627323032880fbe8d62916a8d1..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/huggingface/assets/index-d15ae4ce.js +++ /dev/null @@ -1,41 +0,0 @@ -var Dc=Object.defineProperty;var $c=(e,t,n)=>t in e?Dc(e,t,{enumerable:!0,configurable:!0,writable:!0,value:n}):e[t]=n;var yn=(e,t,n)=>($c(e,typeof t!="symbol"?t+"":t,n),n);(function(){const t=document.createElement("link").relList;if(t&&t.supports&&t.supports("modulepreload"))return;for(const l of document.querySelectorAll('link[rel="modulepreload"]'))r(l);new MutationObserver(l=>{for(const i of l)if(i.type==="childList")for(const o of i.addedNodes)o.tagName==="LINK"&&o.rel==="modulepreload"&&r(o)}).observe(document,{childList:!0,subtree:!0});function n(l){const i={};return l.integrity&&(i.integrity=l.integrity),l.referrerPolicy&&(i.referrerPolicy=l.referrerPolicy),l.crossOrigin==="use-credentials"?i.credentials="include":l.crossOrigin==="anonymous"?i.credentials="omit":i.credentials="same-origin",i}function r(l){if(l.ep)return;l.ep=!0;const i=n(l);fetch(l.href,i)}})();var bu={exports:{}},ul={},es={exports:{}},I={};/** - * @license React - * react.production.min.js - * - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. 
- */var tr=Symbol.for("react.element"),Uc=Symbol.for("react.portal"),Vc=Symbol.for("react.fragment"),Bc=Symbol.for("react.strict_mode"),Qc=Symbol.for("react.profiler"),Hc=Symbol.for("react.provider"),Wc=Symbol.for("react.context"),Kc=Symbol.for("react.forward_ref"),Yc=Symbol.for("react.suspense"),Xc=Symbol.for("react.memo"),Gc=Symbol.for("react.lazy"),Qo=Symbol.iterator;function Zc(e){return e===null||typeof e!="object"?null:(e=Qo&&e[Qo]||e["@@iterator"],typeof e=="function"?e:null)}var ts={isMounted:function(){return!1},enqueueForceUpdate:function(){},enqueueReplaceState:function(){},enqueueSetState:function(){}},ns=Object.assign,rs={};function cn(e,t,n){this.props=e,this.context=t,this.refs=rs,this.updater=n||ts}cn.prototype.isReactComponent={};cn.prototype.setState=function(e,t){if(typeof e!="object"&&typeof e!="function"&&e!=null)throw Error("setState(...): takes an object of state variables to update or a function which returns an object of state variables.");this.updater.enqueueSetState(this,e,t,"setState")};cn.prototype.forceUpdate=function(e){this.updater.enqueueForceUpdate(this,e,"forceUpdate")};function ls(){}ls.prototype=cn.prototype;function Wi(e,t,n){this.props=e,this.context=t,this.refs=rs,this.updater=n||ts}var Ki=Wi.prototype=new ls;Ki.constructor=Wi;ns(Ki,cn.prototype);Ki.isPureReactComponent=!0;var Ho=Array.isArray,is=Object.prototype.hasOwnProperty,Yi={current:null},os={key:!0,ref:!0,__self:!0,__source:!0};function us(e,t,n){var r,l={},i=null,o=null;if(t!=null)for(r in t.ref!==void 0&&(o=t.ref),t.key!==void 0&&(i=""+t.key),t)is.call(t,r)&&!os.hasOwnProperty(r)&&(l[r]=t[r]);var u=arguments.length-2;if(u===1)l.children=n;else if(1<u){for(var s=Array(u),c=0;c<u;c++)s[c]=arguments[c+2];l.children=s}if(e&&e.defaultProps)for(r in u=e.defaultProps,u)l[r]===void 0&&(l[r]=u[r]);return{$$typeof:tr,type:e,key:i,ref:o,props:l,_owner:Yi.current}}function qc(e,t){return{$$typeof:tr,type:e.type,key:t,ref:e.ref,props:e.props,_owner:e._owner}}function Xi(e){return typeof e=="object"&&e!==null&&e.$$typeof===tr}function Jc(e){var t={"=":"=0",":":"=2"};return"$"+e.replace(/[=:]/g,function(n){return t[n]})}var Wo=/\/+/g;function _l(e,t){return typeof e=="object"&&e!==null&&e.key!=null?Jc(""+e.key):t.toString(36)}function jr(e,t,n,r,l){var i=typeof e;(i==="undefined"||i==="boolean")&&(e=null);var o=!1;if(e===null)o=!0;else switch(i){case"string":case"number":o=!0;break;case"object":switch(e.$$typeof){case tr:case Uc:o=!0}}if(o)return o=e,l=l(o),e=r===""?"."+_l(o,0):r,Ho(l)?(n="",e!=null&&(n=e.replace(Wo,"$&/")+"/"),jr(l,t,n,"",function(c){return c})):l!=null&&(Xi(l)&&(l=qc(l,n+(!l.key||o&&o.key===l.key?"":(""+l.key).replace(Wo,"$&/")+"/")+e)),t.push(l)),1;if(o=0,r=r===""?".":r+":",Ho(e))for(var u=0;u<e.length;u++){i=e[u];var s=r+_l(i,u);o+=jr(i,t,n,s,l)}else if(s=Zc(e),typeof s=="function")for(e=s.call(e),u=0;!(i=e.next()).done;)i=i.value,s=r+_l(i,u++),o+=jr(i,t,n,s,l);else if(i==="object")throw t=String(e),Error("Objects are not valid as a React child (found: "+(t==="[object Object]"?"object with keys {"+Object.keys(e).join(", ")+"}":t)+"). 
If you meant to render a collection of children, use an array instead.");return o}function sr(e,t,n){if(e==null)return e;var r=[],l=0;return jr(e,r,"","",function(i){return t.call(n,i,l++)}),r}function bc(e){if(e._status===-1){var t=e._result;t=t(),t.then(function(n){(e._status===0||e._status===-1)&&(e._status=1,e._result=n)},function(n){(e._status===0||e._status===-1)&&(e._status=2,e._result=n)}),e._status===-1&&(e._status=0,e._result=t)}if(e._status===1)return e._result.default;throw e._result}var pe={current:null},_r={transition:null},ef={ReactCurrentDispatcher:pe,ReactCurrentBatchConfig:_r,ReactCurrentOwner:Yi};I.Children={map:sr,forEach:function(e,t,n){sr(e,function(){t.apply(this,arguments)},n)},count:function(e){var t=0;return sr(e,function(){t++}),t},toArray:function(e){return sr(e,function(t){return t})||[]},only:function(e){if(!Xi(e))throw Error("React.Children.only expected to receive a single React element child.");return e}};I.Component=cn;I.Fragment=Vc;I.Profiler=Qc;I.PureComponent=Wi;I.StrictMode=Bc;I.Suspense=Yc;I.__SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED=ef;I.cloneElement=function(e,t,n){if(e==null)throw Error("React.cloneElement(...): The argument must be a React element, but you passed "+e+".");var r=ns({},e.props),l=e.key,i=e.ref,o=e._owner;if(t!=null){if(t.ref!==void 0&&(i=t.ref,o=Yi.current),t.key!==void 0&&(l=""+t.key),e.type&&e.type.defaultProps)var u=e.type.defaultProps;for(s in t)is.call(t,s)&&!os.hasOwnProperty(s)&&(r[s]=t[s]===void 0&&u!==void 0?u[s]:t[s])}var s=arguments.length-2;if(s===1)r.children=n;else if(1<s){u=Array(s);for(var c=0;c<s;c++)u[c]=arguments[c+2];r.children=u}return{$$typeof:tr,type:e.type,key:l,ref:i,props:r,_owner:o}};I.createContext=function(e){return e={$$typeof:Wc,_currentValue:e,_currentValue2:e,_threadCount:0,Provider:null,Consumer:null,_defaultValue:null,_globalName:null},e.Provider={$$typeof:Hc,_context:e},e.Consumer=e};I.createElement=us;I.createFactory=function(e){var t=us.bind(null,e);return t.type=e,t};I.createRef=function(){return{current:null}};I.forwardRef=function(e){return{$$typeof:Kc,render:e}};I.isValidElement=Xi;I.lazy=function(e){return{$$typeof:Gc,_payload:{_status:-1,_result:e},_init:bc}};I.memo=function(e,t){return{$$typeof:Xc,type:e,compare:t===void 0?null:t}};I.startTransition=function(e){var t=_r.transition;_r.transition={};try{e()}finally{_r.transition=t}};I.unstable_act=function(){throw Error("act(...) 
is not supported in production builds of React.")};I.useCallback=function(e,t){return pe.current.useCallback(e,t)};I.useContext=function(e){return pe.current.useContext(e)};I.useDebugValue=function(){};I.useDeferredValue=function(e){return pe.current.useDeferredValue(e)};I.useEffect=function(e,t){return pe.current.useEffect(e,t)};I.useId=function(){return pe.current.useId()};I.useImperativeHandle=function(e,t,n){return pe.current.useImperativeHandle(e,t,n)};I.useInsertionEffect=function(e,t){return pe.current.useInsertionEffect(e,t)};I.useLayoutEffect=function(e,t){return pe.current.useLayoutEffect(e,t)};I.useMemo=function(e,t){return pe.current.useMemo(e,t)};I.useReducer=function(e,t,n){return pe.current.useReducer(e,t,n)};I.useRef=function(e){return pe.current.useRef(e)};I.useState=function(e){return pe.current.useState(e)};I.useSyncExternalStore=function(e,t,n){return pe.current.useSyncExternalStore(e,t,n)};I.useTransition=function(){return pe.current.useTransition()};I.version="18.2.0";es.exports=I;var m=es.exports;/** - * @license React - * react-jsx-runtime.production.min.js - * - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */var tf=m,nf=Symbol.for("react.element"),rf=Symbol.for("react.fragment"),lf=Object.prototype.hasOwnProperty,of=tf.__SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED.ReactCurrentOwner,uf={key:!0,ref:!0,__self:!0,__source:!0};function ss(e,t,n){var r,l={},i=null,o=null;n!==void 0&&(i=""+n),t.key!==void 0&&(i=""+t.key),t.ref!==void 0&&(o=t.ref);for(r in t)lf.call(t,r)&&!uf.hasOwnProperty(r)&&(l[r]=t[r]);if(e&&e.defaultProps)for(r in t=e.defaultProps,t)l[r]===void 0&&(l[r]=t[r]);return{$$typeof:nf,type:e,key:i,ref:o,props:l,_owner:of.current}}ul.Fragment=rf;ul.jsx=ss;ul.jsxs=ss;bu.exports=ul;var a=bu.exports,as={exports:{}},Ce={},cs={exports:{}},fs={};/** - * @license React - * scheduler.production.min.js - * - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. 
- */(function(e){function t(j,P){var L=j.length;j.push(P);e:for(;0<L;){var X=L-1>>>1,te=j[X];if(0<l(te,P))j[X]=P,j[L]=te,L=X;else break e}}function n(j){return j.length===0?null:j[0]}function r(j){if(j.length===0)return null;var P=j[0],L=j.pop();if(L!==P){j[0]=L;e:for(var X=0,te=j.length,or=te>>>1;X<or;){var xt=2*(X+1)-1,jl=j[xt],kt=xt+1,ur=j[kt];if(0>l(jl,L))kt<te&&0>l(ur,jl)?(j[X]=ur,j[kt]=L,X=kt):(j[X]=jl,j[xt]=L,X=xt);else if(kt<te&&0>l(ur,L))j[X]=ur,j[kt]=L,X=kt;else break e}}return P}function l(j,P){var L=j.sortIndex-P.sortIndex;return L!==0?L:j.id-P.id}if(typeof performance=="object"&&typeof performance.now=="function"){var i=performance;e.unstable_now=function(){return i.now()}}else{var o=Date,u=o.now();e.unstable_now=function(){return o.now()-u}}var s=[],c=[],h=1,f=null,v=3,g=!1,w=!1,k=!1,M=typeof setTimeout=="function"?setTimeout:null,p=typeof clearTimeout=="function"?clearTimeout:null,d=typeof setImmediate<"u"?setImmediate:null;typeof navigator<"u"&&navigator.scheduling!==void 0&&navigator.scheduling.isInputPending!==void 0&&navigator.scheduling.isInputPending.bind(navigator.scheduling);function y(j){for(var P=n(c);P!==null;){if(P.callback===null)r(c);else if(P.startTime<=j)r(c),P.sortIndex=P.expirationTime,t(s,P);else break;P=n(c)}}function S(j){if(k=!1,y(j),!w)if(n(s)!==null)w=!0,El(C);else{var P=n(c);P!==null&&Cl(S,P.startTime-j)}}function C(j,P){w=!1,k&&(k=!1,p(T),T=-1),g=!0;var L=v;try{for(y(P),f=n(s);f!==null&&(!(f.expirationTime>P)||j&&!Le());){var X=f.callback;if(typeof X=="function"){f.callback=null,v=f.priorityLevel;var te=X(f.expirationTime<=P);P=e.unstable_now(),typeof te=="function"?f.callback=te:f===n(s)&&r(s),y(P)}else r(s);f=n(s)}if(f!==null)var or=!0;else{var xt=n(c);xt!==null&&Cl(S,xt.startTime-P),or=!1}return or}finally{f=null,v=L,g=!1}}var _=!1,N=null,T=-1,Y=5,F=-1;function Le(){return!(e.unstable_now()-F<Y)}function mn(){if(N!==null){var j=e.unstable_now();F=j;var P=!0;try{P=N(!0,j)}finally{P?hn():(_=!1,N=null)}}else _=!1}var hn;if(typeof d=="function")hn=function(){d(mn)};else if(typeof MessageChannel<"u"){var Bo=new MessageChannel,Mc=Bo.port2;Bo.port1.onmessage=mn,hn=function(){Mc.postMessage(null)}}else hn=function(){M(mn,0)};function El(j){N=j,_||(_=!0,hn())}function Cl(j,P){T=M(function(){j(e.unstable_now())},P)}e.unstable_IdlePriority=5,e.unstable_ImmediatePriority=1,e.unstable_LowPriority=4,e.unstable_NormalPriority=3,e.unstable_Profiling=null,e.unstable_UserBlockingPriority=2,e.unstable_cancelCallback=function(j){j.callback=null},e.unstable_continueExecution=function(){w||g||(w=!0,El(C))},e.unstable_forceFrameRate=function(j){0>j||125<j?console.error("forceFrameRate takes a positive int between 0 and 125, forcing frame rates higher than 125 fps is not supported"):Y=0<j?Math.floor(1e3/j):5},e.unstable_getCurrentPriorityLevel=function(){return v},e.unstable_getFirstCallbackNode=function(){return n(s)},e.unstable_next=function(j){switch(v){case 1:case 2:case 3:var P=3;break;default:P=v}var L=v;v=P;try{return j()}finally{v=L}},e.unstable_pauseExecution=function(){},e.unstable_requestPaint=function(){},e.unstable_runWithPriority=function(j,P){switch(j){case 1:case 2:case 3:case 4:case 5:break;default:j=3}var L=v;v=j;try{return P()}finally{v=L}},e.unstable_scheduleCallback=function(j,P,L){var X=e.unstable_now();switch(typeof L=="object"&&L!==null?(L=L.delay,L=typeof L=="number"&&0<L?X+L:X):L=X,j){case 1:var te=-1;break;case 2:te=250;break;case 5:te=1073741823;break;case 4:te=1e4;break;default:te=5e3}return 
te=L+te,j={id:h++,callback:P,priorityLevel:j,startTime:L,expirationTime:te,sortIndex:-1},L>X?(j.sortIndex=L,t(c,j),n(s)===null&&j===n(c)&&(k?(p(T),T=-1):k=!0,Cl(S,L-X))):(j.sortIndex=te,t(s,j),w||g||(w=!0,El(C))),j},e.unstable_shouldYield=Le,e.unstable_wrapCallback=function(j){var P=v;return function(){var L=v;v=P;try{return j.apply(this,arguments)}finally{v=L}}}})(fs);cs.exports=fs;var sf=cs.exports;/** - * @license React - * react-dom.production.min.js - * - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */var ds=m,Ee=sf;function x(e){for(var t="https://reactjs.org/docs/error-decoder.html?invariant="+e,n=1;n<arguments.length;n++)t+="&args[]="+encodeURIComponent(arguments[n]);return"Minified React error #"+e+"; visit "+t+" for the full message or use the non-minified dev environment for full errors and additional helpful warnings."}var ps=new Set,Dn={};function Rt(e,t){nn(e,t),nn(e+"Capture",t)}function nn(e,t){for(Dn[e]=t,e=0;e<t.length;e++)ps.add(t[e])}var Ze=!(typeof window>"u"||typeof window.document>"u"||typeof window.document.createElement>"u"),bl=Object.prototype.hasOwnProperty,af=/^[:A-Z_a-z\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u02FF\u0370-\u037D\u037F-\u1FFF\u200C-\u200D\u2070-\u218F\u2C00-\u2FEF\u3001-\uD7FF\uF900-\uFDCF\uFDF0-\uFFFD][:A-Z_a-z\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u02FF\u0370-\u037D\u037F-\u1FFF\u200C-\u200D\u2070-\u218F\u2C00-\u2FEF\u3001-\uD7FF\uF900-\uFDCF\uFDF0-\uFFFD\-.0-9\u00B7\u0300-\u036F\u203F-\u2040]*$/,Ko={},Yo={};function cf(e){return bl.call(Yo,e)?!0:bl.call(Ko,e)?!1:af.test(e)?Yo[e]=!0:(Ko[e]=!0,!1)}function ff(e,t,n,r){if(n!==null&&n.type===0)return!1;switch(typeof t){case"function":case"symbol":return!0;case"boolean":return r?!1:n!==null?!n.acceptsBooleans:(e=e.toLowerCase().slice(0,5),e!=="data-"&&e!=="aria-");default:return!1}}function df(e,t,n,r){if(t===null||typeof t>"u"||ff(e,t,n,r))return!0;if(r)return!1;if(n!==null)switch(n.type){case 3:return!t;case 4:return t===!1;case 5:return isNaN(t);case 6:return isNaN(t)||1>t}return!1}function me(e,t,n,r,l,i,o){this.acceptsBooleans=t===2||t===3||t===4,this.attributeName=r,this.attributeNamespace=l,this.mustUseProperty=n,this.propertyName=e,this.type=t,this.sanitizeURL=i,this.removeEmptyString=o}var oe={};"children dangerouslySetInnerHTML defaultValue defaultChecked innerHTML suppressContentEditableWarning suppressHydrationWarning style".split(" ").forEach(function(e){oe[e]=new me(e,0,!1,e,null,!1,!1)});[["acceptCharset","accept-charset"],["className","class"],["htmlFor","for"],["httpEquiv","http-equiv"]].forEach(function(e){var t=e[0];oe[t]=new me(t,1,!1,e[1],null,!1,!1)});["contentEditable","draggable","spellCheck","value"].forEach(function(e){oe[e]=new me(e,2,!1,e.toLowerCase(),null,!1,!1)});["autoReverse","externalResourcesRequired","focusable","preserveAlpha"].forEach(function(e){oe[e]=new me(e,2,!1,e,null,!1,!1)});"allowFullScreen async autoFocus autoPlay controls default defer disabled disablePictureInPicture disableRemotePlayback formNoValidate hidden loop noModule noValidate open playsInline readOnly required reversed scoped seamless itemScope".split(" ").forEach(function(e){oe[e]=new me(e,3,!1,e.toLowerCase(),null,!1,!1)});["checked","multiple","muted","selected"].forEach(function(e){oe[e]=new me(e,3,!0,e,null,!1,!1)});["capture","download"].forEach(function(e){oe[e]=new me(e,4,!1,e,null,!1,!1)});["cols","rows","size","span"].forEach(function(e){oe[e]=new 
me(e,6,!1,e,null,!1,!1)});["rowSpan","start"].forEach(function(e){oe[e]=new me(e,5,!1,e.toLowerCase(),null,!1,!1)});var Gi=/[\-:]([a-z])/g;function Zi(e){return e[1].toUpperCase()}"accent-height alignment-baseline arabic-form baseline-shift cap-height clip-path clip-rule color-interpolation color-interpolation-filters color-profile color-rendering dominant-baseline enable-background fill-opacity fill-rule flood-color flood-opacity font-family font-size font-size-adjust font-stretch font-style font-variant font-weight glyph-name glyph-orientation-horizontal glyph-orientation-vertical horiz-adv-x horiz-origin-x image-rendering letter-spacing lighting-color marker-end marker-mid marker-start overline-position overline-thickness paint-order panose-1 pointer-events rendering-intent shape-rendering stop-color stop-opacity strikethrough-position strikethrough-thickness stroke-dasharray stroke-dashoffset stroke-linecap stroke-linejoin stroke-miterlimit stroke-opacity stroke-width text-anchor text-decoration text-rendering underline-position underline-thickness unicode-bidi unicode-range units-per-em v-alphabetic v-hanging v-ideographic v-mathematical vector-effect vert-adv-y vert-origin-x vert-origin-y word-spacing writing-mode xmlns:xlink x-height".split(" ").forEach(function(e){var t=e.replace(Gi,Zi);oe[t]=new me(t,1,!1,e,null,!1,!1)});"xlink:actuate xlink:arcrole xlink:role xlink:show xlink:title xlink:type".split(" ").forEach(function(e){var t=e.replace(Gi,Zi);oe[t]=new me(t,1,!1,e,"http://www.w3.org/1999/xlink",!1,!1)});["xml:base","xml:lang","xml:space"].forEach(function(e){var t=e.replace(Gi,Zi);oe[t]=new me(t,1,!1,e,"http://www.w3.org/XML/1998/namespace",!1,!1)});["tabIndex","crossOrigin"].forEach(function(e){oe[e]=new me(e,1,!1,e.toLowerCase(),null,!1,!1)});oe.xlinkHref=new me("xlinkHref",1,!1,"xlink:href","http://www.w3.org/1999/xlink",!0,!1);["src","href","action","formAction"].forEach(function(e){oe[e]=new me(e,1,!1,e.toLowerCase(),null,!0,!0)});function qi(e,t,n,r){var l=oe.hasOwnProperty(t)?oe[t]:null;(l!==null?l.type!==0:r||!(2<t.length)||t[0]!=="o"&&t[0]!=="O"||t[1]!=="n"&&t[1]!=="N")&&(df(t,n,l,r)&&(n=null),r||l===null?cf(t)&&(n===null?e.removeAttribute(t):e.setAttribute(t,""+n)):l.mustUseProperty?e[l.propertyName]=n===null?l.type===3?!1:"":n:(t=l.attributeName,r=l.attributeNamespace,n===null?e.removeAttribute(t):(l=l.type,n=l===3||l===4&&n===!0?"":""+n,r?e.setAttributeNS(r,t,n):e.setAttribute(t,n))))}var et=ds.__SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED,ar=Symbol.for("react.element"),Dt=Symbol.for("react.portal"),$t=Symbol.for("react.fragment"),Ji=Symbol.for("react.strict_mode"),ei=Symbol.for("react.profiler"),ms=Symbol.for("react.provider"),hs=Symbol.for("react.context"),bi=Symbol.for("react.forward_ref"),ti=Symbol.for("react.suspense"),ni=Symbol.for("react.suspense_list"),eo=Symbol.for("react.memo"),nt=Symbol.for("react.lazy"),ys=Symbol.for("react.offscreen"),Xo=Symbol.iterator;function vn(e){return e===null||typeof e!="object"?null:(e=Xo&&e[Xo]||e["@@iterator"],typeof e=="function"?e:null)}var H=Object.assign,Nl;function jn(e){if(Nl===void 0)try{throw Error()}catch(n){var t=n.stack.trim().match(/\n( *(at )?)/);Nl=t&&t[1]||""}return` -`+Nl+e}var Tl=!1;function Ol(e,t){if(!e||Tl)return"";Tl=!0;var n=Error.prepareStackTrace;Error.prepareStackTrace=void 0;try{if(t)if(t=function(){throw Error()},Object.defineProperty(t.prototype,"props",{set:function(){throw Error()}}),typeof Reflect=="object"&&Reflect.construct){try{Reflect.construct(t,[])}catch(c){var 
r=c}Reflect.construct(e,[],t)}else{try{t.call()}catch(c){r=c}e.call(t.prototype)}else{try{throw Error()}catch(c){r=c}e()}}catch(c){if(c&&r&&typeof c.stack=="string"){for(var l=c.stack.split(` -`),i=r.stack.split(` -`),o=l.length-1,u=i.length-1;1<=o&&0<=u&&l[o]!==i[u];)u--;for(;1<=o&&0<=u;o--,u--)if(l[o]!==i[u]){if(o!==1||u!==1)do if(o--,u--,0>u||l[o]!==i[u]){var s=` -`+l[o].replace(" at new "," at ");return e.displayName&&s.includes("<anonymous>")&&(s=s.replace("<anonymous>",e.displayName)),s}while(1<=o&&0<=u);break}}}finally{Tl=!1,Error.prepareStackTrace=n}return(e=e?e.displayName||e.name:"")?jn(e):""}function pf(e){switch(e.tag){case 5:return jn(e.type);case 16:return jn("Lazy");case 13:return jn("Suspense");case 19:return jn("SuspenseList");case 0:case 2:case 15:return e=Ol(e.type,!1),e;case 11:return e=Ol(e.type.render,!1),e;case 1:return e=Ol(e.type,!0),e;default:return""}}function ri(e){if(e==null)return null;if(typeof e=="function")return e.displayName||e.name||null;if(typeof e=="string")return e;switch(e){case $t:return"Fragment";case Dt:return"Portal";case ei:return"Profiler";case Ji:return"StrictMode";case ti:return"Suspense";case ni:return"SuspenseList"}if(typeof e=="object")switch(e.$$typeof){case hs:return(e.displayName||"Context")+".Consumer";case ms:return(e._context.displayName||"Context")+".Provider";case bi:var t=e.render;return e=e.displayName,e||(e=t.displayName||t.name||"",e=e!==""?"ForwardRef("+e+")":"ForwardRef"),e;case eo:return t=e.displayName||null,t!==null?t:ri(e.type)||"Memo";case nt:t=e._payload,e=e._init;try{return ri(e(t))}catch{}}return null}function mf(e){var t=e.type;switch(e.tag){case 24:return"Cache";case 9:return(t.displayName||"Context")+".Consumer";case 10:return(t._context.displayName||"Context")+".Provider";case 18:return"DehydratedFragment";case 11:return e=t.render,e=e.displayName||e.name||"",t.displayName||(e!==""?"ForwardRef("+e+")":"ForwardRef");case 7:return"Fragment";case 5:return t;case 4:return"Portal";case 3:return"Root";case 6:return"Text";case 16:return ri(t);case 8:return t===Ji?"StrictMode":"Mode";case 22:return"Offscreen";case 12:return"Profiler";case 21:return"Scope";case 13:return"Suspense";case 19:return"SuspenseList";case 25:return"TracingMarker";case 1:case 0:case 17:case 2:case 14:case 15:if(typeof t=="function")return t.displayName||t.name||null;if(typeof t=="string")return t}return null}function yt(e){switch(typeof e){case"boolean":case"number":case"string":case"undefined":return e;case"object":return e;default:return""}}function vs(e){var t=e.type;return(e=e.nodeName)&&e.toLowerCase()==="input"&&(t==="checkbox"||t==="radio")}function hf(e){var t=vs(e)?"checked":"value",n=Object.getOwnPropertyDescriptor(e.constructor.prototype,t),r=""+e[t];if(!e.hasOwnProperty(t)&&typeof n<"u"&&typeof n.get=="function"&&typeof n.set=="function"){var l=n.get,i=n.set;return Object.defineProperty(e,t,{configurable:!0,get:function(){return l.call(this)},set:function(o){r=""+o,i.call(this,o)}}),Object.defineProperty(e,t,{enumerable:n.enumerable}),{getValue:function(){return r},setValue:function(o){r=""+o},stopTracking:function(){e._valueTracker=null,delete e[t]}}}}function cr(e){e._valueTracker||(e._valueTracker=hf(e))}function gs(e){if(!e)return!1;var t=e._valueTracker;if(!t)return!0;var n=t.getValue(),r="";return e&&(r=vs(e)?e.checked?"true":"false":e.value),e=r,e!==n?(t.setValue(e),!0):!1}function Mr(e){if(e=e||(typeof document<"u"?document:void 0),typeof e>"u")return null;try{return e.activeElement||e.body}catch{return e.body}}function 
li(e,t){var n=t.checked;return H({},t,{defaultChecked:void 0,defaultValue:void 0,value:void 0,checked:n??e._wrapperState.initialChecked})}function Go(e,t){var n=t.defaultValue==null?"":t.defaultValue,r=t.checked!=null?t.checked:t.defaultChecked;n=yt(t.value!=null?t.value:n),e._wrapperState={initialChecked:r,initialValue:n,controlled:t.type==="checkbox"||t.type==="radio"?t.checked!=null:t.value!=null}}function ws(e,t){t=t.checked,t!=null&&qi(e,"checked",t,!1)}function ii(e,t){ws(e,t);var n=yt(t.value),r=t.type;if(n!=null)r==="number"?(n===0&&e.value===""||e.value!=n)&&(e.value=""+n):e.value!==""+n&&(e.value=""+n);else if(r==="submit"||r==="reset"){e.removeAttribute("value");return}t.hasOwnProperty("value")?oi(e,t.type,n):t.hasOwnProperty("defaultValue")&&oi(e,t.type,yt(t.defaultValue)),t.checked==null&&t.defaultChecked!=null&&(e.defaultChecked=!!t.defaultChecked)}function Zo(e,t,n){if(t.hasOwnProperty("value")||t.hasOwnProperty("defaultValue")){var r=t.type;if(!(r!=="submit"&&r!=="reset"||t.value!==void 0&&t.value!==null))return;t=""+e._wrapperState.initialValue,n||t===e.value||(e.value=t),e.defaultValue=t}n=e.name,n!==""&&(e.name=""),e.defaultChecked=!!e._wrapperState.initialChecked,n!==""&&(e.name=n)}function oi(e,t,n){(t!=="number"||Mr(e.ownerDocument)!==e)&&(n==null?e.defaultValue=""+e._wrapperState.initialValue:e.defaultValue!==""+n&&(e.defaultValue=""+n))}var _n=Array.isArray;function Zt(e,t,n,r){if(e=e.options,t){t={};for(var l=0;l<n.length;l++)t["$"+n[l]]=!0;for(n=0;n<e.length;n++)l=t.hasOwnProperty("$"+e[n].value),e[n].selected!==l&&(e[n].selected=l),l&&r&&(e[n].defaultSelected=!0)}else{for(n=""+yt(n),t=null,l=0;l<e.length;l++){if(e[l].value===n){e[l].selected=!0,r&&(e[l].defaultSelected=!0);return}t!==null||e[l].disabled||(t=e[l])}t!==null&&(t.selected=!0)}}function ui(e,t){if(t.dangerouslySetInnerHTML!=null)throw Error(x(91));return H({},t,{value:void 0,defaultValue:void 0,children:""+e._wrapperState.initialValue})}function qo(e,t){var n=t.value;if(n==null){if(n=t.children,t=t.defaultValue,n!=null){if(t!=null)throw Error(x(92));if(_n(n)){if(1<n.length)throw Error(x(93));n=n[0]}t=n}t==null&&(t=""),n=t}e._wrapperState={initialValue:yt(n)}}function Ss(e,t){var n=yt(t.value),r=yt(t.defaultValue);n!=null&&(n=""+n,n!==e.value&&(e.value=n),t.defaultValue==null&&e.defaultValue!==n&&(e.defaultValue=n)),r!=null&&(e.defaultValue=""+r)}function Jo(e){var t=e.textContent;t===e._wrapperState.initialValue&&t!==""&&t!==null&&(e.value=t)}function xs(e){switch(e){case"svg":return"http://www.w3.org/2000/svg";case"math":return"http://www.w3.org/1998/Math/MathML";default:return"http://www.w3.org/1999/xhtml"}}function si(e,t){return e==null||e==="http://www.w3.org/1999/xhtml"?xs(t):e==="http://www.w3.org/2000/svg"&&t==="foreignObject"?"http://www.w3.org/1999/xhtml":e}var fr,ks=function(e){return typeof MSApp<"u"&&MSApp.execUnsafeLocalFunction?function(t,n,r,l){MSApp.execUnsafeLocalFunction(function(){return e(t,n,r,l)})}:e}(function(e,t){if(e.namespaceURI!=="http://www.w3.org/2000/svg"||"innerHTML"in e)e.innerHTML=t;else{for(fr=fr||document.createElement("div"),fr.innerHTML="<svg>"+t.valueOf().toString()+"</svg>",t=fr.firstChild;e.firstChild;)e.removeChild(e.firstChild);for(;t.firstChild;)e.appendChild(t.firstChild)}});function $n(e,t){if(t){var n=e.firstChild;if(n&&n===e.lastChild&&n.nodeType===3){n.nodeValue=t;return}}e.textContent=t}var 
On={animationIterationCount:!0,aspectRatio:!0,borderImageOutset:!0,borderImageSlice:!0,borderImageWidth:!0,boxFlex:!0,boxFlexGroup:!0,boxOrdinalGroup:!0,columnCount:!0,columns:!0,flex:!0,flexGrow:!0,flexPositive:!0,flexShrink:!0,flexNegative:!0,flexOrder:!0,gridArea:!0,gridRow:!0,gridRowEnd:!0,gridRowSpan:!0,gridRowStart:!0,gridColumn:!0,gridColumnEnd:!0,gridColumnSpan:!0,gridColumnStart:!0,fontWeight:!0,lineClamp:!0,lineHeight:!0,opacity:!0,order:!0,orphans:!0,tabSize:!0,widows:!0,zIndex:!0,zoom:!0,fillOpacity:!0,floodOpacity:!0,stopOpacity:!0,strokeDasharray:!0,strokeDashoffset:!0,strokeMiterlimit:!0,strokeOpacity:!0,strokeWidth:!0},yf=["Webkit","ms","Moz","O"];Object.keys(On).forEach(function(e){yf.forEach(function(t){t=t+e.charAt(0).toUpperCase()+e.substring(1),On[t]=On[e]})});function Es(e,t,n){return t==null||typeof t=="boolean"||t===""?"":n||typeof t!="number"||t===0||On.hasOwnProperty(e)&&On[e]?(""+t).trim():t+"px"}function Cs(e,t){e=e.style;for(var n in t)if(t.hasOwnProperty(n)){var r=n.indexOf("--")===0,l=Es(n,t[n],r);n==="float"&&(n="cssFloat"),r?e.setProperty(n,l):e[n]=l}}var vf=H({menuitem:!0},{area:!0,base:!0,br:!0,col:!0,embed:!0,hr:!0,img:!0,input:!0,keygen:!0,link:!0,meta:!0,param:!0,source:!0,track:!0,wbr:!0});function ai(e,t){if(t){if(vf[e]&&(t.children!=null||t.dangerouslySetInnerHTML!=null))throw Error(x(137,e));if(t.dangerouslySetInnerHTML!=null){if(t.children!=null)throw Error(x(60));if(typeof t.dangerouslySetInnerHTML!="object"||!("__html"in t.dangerouslySetInnerHTML))throw Error(x(61))}if(t.style!=null&&typeof t.style!="object")throw Error(x(62))}}function ci(e,t){if(e.indexOf("-")===-1)return typeof t.is=="string";switch(e){case"annotation-xml":case"color-profile":case"font-face":case"font-face-src":case"font-face-uri":case"font-face-format":case"font-face-name":case"missing-glyph":return!1;default:return!0}}var fi=null;function to(e){return e=e.target||e.srcElement||window,e.correspondingUseElement&&(e=e.correspondingUseElement),e.nodeType===3?e.parentNode:e}var di=null,qt=null,Jt=null;function bo(e){if(e=lr(e)){if(typeof di!="function")throw Error(x(280));var t=e.stateNode;t&&(t=dl(t),di(e.stateNode,e.type,t))}}function js(e){qt?Jt?Jt.push(e):Jt=[e]:qt=e}function _s(){if(qt){var e=qt,t=Jt;if(Jt=qt=null,bo(e),t)for(e=0;e<t.length;e++)bo(t[e])}}function Ns(e,t){return e(t)}function Ts(){}var Pl=!1;function Os(e,t,n){if(Pl)return e(t,n);Pl=!0;try{return Ns(e,t,n)}finally{Pl=!1,(qt!==null||Jt!==null)&&(Ts(),_s())}}function Un(e,t){var n=e.stateNode;if(n===null)return null;var r=dl(n);if(r===null)return null;n=r[t];e:switch(t){case"onClick":case"onClickCapture":case"onDoubleClick":case"onDoubleClickCapture":case"onMouseDown":case"onMouseDownCapture":case"onMouseMove":case"onMouseMoveCapture":case"onMouseUp":case"onMouseUpCapture":case"onMouseEnter":(r=!r.disabled)||(e=e.type,r=!(e==="button"||e==="input"||e==="select"||e==="textarea")),e=!r;break e;default:e=!1}if(e)return null;if(n&&typeof n!="function")throw Error(x(231,t,typeof n));return n}var pi=!1;if(Ze)try{var gn={};Object.defineProperty(gn,"passive",{get:function(){pi=!0}}),window.addEventListener("test",gn,gn),window.removeEventListener("test",gn,gn)}catch{pi=!1}function gf(e,t,n,r,l,i,o,u,s){var c=Array.prototype.slice.call(arguments,3);try{t.apply(n,c)}catch(h){this.onError(h)}}var Pn=!1,Dr=null,$r=!1,mi=null,wf={onError:function(e){Pn=!0,Dr=e}};function Sf(e,t,n,r,l,i,o,u,s){Pn=!1,Dr=null,gf.apply(wf,arguments)}function xf(e,t,n,r,l,i,o,u,s){if(Sf.apply(this,arguments),Pn){if(Pn){var 
c=Dr;Pn=!1,Dr=null}else throw Error(x(198));$r||($r=!0,mi=c)}}function At(e){var t=e,n=e;if(e.alternate)for(;t.return;)t=t.return;else{e=t;do t=e,t.flags&4098&&(n=t.return),e=t.return;while(e)}return t.tag===3?n:null}function Ps(e){if(e.tag===13){var t=e.memoizedState;if(t===null&&(e=e.alternate,e!==null&&(t=e.memoizedState)),t!==null)return t.dehydrated}return null}function eu(e){if(At(e)!==e)throw Error(x(188))}function kf(e){var t=e.alternate;if(!t){if(t=At(e),t===null)throw Error(x(188));return t!==e?null:e}for(var n=e,r=t;;){var l=n.return;if(l===null)break;var i=l.alternate;if(i===null){if(r=l.return,r!==null){n=r;continue}break}if(l.child===i.child){for(i=l.child;i;){if(i===n)return eu(l),e;if(i===r)return eu(l),t;i=i.sibling}throw Error(x(188))}if(n.return!==r.return)n=l,r=i;else{for(var o=!1,u=l.child;u;){if(u===n){o=!0,n=l,r=i;break}if(u===r){o=!0,r=l,n=i;break}u=u.sibling}if(!o){for(u=i.child;u;){if(u===n){o=!0,n=i,r=l;break}if(u===r){o=!0,r=i,n=l;break}u=u.sibling}if(!o)throw Error(x(189))}}if(n.alternate!==r)throw Error(x(190))}if(n.tag!==3)throw Error(x(188));return n.stateNode.current===n?e:t}function zs(e){return e=kf(e),e!==null?Ls(e):null}function Ls(e){if(e.tag===5||e.tag===6)return e;for(e=e.child;e!==null;){var t=Ls(e);if(t!==null)return t;e=e.sibling}return null}var Is=Ee.unstable_scheduleCallback,tu=Ee.unstable_cancelCallback,Ef=Ee.unstable_shouldYield,Cf=Ee.unstable_requestPaint,G=Ee.unstable_now,jf=Ee.unstable_getCurrentPriorityLevel,no=Ee.unstable_ImmediatePriority,Fs=Ee.unstable_UserBlockingPriority,Ur=Ee.unstable_NormalPriority,_f=Ee.unstable_LowPriority,Rs=Ee.unstable_IdlePriority,sl=null,Qe=null;function Nf(e){if(Qe&&typeof Qe.onCommitFiberRoot=="function")try{Qe.onCommitFiberRoot(sl,e,void 0,(e.current.flags&128)===128)}catch{}}var Me=Math.clz32?Math.clz32:Pf,Tf=Math.log,Of=Math.LN2;function Pf(e){return e>>>=0,e===0?32:31-(Tf(e)/Of|0)|0}var dr=64,pr=4194304;function Nn(e){switch(e&-e){case 1:return 1;case 2:return 2;case 4:return 4;case 8:return 8;case 16:return 16;case 32:return 32;case 64:case 128:case 256:case 512:case 1024:case 2048:case 4096:case 8192:case 16384:case 32768:case 65536:case 131072:case 262144:case 524288:case 1048576:case 2097152:return e&4194240;case 4194304:case 8388608:case 16777216:case 33554432:case 67108864:return e&130023424;case 134217728:return 134217728;case 268435456:return 268435456;case 536870912:return 536870912;case 1073741824:return 1073741824;default:return e}}function Vr(e,t){var n=e.pendingLanes;if(n===0)return 0;var r=0,l=e.suspendedLanes,i=e.pingedLanes,o=n&268435455;if(o!==0){var u=o&~l;u!==0?r=Nn(u):(i&=o,i!==0&&(r=Nn(i)))}else o=n&~l,o!==0?r=Nn(o):i!==0&&(r=Nn(i));if(r===0)return 0;if(t!==0&&t!==r&&!(t&l)&&(l=r&-r,i=t&-t,l>=i||l===16&&(i&4194240)!==0))return t;if(r&4&&(r|=n&16),t=e.entangledLanes,t!==0)for(e=e.entanglements,t&=r;0<t;)n=31-Me(t),l=1<<n,r|=e[n],t&=~l;return r}function zf(e,t){switch(e){case 1:case 2:case 4:return t+250;case 8:case 16:case 32:case 64:case 128:case 256:case 512:case 1024:case 2048:case 4096:case 8192:case 16384:case 32768:case 65536:case 131072:case 262144:case 524288:case 1048576:case 2097152:return t+5e3;case 4194304:case 8388608:case 16777216:case 33554432:case 67108864:return-1;case 134217728:case 268435456:case 536870912:case 1073741824:return-1;default:return-1}}function Lf(e,t){for(var n=e.suspendedLanes,r=e.pingedLanes,l=e.expirationTimes,i=e.pendingLanes;0<i;){var o=31-Me(i),u=1<<o,s=l[o];s===-1?(!(u&n)||u&r)&&(l[o]=zf(u,t)):s<=t&&(e.expiredLanes|=u),i&=~u}}function 
hi(e){return e=e.pendingLanes&-1073741825,e!==0?e:e&1073741824?1073741824:0}function As(){var e=dr;return dr<<=1,!(dr&4194240)&&(dr=64),e}function zl(e){for(var t=[],n=0;31>n;n++)t.push(e);return t}function nr(e,t,n){e.pendingLanes|=t,t!==536870912&&(e.suspendedLanes=0,e.pingedLanes=0),e=e.eventTimes,t=31-Me(t),e[t]=n}function If(e,t){var n=e.pendingLanes&~t;e.pendingLanes=t,e.suspendedLanes=0,e.pingedLanes=0,e.expiredLanes&=t,e.mutableReadLanes&=t,e.entangledLanes&=t,t=e.entanglements;var r=e.eventTimes;for(e=e.expirationTimes;0<n;){var l=31-Me(n),i=1<<l;t[l]=0,r[l]=-1,e[l]=-1,n&=~i}}function ro(e,t){var n=e.entangledLanes|=t;for(e=e.entanglements;n;){var r=31-Me(n),l=1<<r;l&t|e[r]&t&&(e[r]|=t),n&=~l}}var A=0;function Ms(e){return e&=-e,1<e?4<e?e&268435455?16:536870912:4:1}var Ds,lo,$s,Us,Vs,yi=!1,mr=[],st=null,at=null,ct=null,Vn=new Map,Bn=new Map,lt=[],Ff="mousedown mouseup touchcancel touchend touchstart auxclick dblclick pointercancel pointerdown pointerup dragend dragstart drop compositionend compositionstart keydown keypress keyup input textInput copy cut paste click change contextmenu reset submit".split(" ");function nu(e,t){switch(e){case"focusin":case"focusout":st=null;break;case"dragenter":case"dragleave":at=null;break;case"mouseover":case"mouseout":ct=null;break;case"pointerover":case"pointerout":Vn.delete(t.pointerId);break;case"gotpointercapture":case"lostpointercapture":Bn.delete(t.pointerId)}}function wn(e,t,n,r,l,i){return e===null||e.nativeEvent!==i?(e={blockedOn:t,domEventName:n,eventSystemFlags:r,nativeEvent:i,targetContainers:[l]},t!==null&&(t=lr(t),t!==null&&lo(t)),e):(e.eventSystemFlags|=r,t=e.targetContainers,l!==null&&t.indexOf(l)===-1&&t.push(l),e)}function Rf(e,t,n,r,l){switch(t){case"focusin":return st=wn(st,e,t,n,r,l),!0;case"dragenter":return at=wn(at,e,t,n,r,l),!0;case"mouseover":return ct=wn(ct,e,t,n,r,l),!0;case"pointerover":var i=l.pointerId;return Vn.set(i,wn(Vn.get(i)||null,e,t,n,r,l)),!0;case"gotpointercapture":return i=l.pointerId,Bn.set(i,wn(Bn.get(i)||null,e,t,n,r,l)),!0}return!1}function Bs(e){var t=jt(e.target);if(t!==null){var n=At(t);if(n!==null){if(t=n.tag,t===13){if(t=Ps(n),t!==null){e.blockedOn=t,Vs(e.priority,function(){$s(n)});return}}else if(t===3&&n.stateNode.current.memoizedState.isDehydrated){e.blockedOn=n.tag===3?n.stateNode.containerInfo:null;return}}}e.blockedOn=null}function Nr(e){if(e.blockedOn!==null)return!1;for(var t=e.targetContainers;0<t.length;){var n=vi(e.domEventName,e.eventSystemFlags,t[0],e.nativeEvent);if(n===null){n=e.nativeEvent;var r=new n.constructor(n.type,n);fi=r,n.target.dispatchEvent(r),fi=null}else return t=lr(n),t!==null&&lo(t),e.blockedOn=n,!1;t.shift()}return!0}function ru(e,t,n){Nr(e)&&n.delete(t)}function Af(){yi=!1,st!==null&&Nr(st)&&(st=null),at!==null&&Nr(at)&&(at=null),ct!==null&&Nr(ct)&&(ct=null),Vn.forEach(ru),Bn.forEach(ru)}function Sn(e,t){e.blockedOn===t&&(e.blockedOn=null,yi||(yi=!0,Ee.unstable_scheduleCallback(Ee.unstable_NormalPriority,Af)))}function Qn(e){function t(l){return Sn(l,e)}if(0<mr.length){Sn(mr[0],e);for(var n=1;n<mr.length;n++){var r=mr[n];r.blockedOn===e&&(r.blockedOn=null)}}for(st!==null&&Sn(st,e),at!==null&&Sn(at,e),ct!==null&&Sn(ct,e),Vn.forEach(t),Bn.forEach(t),n=0;n<lt.length;n++)r=lt[n],r.blockedOn===e&&(r.blockedOn=null);for(;0<lt.length&&(n=lt[0],n.blockedOn===null);)Bs(n),n.blockedOn===null&<.shift()}var bt=et.ReactCurrentBatchConfig,Br=!0;function Mf(e,t,n,r){var l=A,i=bt.transition;bt.transition=null;try{A=1,io(e,t,n,r)}finally{A=l,bt.transition=i}}function 
Df(e,t,n,r){var l=A,i=bt.transition;bt.transition=null;try{A=4,io(e,t,n,r)}finally{A=l,bt.transition=i}}function io(e,t,n,r){if(Br){var l=vi(e,t,n,r);if(l===null)Vl(e,t,r,Qr,n),nu(e,r);else if(Rf(l,e,t,n,r))r.stopPropagation();else if(nu(e,r),t&4&&-1<Ff.indexOf(e)){for(;l!==null;){var i=lr(l);if(i!==null&&Ds(i),i=vi(e,t,n,r),i===null&&Vl(e,t,r,Qr,n),i===l)break;l=i}l!==null&&r.stopPropagation()}else Vl(e,t,r,null,n)}}var Qr=null;function vi(e,t,n,r){if(Qr=null,e=to(r),e=jt(e),e!==null)if(t=At(e),t===null)e=null;else if(n=t.tag,n===13){if(e=Ps(t),e!==null)return e;e=null}else if(n===3){if(t.stateNode.current.memoizedState.isDehydrated)return t.tag===3?t.stateNode.containerInfo:null;e=null}else t!==e&&(e=null);return Qr=e,null}function Qs(e){switch(e){case"cancel":case"click":case"close":case"contextmenu":case"copy":case"cut":case"auxclick":case"dblclick":case"dragend":case"dragstart":case"drop":case"focusin":case"focusout":case"input":case"invalid":case"keydown":case"keypress":case"keyup":case"mousedown":case"mouseup":case"paste":case"pause":case"play":case"pointercancel":case"pointerdown":case"pointerup":case"ratechange":case"reset":case"resize":case"seeked":case"submit":case"touchcancel":case"touchend":case"touchstart":case"volumechange":case"change":case"selectionchange":case"textInput":case"compositionstart":case"compositionend":case"compositionupdate":case"beforeblur":case"afterblur":case"beforeinput":case"blur":case"fullscreenchange":case"focus":case"hashchange":case"popstate":case"select":case"selectstart":return 1;case"drag":case"dragenter":case"dragexit":case"dragleave":case"dragover":case"mousemove":case"mouseout":case"mouseover":case"pointermove":case"pointerout":case"pointerover":case"scroll":case"toggle":case"touchmove":case"wheel":case"mouseenter":case"mouseleave":case"pointerenter":case"pointerleave":return 4;case"message":switch(jf()){case no:return 1;case Fs:return 4;case Ur:case _f:return 16;case Rs:return 536870912;default:return 16}default:return 16}}var ot=null,oo=null,Tr=null;function Hs(){if(Tr)return Tr;var e,t=oo,n=t.length,r,l="value"in ot?ot.value:ot.textContent,i=l.length;for(e=0;e<n&&t[e]===l[e];e++);var o=n-e;for(r=1;r<=o&&t[n-r]===l[i-r];r++);return Tr=l.slice(e,1<r?1-r:void 0)}function Or(e){var t=e.keyCode;return"charCode"in e?(e=e.charCode,e===0&&t===13&&(e=13)):e=t,e===10&&(e=13),32<=e||e===13?e:0}function hr(){return!0}function lu(){return!1}function je(e){function t(n,r,l,i,o){this._reactName=n,this._targetInst=l,this.type=r,this.nativeEvent=i,this.target=o,this.currentTarget=null;for(var u in e)e.hasOwnProperty(u)&&(n=e[u],this[u]=n?n(i):i[u]);return this.isDefaultPrevented=(i.defaultPrevented!=null?i.defaultPrevented:i.returnValue===!1)?hr:lu,this.isPropagationStopped=lu,this}return H(t.prototype,{preventDefault:function(){this.defaultPrevented=!0;var n=this.nativeEvent;n&&(n.preventDefault?n.preventDefault():typeof n.returnValue!="unknown"&&(n.returnValue=!1),this.isDefaultPrevented=hr)},stopPropagation:function(){var n=this.nativeEvent;n&&(n.stopPropagation?n.stopPropagation():typeof n.cancelBubble!="unknown"&&(n.cancelBubble=!0),this.isPropagationStopped=hr)},persist:function(){},isPersistent:hr}),t}var fn={eventPhase:0,bubbles:0,cancelable:0,timeStamp:function(e){return 
e.timeStamp||Date.now()},defaultPrevented:0,isTrusted:0},uo=je(fn),rr=H({},fn,{view:0,detail:0}),$f=je(rr),Ll,Il,xn,al=H({},rr,{screenX:0,screenY:0,clientX:0,clientY:0,pageX:0,pageY:0,ctrlKey:0,shiftKey:0,altKey:0,metaKey:0,getModifierState:so,button:0,buttons:0,relatedTarget:function(e){return e.relatedTarget===void 0?e.fromElement===e.srcElement?e.toElement:e.fromElement:e.relatedTarget},movementX:function(e){return"movementX"in e?e.movementX:(e!==xn&&(xn&&e.type==="mousemove"?(Ll=e.screenX-xn.screenX,Il=e.screenY-xn.screenY):Il=Ll=0,xn=e),Ll)},movementY:function(e){return"movementY"in e?e.movementY:Il}}),iu=je(al),Uf=H({},al,{dataTransfer:0}),Vf=je(Uf),Bf=H({},rr,{relatedTarget:0}),Fl=je(Bf),Qf=H({},fn,{animationName:0,elapsedTime:0,pseudoElement:0}),Hf=je(Qf),Wf=H({},fn,{clipboardData:function(e){return"clipboardData"in e?e.clipboardData:window.clipboardData}}),Kf=je(Wf),Yf=H({},fn,{data:0}),ou=je(Yf),Xf={Esc:"Escape",Spacebar:" ",Left:"ArrowLeft",Up:"ArrowUp",Right:"ArrowRight",Down:"ArrowDown",Del:"Delete",Win:"OS",Menu:"ContextMenu",Apps:"ContextMenu",Scroll:"ScrollLock",MozPrintableKey:"Unidentified"},Gf={8:"Backspace",9:"Tab",12:"Clear",13:"Enter",16:"Shift",17:"Control",18:"Alt",19:"Pause",20:"CapsLock",27:"Escape",32:" ",33:"PageUp",34:"PageDown",35:"End",36:"Home",37:"ArrowLeft",38:"ArrowUp",39:"ArrowRight",40:"ArrowDown",45:"Insert",46:"Delete",112:"F1",113:"F2",114:"F3",115:"F4",116:"F5",117:"F6",118:"F7",119:"F8",120:"F9",121:"F10",122:"F11",123:"F12",144:"NumLock",145:"ScrollLock",224:"Meta"},Zf={Alt:"altKey",Control:"ctrlKey",Meta:"metaKey",Shift:"shiftKey"};function qf(e){var t=this.nativeEvent;return t.getModifierState?t.getModifierState(e):(e=Zf[e])?!!t[e]:!1}function so(){return qf}var Jf=H({},rr,{key:function(e){if(e.key){var t=Xf[e.key]||e.key;if(t!=="Unidentified")return t}return e.type==="keypress"?(e=Or(e),e===13?"Enter":String.fromCharCode(e)):e.type==="keydown"||e.type==="keyup"?Gf[e.keyCode]||"Unidentified":""},code:0,location:0,ctrlKey:0,shiftKey:0,altKey:0,metaKey:0,repeat:0,locale:0,getModifierState:so,charCode:function(e){return e.type==="keypress"?Or(e):0},keyCode:function(e){return e.type==="keydown"||e.type==="keyup"?e.keyCode:0},which:function(e){return e.type==="keypress"?Or(e):e.type==="keydown"||e.type==="keyup"?e.keyCode:0}}),bf=je(Jf),ed=H({},al,{pointerId:0,width:0,height:0,pressure:0,tangentialPressure:0,tiltX:0,tiltY:0,twist:0,pointerType:0,isPrimary:0}),uu=je(ed),td=H({},rr,{touches:0,targetTouches:0,changedTouches:0,altKey:0,metaKey:0,ctrlKey:0,shiftKey:0,getModifierState:so}),nd=je(td),rd=H({},fn,{propertyName:0,elapsedTime:0,pseudoElement:0}),ld=je(rd),id=H({},al,{deltaX:function(e){return"deltaX"in e?e.deltaX:"wheelDeltaX"in e?-e.wheelDeltaX:0},deltaY:function(e){return"deltaY"in e?e.deltaY:"wheelDeltaY"in e?-e.wheelDeltaY:"wheelDelta"in e?-e.wheelDelta:0},deltaZ:0,deltaMode:0}),od=je(id),ud=[9,13,27,32],ao=Ze&&"CompositionEvent"in window,zn=null;Ze&&"documentMode"in document&&(zn=document.documentMode);var sd=Ze&&"TextEvent"in window&&!zn,Ws=Ze&&(!ao||zn&&8<zn&&11>=zn),su=String.fromCharCode(32),au=!1;function Ks(e,t){switch(e){case"keyup":return ud.indexOf(t.keyCode)!==-1;case"keydown":return t.keyCode!==229;case"keypress":case"mousedown":case"focusout":return!0;default:return!1}}function Ys(e){return e=e.detail,typeof e=="object"&&"data"in e?e.data:null}var Ut=!1;function ad(e,t){switch(e){case"compositionend":return Ys(t);case"keypress":return t.which!==32?null:(au=!0,su);case"textInput":return e=t.data,e===su&&au?null:e;default:return 
null}}function cd(e,t){if(Ut)return e==="compositionend"||!ao&&Ks(e,t)?(e=Hs(),Tr=oo=ot=null,Ut=!1,e):null;switch(e){case"paste":return null;case"keypress":if(!(t.ctrlKey||t.altKey||t.metaKey)||t.ctrlKey&&t.altKey){if(t.char&&1<t.char.length)return t.char;if(t.which)return String.fromCharCode(t.which)}return null;case"compositionend":return Ws&&t.locale!=="ko"?null:t.data;default:return null}}var fd={color:!0,date:!0,datetime:!0,"datetime-local":!0,email:!0,month:!0,number:!0,password:!0,range:!0,search:!0,tel:!0,text:!0,time:!0,url:!0,week:!0};function cu(e){var t=e&&e.nodeName&&e.nodeName.toLowerCase();return t==="input"?!!fd[e.type]:t==="textarea"}function Xs(e,t,n,r){js(r),t=Hr(t,"onChange"),0<t.length&&(n=new uo("onChange","change",null,n,r),e.push({event:n,listeners:t}))}var Ln=null,Hn=null;function dd(e){ia(e,0)}function cl(e){var t=Qt(e);if(gs(t))return e}function pd(e,t){if(e==="change")return t}var Gs=!1;if(Ze){var Rl;if(Ze){var Al="oninput"in document;if(!Al){var fu=document.createElement("div");fu.setAttribute("oninput","return;"),Al=typeof fu.oninput=="function"}Rl=Al}else Rl=!1;Gs=Rl&&(!document.documentMode||9<document.documentMode)}function du(){Ln&&(Ln.detachEvent("onpropertychange",Zs),Hn=Ln=null)}function Zs(e){if(e.propertyName==="value"&&cl(Hn)){var t=[];Xs(t,Hn,e,to(e)),Os(dd,t)}}function md(e,t,n){e==="focusin"?(du(),Ln=t,Hn=n,Ln.attachEvent("onpropertychange",Zs)):e==="focusout"&&du()}function hd(e){if(e==="selectionchange"||e==="keyup"||e==="keydown")return cl(Hn)}function yd(e,t){if(e==="click")return cl(t)}function vd(e,t){if(e==="input"||e==="change")return cl(t)}function gd(e,t){return e===t&&(e!==0||1/e===1/t)||e!==e&&t!==t}var $e=typeof Object.is=="function"?Object.is:gd;function Wn(e,t){if($e(e,t))return!0;if(typeof e!="object"||e===null||typeof t!="object"||t===null)return!1;var n=Object.keys(e),r=Object.keys(t);if(n.length!==r.length)return!1;for(r=0;r<n.length;r++){var l=n[r];if(!bl.call(t,l)||!$e(e[l],t[l]))return!1}return!0}function pu(e){for(;e&&e.firstChild;)e=e.firstChild;return e}function mu(e,t){var n=pu(e);e=0;for(var r;n;){if(n.nodeType===3){if(r=e+n.textContent.length,e<=t&&r>=t)return{node:n,offset:t-e};e=r}e:{for(;n;){if(n.nextSibling){n=n.nextSibling;break e}n=n.parentNode}n=void 0}n=pu(n)}}function qs(e,t){return e&&t?e===t?!0:e&&e.nodeType===3?!1:t&&t.nodeType===3?qs(e,t.parentNode):"contains"in e?e.contains(t):e.compareDocumentPosition?!!(e.compareDocumentPosition(t)&16):!1:!1}function Js(){for(var e=window,t=Mr();t instanceof e.HTMLIFrameElement;){try{var n=typeof t.contentWindow.location.href=="string"}catch{n=!1}if(n)e=t.contentWindow;else break;t=Mr(e.document)}return t}function co(e){var t=e&&e.nodeName&&e.nodeName.toLowerCase();return t&&(t==="input"&&(e.type==="text"||e.type==="search"||e.type==="tel"||e.type==="url"||e.type==="password")||t==="textarea"||e.contentEditable==="true")}function wd(e){var t=Js(),n=e.focusedElem,r=e.selectionRange;if(t!==n&&n&&n.ownerDocument&&qs(n.ownerDocument.documentElement,n)){if(r!==null&&co(n)){if(t=r.start,e=r.end,e===void 0&&(e=t),"selectionStart"in n)n.selectionStart=t,n.selectionEnd=Math.min(e,n.value.length);else if(e=(t=n.ownerDocument||document)&&t.defaultView||window,e.getSelection){e=e.getSelection();var l=n.textContent.length,i=Math.min(r.start,l);r=r.end===void 0?i:Math.min(r.end,l),!e.extend&&i>r&&(l=r,r=i,i=l),l=mu(n,i);var 
o=mu(n,r);l&&o&&(e.rangeCount!==1||e.anchorNode!==l.node||e.anchorOffset!==l.offset||e.focusNode!==o.node||e.focusOffset!==o.offset)&&(t=t.createRange(),t.setStart(l.node,l.offset),e.removeAllRanges(),i>r?(e.addRange(t),e.extend(o.node,o.offset)):(t.setEnd(o.node,o.offset),e.addRange(t)))}}for(t=[],e=n;e=e.parentNode;)e.nodeType===1&&t.push({element:e,left:e.scrollLeft,top:e.scrollTop});for(typeof n.focus=="function"&&n.focus(),n=0;n<t.length;n++)e=t[n],e.element.scrollLeft=e.left,e.element.scrollTop=e.top}}var Sd=Ze&&"documentMode"in document&&11>=document.documentMode,Vt=null,gi=null,In=null,wi=!1;function hu(e,t,n){var r=n.window===n?n.document:n.nodeType===9?n:n.ownerDocument;wi||Vt==null||Vt!==Mr(r)||(r=Vt,"selectionStart"in r&&co(r)?r={start:r.selectionStart,end:r.selectionEnd}:(r=(r.ownerDocument&&r.ownerDocument.defaultView||window).getSelection(),r={anchorNode:r.anchorNode,anchorOffset:r.anchorOffset,focusNode:r.focusNode,focusOffset:r.focusOffset}),In&&Wn(In,r)||(In=r,r=Hr(gi,"onSelect"),0<r.length&&(t=new uo("onSelect","select",null,t,n),e.push({event:t,listeners:r}),t.target=Vt)))}function yr(e,t){var n={};return n[e.toLowerCase()]=t.toLowerCase(),n["Webkit"+e]="webkit"+t,n["Moz"+e]="moz"+t,n}var Bt={animationend:yr("Animation","AnimationEnd"),animationiteration:yr("Animation","AnimationIteration"),animationstart:yr("Animation","AnimationStart"),transitionend:yr("Transition","TransitionEnd")},Ml={},bs={};Ze&&(bs=document.createElement("div").style,"AnimationEvent"in window||(delete Bt.animationend.animation,delete Bt.animationiteration.animation,delete Bt.animationstart.animation),"TransitionEvent"in window||delete Bt.transitionend.transition);function fl(e){if(Ml[e])return Ml[e];if(!Bt[e])return e;var t=Bt[e],n;for(n in t)if(t.hasOwnProperty(n)&&n in bs)return Ml[e]=t[n];return e}var ea=fl("animationend"),ta=fl("animationiteration"),na=fl("animationstart"),ra=fl("transitionend"),la=new Map,yu="abort auxClick cancel canPlay canPlayThrough click close contextMenu copy cut drag dragEnd dragEnter dragExit dragLeave dragOver dragStart drop durationChange emptied encrypted ended error gotPointerCapture input invalid keyDown keyPress keyUp load loadedData loadedMetadata loadStart lostPointerCapture mouseDown mouseMove mouseOut mouseOver mouseUp paste pause play playing pointerCancel pointerDown pointerMove pointerOut pointerOver pointerUp progress rateChange reset resize seeked seeking stalled submit suspend timeUpdate touchCancel touchEnd touchStart volumeChange scroll toggle touchMove waiting wheel".split(" ");function gt(e,t){la.set(e,t),Rt(t,[e])}for(var Dl=0;Dl<yu.length;Dl++){var $l=yu[Dl],xd=$l.toLowerCase(),kd=$l[0].toUpperCase()+$l.slice(1);gt(xd,"on"+kd)}gt(ea,"onAnimationEnd");gt(ta,"onAnimationIteration");gt(na,"onAnimationStart");gt("dblclick","onDoubleClick");gt("focusin","onFocus");gt("focusout","onBlur");gt(ra,"onTransitionEnd");nn("onMouseEnter",["mouseout","mouseover"]);nn("onMouseLeave",["mouseout","mouseover"]);nn("onPointerEnter",["pointerout","pointerover"]);nn("onPointerLeave",["pointerout","pointerover"]);Rt("onChange","change click focusin focusout input keydown keyup selectionchange".split(" "));Rt("onSelect","focusout contextmenu dragend focusin keydown keyup mousedown mouseup selectionchange".split(" "));Rt("onBeforeInput",["compositionend","keypress","textInput","paste"]);Rt("onCompositionEnd","compositionend focusout keydown keypress keyup mousedown".split(" "));Rt("onCompositionStart","compositionstart focusout keydown keypress keyup mousedown".split(" 
"));Rt("onCompositionUpdate","compositionupdate focusout keydown keypress keyup mousedown".split(" "));var Tn="abort canplay canplaythrough durationchange emptied encrypted ended error loadeddata loadedmetadata loadstart pause play playing progress ratechange resize seeked seeking stalled suspend timeupdate volumechange waiting".split(" "),Ed=new Set("cancel close invalid load scroll toggle".split(" ").concat(Tn));function vu(e,t,n){var r=e.type||"unknown-event";e.currentTarget=n,xf(r,t,void 0,e),e.currentTarget=null}function ia(e,t){t=(t&4)!==0;for(var n=0;n<e.length;n++){var r=e[n],l=r.event;r=r.listeners;e:{var i=void 0;if(t)for(var o=r.length-1;0<=o;o--){var u=r[o],s=u.instance,c=u.currentTarget;if(u=u.listener,s!==i&&l.isPropagationStopped())break e;vu(l,u,c),i=s}else for(o=0;o<r.length;o++){if(u=r[o],s=u.instance,c=u.currentTarget,u=u.listener,s!==i&&l.isPropagationStopped())break e;vu(l,u,c),i=s}}}if($r)throw e=mi,$r=!1,mi=null,e}function $(e,t){var n=t[Ci];n===void 0&&(n=t[Ci]=new Set);var r=e+"__bubble";n.has(r)||(oa(t,e,2,!1),n.add(r))}function Ul(e,t,n){var r=0;t&&(r|=4),oa(n,e,r,t)}var vr="_reactListening"+Math.random().toString(36).slice(2);function Kn(e){if(!e[vr]){e[vr]=!0,ps.forEach(function(n){n!=="selectionchange"&&(Ed.has(n)||Ul(n,!1,e),Ul(n,!0,e))});var t=e.nodeType===9?e:e.ownerDocument;t===null||t[vr]||(t[vr]=!0,Ul("selectionchange",!1,t))}}function oa(e,t,n,r){switch(Qs(t)){case 1:var l=Mf;break;case 4:l=Df;break;default:l=io}n=l.bind(null,t,n,e),l=void 0,!pi||t!=="touchstart"&&t!=="touchmove"&&t!=="wheel"||(l=!0),r?l!==void 0?e.addEventListener(t,n,{capture:!0,passive:l}):e.addEventListener(t,n,!0):l!==void 0?e.addEventListener(t,n,{passive:l}):e.addEventListener(t,n,!1)}function Vl(e,t,n,r,l){var i=r;if(!(t&1)&&!(t&2)&&r!==null)e:for(;;){if(r===null)return;var o=r.tag;if(o===3||o===4){var u=r.stateNode.containerInfo;if(u===l||u.nodeType===8&&u.parentNode===l)break;if(o===4)for(o=r.return;o!==null;){var s=o.tag;if((s===3||s===4)&&(s=o.stateNode.containerInfo,s===l||s.nodeType===8&&s.parentNode===l))return;o=o.return}for(;u!==null;){if(o=jt(u),o===null)return;if(s=o.tag,s===5||s===6){r=i=o;continue e}u=u.parentNode}}r=r.return}Os(function(){var c=i,h=to(n),f=[];e:{var v=la.get(e);if(v!==void 0){var g=uo,w=e;switch(e){case"keypress":if(Or(n)===0)break e;case"keydown":case"keyup":g=bf;break;case"focusin":w="focus",g=Fl;break;case"focusout":w="blur",g=Fl;break;case"beforeblur":case"afterblur":g=Fl;break;case"click":if(n.button===2)break e;case"auxclick":case"dblclick":case"mousedown":case"mousemove":case"mouseup":case"mouseout":case"mouseover":case"contextmenu":g=iu;break;case"drag":case"dragend":case"dragenter":case"dragexit":case"dragleave":case"dragover":case"dragstart":case"drop":g=Vf;break;case"touchcancel":case"touchend":case"touchmove":case"touchstart":g=nd;break;case ea:case ta:case na:g=Hf;break;case ra:g=ld;break;case"scroll":g=$f;break;case"wheel":g=od;break;case"copy":case"cut":case"paste":g=Kf;break;case"gotpointercapture":case"lostpointercapture":case"pointercancel":case"pointerdown":case"pointermove":case"pointerout":case"pointerover":case"pointerup":g=uu}var k=(t&4)!==0,M=!k&&e==="scroll",p=k?v!==null?v+"Capture":null:v;k=[];for(var d=c,y;d!==null;){y=d;var S=y.stateNode;if(y.tag===5&&S!==null&&(y=S,p!==null&&(S=Un(d,p),S!=null&&k.push(Yn(d,S,y)))),M)break;d=d.return}0<k.length&&(v=new 
g(v,w,null,n,h),f.push({event:v,listeners:k}))}}if(!(t&7)){e:{if(v=e==="mouseover"||e==="pointerover",g=e==="mouseout"||e==="pointerout",v&&n!==fi&&(w=n.relatedTarget||n.fromElement)&&(jt(w)||w[qe]))break e;if((g||v)&&(v=h.window===h?h:(v=h.ownerDocument)?v.defaultView||v.parentWindow:window,g?(w=n.relatedTarget||n.toElement,g=c,w=w?jt(w):null,w!==null&&(M=At(w),w!==M||w.tag!==5&&w.tag!==6)&&(w=null)):(g=null,w=c),g!==w)){if(k=iu,S="onMouseLeave",p="onMouseEnter",d="mouse",(e==="pointerout"||e==="pointerover")&&(k=uu,S="onPointerLeave",p="onPointerEnter",d="pointer"),M=g==null?v:Qt(g),y=w==null?v:Qt(w),v=new k(S,d+"leave",g,n,h),v.target=M,v.relatedTarget=y,S=null,jt(h)===c&&(k=new k(p,d+"enter",w,n,h),k.target=y,k.relatedTarget=M,S=k),M=S,g&&w)t:{for(k=g,p=w,d=0,y=k;y;y=Mt(y))d++;for(y=0,S=p;S;S=Mt(S))y++;for(;0<d-y;)k=Mt(k),d--;for(;0<y-d;)p=Mt(p),y--;for(;d--;){if(k===p||p!==null&&k===p.alternate)break t;k=Mt(k),p=Mt(p)}k=null}else k=null;g!==null&&gu(f,v,g,k,!1),w!==null&&M!==null&&gu(f,M,w,k,!0)}}e:{if(v=c?Qt(c):window,g=v.nodeName&&v.nodeName.toLowerCase(),g==="select"||g==="input"&&v.type==="file")var C=pd;else if(cu(v))if(Gs)C=vd;else{C=hd;var _=md}else(g=v.nodeName)&&g.toLowerCase()==="input"&&(v.type==="checkbox"||v.type==="radio")&&(C=yd);if(C&&(C=C(e,c))){Xs(f,C,n,h);break e}_&&_(e,v,c),e==="focusout"&&(_=v._wrapperState)&&_.controlled&&v.type==="number"&&oi(v,"number",v.value)}switch(_=c?Qt(c):window,e){case"focusin":(cu(_)||_.contentEditable==="true")&&(Vt=_,gi=c,In=null);break;case"focusout":In=gi=Vt=null;break;case"mousedown":wi=!0;break;case"contextmenu":case"mouseup":case"dragend":wi=!1,hu(f,n,h);break;case"selectionchange":if(Sd)break;case"keydown":case"keyup":hu(f,n,h)}var N;if(ao)e:{switch(e){case"compositionstart":var T="onCompositionStart";break e;case"compositionend":T="onCompositionEnd";break e;case"compositionupdate":T="onCompositionUpdate";break e}T=void 0}else Ut?Ks(e,n)&&(T="onCompositionEnd"):e==="keydown"&&n.keyCode===229&&(T="onCompositionStart");T&&(Ws&&n.locale!=="ko"&&(Ut||T!=="onCompositionStart"?T==="onCompositionEnd"&&Ut&&(N=Hs()):(ot=h,oo="value"in ot?ot.value:ot.textContent,Ut=!0)),_=Hr(c,T),0<_.length&&(T=new ou(T,e,null,n,h),f.push({event:T,listeners:_}),N?T.data=N:(N=Ys(n),N!==null&&(T.data=N)))),(N=sd?ad(e,n):cd(e,n))&&(c=Hr(c,"onBeforeInput"),0<c.length&&(h=new ou("onBeforeInput","beforeinput",null,n,h),f.push({event:h,listeners:c}),h.data=N))}ia(f,t)})}function Yn(e,t,n){return{instance:e,listener:t,currentTarget:n}}function Hr(e,t){for(var n=t+"Capture",r=[];e!==null;){var l=e,i=l.stateNode;l.tag===5&&i!==null&&(l=i,i=Un(e,n),i!=null&&r.unshift(Yn(e,i,l)),i=Un(e,t),i!=null&&r.push(Yn(e,i,l))),e=e.return}return r}function Mt(e){if(e===null)return null;do e=e.return;while(e&&e.tag!==5);return e||null}function gu(e,t,n,r,l){for(var i=t._reactName,o=[];n!==null&&n!==r;){var u=n,s=u.alternate,c=u.stateNode;if(s!==null&&s===r)break;u.tag===5&&c!==null&&(u=c,l?(s=Un(n,i),s!=null&&o.unshift(Yn(n,s,u))):l||(s=Un(n,i),s!=null&&o.push(Yn(n,s,u)))),n=n.return}o.length!==0&&e.push({event:t,listeners:o})}var Cd=/\r\n?/g,jd=/\u0000|\uFFFD/g;function wu(e){return(typeof e=="string"?e:""+e).replace(Cd,` -`).replace(jd,"")}function gr(e,t,n){if(t=wu(t),wu(e)!==t&&n)throw Error(x(425))}function Wr(){}var Si=null,xi=null;function ki(e,t){return e==="textarea"||e==="noscript"||typeof t.children=="string"||typeof t.children=="number"||typeof t.dangerouslySetInnerHTML=="object"&&t.dangerouslySetInnerHTML!==null&&t.dangerouslySetInnerHTML.__html!=null}var Ei=typeof 
setTimeout=="function"?setTimeout:void 0,_d=typeof clearTimeout=="function"?clearTimeout:void 0,Su=typeof Promise=="function"?Promise:void 0,Nd=typeof queueMicrotask=="function"?queueMicrotask:typeof Su<"u"?function(e){return Su.resolve(null).then(e).catch(Td)}:Ei;function Td(e){setTimeout(function(){throw e})}function Bl(e,t){var n=t,r=0;do{var l=n.nextSibling;if(e.removeChild(n),l&&l.nodeType===8)if(n=l.data,n==="/$"){if(r===0){e.removeChild(l),Qn(t);return}r--}else n!=="$"&&n!=="$?"&&n!=="$!"||r++;n=l}while(n);Qn(t)}function ft(e){for(;e!=null;e=e.nextSibling){var t=e.nodeType;if(t===1||t===3)break;if(t===8){if(t=e.data,t==="$"||t==="$!"||t==="$?")break;if(t==="/$")return null}}return e}function xu(e){e=e.previousSibling;for(var t=0;e;){if(e.nodeType===8){var n=e.data;if(n==="$"||n==="$!"||n==="$?"){if(t===0)return e;t--}else n==="/$"&&t++}e=e.previousSibling}return null}var dn=Math.random().toString(36).slice(2),Be="__reactFiber$"+dn,Xn="__reactProps$"+dn,qe="__reactContainer$"+dn,Ci="__reactEvents$"+dn,Od="__reactListeners$"+dn,Pd="__reactHandles$"+dn;function jt(e){var t=e[Be];if(t)return t;for(var n=e.parentNode;n;){if(t=n[qe]||n[Be]){if(n=t.alternate,t.child!==null||n!==null&&n.child!==null)for(e=xu(e);e!==null;){if(n=e[Be])return n;e=xu(e)}return t}e=n,n=e.parentNode}return null}function lr(e){return e=e[Be]||e[qe],!e||e.tag!==5&&e.tag!==6&&e.tag!==13&&e.tag!==3?null:e}function Qt(e){if(e.tag===5||e.tag===6)return e.stateNode;throw Error(x(33))}function dl(e){return e[Xn]||null}var ji=[],Ht=-1;function wt(e){return{current:e}}function U(e){0>Ht||(e.current=ji[Ht],ji[Ht]=null,Ht--)}function D(e,t){Ht++,ji[Ht]=e.current,e.current=t}var vt={},ce=wt(vt),ve=wt(!1),Pt=vt;function rn(e,t){var n=e.type.contextTypes;if(!n)return vt;var r=e.stateNode;if(r&&r.__reactInternalMemoizedUnmaskedChildContext===t)return r.__reactInternalMemoizedMaskedChildContext;var l={},i;for(i in n)l[i]=t[i];return r&&(e=e.stateNode,e.__reactInternalMemoizedUnmaskedChildContext=t,e.__reactInternalMemoizedMaskedChildContext=l),l}function ge(e){return e=e.childContextTypes,e!=null}function Kr(){U(ve),U(ce)}function ku(e,t,n){if(ce.current!==vt)throw Error(x(168));D(ce,t),D(ve,n)}function ua(e,t,n){var r=e.stateNode;if(t=t.childContextTypes,typeof r.getChildContext!="function")return n;r=r.getChildContext();for(var l in r)if(!(l in t))throw Error(x(108,mf(e)||"Unknown",l));return H({},n,r)}function Yr(e){return e=(e=e.stateNode)&&e.__reactInternalMemoizedMergedChildContext||vt,Pt=ce.current,D(ce,e),D(ve,ve.current),!0}function Eu(e,t,n){var r=e.stateNode;if(!r)throw Error(x(169));n?(e=ua(e,t,Pt),r.__reactInternalMemoizedMergedChildContext=e,U(ve),U(ce),D(ce,e)):U(ve),D(ve,n)}var Ke=null,pl=!1,Ql=!1;function sa(e){Ke===null?Ke=[e]:Ke.push(e)}function zd(e){pl=!0,sa(e)}function St(){if(!Ql&&Ke!==null){Ql=!0;var e=0,t=A;try{var n=Ke;for(A=1;e<n.length;e++){var r=n[e];do r=r(!0);while(r!==null)}Ke=null,pl=!1}catch(l){throw Ke!==null&&(Ke=Ke.slice(e+1)),Is(no,St),l}finally{A=t,Ql=!1}}return null}var Wt=[],Kt=0,Xr=null,Gr=0,_e=[],Ne=0,zt=null,Ye=1,Xe="";function Et(e,t){Wt[Kt++]=Gr,Wt[Kt++]=Xr,Xr=e,Gr=t}function aa(e,t,n){_e[Ne++]=Ye,_e[Ne++]=Xe,_e[Ne++]=zt,zt=e;var r=Ye;e=Xe;var l=32-Me(r)-1;r&=~(1<<l),n+=1;var i=32-Me(t)+l;if(30<i){var o=l-l%5;i=(r&(1<<o)-1).toString(32),r>>=o,l-=o,Ye=1<<32-Me(t)+l|n<<l|r,Xe=i+e}else Ye=1<<i|n<<l|r,Xe=e}function fo(e){e.return!==null&&(Et(e,1),aa(e,1,0))}function 
po(e){for(;e===Xr;)Xr=Wt[--Kt],Wt[Kt]=null,Gr=Wt[--Kt],Wt[Kt]=null;for(;e===zt;)zt=_e[--Ne],_e[Ne]=null,Xe=_e[--Ne],_e[Ne]=null,Ye=_e[--Ne],_e[Ne]=null}var ke=null,xe=null,V=!1,Ae=null;function ca(e,t){var n=Te(5,null,null,0);n.elementType="DELETED",n.stateNode=t,n.return=e,t=e.deletions,t===null?(e.deletions=[n],e.flags|=16):t.push(n)}function Cu(e,t){switch(e.tag){case 5:var n=e.type;return t=t.nodeType!==1||n.toLowerCase()!==t.nodeName.toLowerCase()?null:t,t!==null?(e.stateNode=t,ke=e,xe=ft(t.firstChild),!0):!1;case 6:return t=e.pendingProps===""||t.nodeType!==3?null:t,t!==null?(e.stateNode=t,ke=e,xe=null,!0):!1;case 13:return t=t.nodeType!==8?null:t,t!==null?(n=zt!==null?{id:Ye,overflow:Xe}:null,e.memoizedState={dehydrated:t,treeContext:n,retryLane:1073741824},n=Te(18,null,null,0),n.stateNode=t,n.return=e,e.child=n,ke=e,xe=null,!0):!1;default:return!1}}function _i(e){return(e.mode&1)!==0&&(e.flags&128)===0}function Ni(e){if(V){var t=xe;if(t){var n=t;if(!Cu(e,t)){if(_i(e))throw Error(x(418));t=ft(n.nextSibling);var r=ke;t&&Cu(e,t)?ca(r,n):(e.flags=e.flags&-4097|2,V=!1,ke=e)}}else{if(_i(e))throw Error(x(418));e.flags=e.flags&-4097|2,V=!1,ke=e}}}function ju(e){for(e=e.return;e!==null&&e.tag!==5&&e.tag!==3&&e.tag!==13;)e=e.return;ke=e}function wr(e){if(e!==ke)return!1;if(!V)return ju(e),V=!0,!1;var t;if((t=e.tag!==3)&&!(t=e.tag!==5)&&(t=e.type,t=t!=="head"&&t!=="body"&&!ki(e.type,e.memoizedProps)),t&&(t=xe)){if(_i(e))throw fa(),Error(x(418));for(;t;)ca(e,t),t=ft(t.nextSibling)}if(ju(e),e.tag===13){if(e=e.memoizedState,e=e!==null?e.dehydrated:null,!e)throw Error(x(317));e:{for(e=e.nextSibling,t=0;e;){if(e.nodeType===8){var n=e.data;if(n==="/$"){if(t===0){xe=ft(e.nextSibling);break e}t--}else n!=="$"&&n!=="$!"&&n!=="$?"||t++}e=e.nextSibling}xe=null}}else xe=ke?ft(e.stateNode.nextSibling):null;return!0}function fa(){for(var e=xe;e;)e=ft(e.nextSibling)}function ln(){xe=ke=null,V=!1}function mo(e){Ae===null?Ae=[e]:Ae.push(e)}var Ld=et.ReactCurrentBatchConfig;function Fe(e,t){if(e&&e.defaultProps){t=H({},t),e=e.defaultProps;for(var n in e)t[n]===void 0&&(t[n]=e[n]);return t}return t}var Zr=wt(null),qr=null,Yt=null,ho=null;function yo(){ho=Yt=qr=null}function vo(e){var t=Zr.current;U(Zr),e._currentValue=t}function Ti(e,t,n){for(;e!==null;){var r=e.alternate;if((e.childLanes&t)!==t?(e.childLanes|=t,r!==null&&(r.childLanes|=t)):r!==null&&(r.childLanes&t)!==t&&(r.childLanes|=t),e===n)break;e=e.return}}function en(e,t){qr=e,ho=Yt=null,e=e.dependencies,e!==null&&e.firstContext!==null&&(e.lanes&t&&(ye=!0),e.firstContext=null)}function Pe(e){var t=e._currentValue;if(ho!==e)if(e={context:e,memoizedValue:t,next:null},Yt===null){if(qr===null)throw Error(x(308));Yt=e,qr.dependencies={lanes:0,firstContext:e}}else Yt=Yt.next=e;return t}var _t=null;function go(e){_t===null?_t=[e]:_t.push(e)}function da(e,t,n,r){var l=t.interleaved;return l===null?(n.next=n,go(t)):(n.next=l.next,l.next=n),t.interleaved=n,Je(e,r)}function Je(e,t){e.lanes|=t;var n=e.alternate;for(n!==null&&(n.lanes|=t),n=e,e=e.return;e!==null;)e.childLanes|=t,n=e.alternate,n!==null&&(n.childLanes|=t),n=e,e=e.return;return n.tag===3?n.stateNode:null}var rt=!1;function wo(e){e.updateQueue={baseState:e.memoizedState,firstBaseUpdate:null,lastBaseUpdate:null,shared:{pending:null,interleaved:null,lanes:0},effects:null}}function pa(e,t){e=e.updateQueue,t.updateQueue===e&&(t.updateQueue={baseState:e.baseState,firstBaseUpdate:e.firstBaseUpdate,lastBaseUpdate:e.lastBaseUpdate,shared:e.shared,effects:e.effects})}function 
Ge(e,t){return{eventTime:e,lane:t,tag:0,payload:null,callback:null,next:null}}function dt(e,t,n){var r=e.updateQueue;if(r===null)return null;if(r=r.shared,R&2){var l=r.pending;return l===null?t.next=t:(t.next=l.next,l.next=t),r.pending=t,Je(e,n)}return l=r.interleaved,l===null?(t.next=t,go(r)):(t.next=l.next,l.next=t),r.interleaved=t,Je(e,n)}function Pr(e,t,n){if(t=t.updateQueue,t!==null&&(t=t.shared,(n&4194240)!==0)){var r=t.lanes;r&=e.pendingLanes,n|=r,t.lanes=n,ro(e,n)}}function _u(e,t){var n=e.updateQueue,r=e.alternate;if(r!==null&&(r=r.updateQueue,n===r)){var l=null,i=null;if(n=n.firstBaseUpdate,n!==null){do{var o={eventTime:n.eventTime,lane:n.lane,tag:n.tag,payload:n.payload,callback:n.callback,next:null};i===null?l=i=o:i=i.next=o,n=n.next}while(n!==null);i===null?l=i=t:i=i.next=t}else l=i=t;n={baseState:r.baseState,firstBaseUpdate:l,lastBaseUpdate:i,shared:r.shared,effects:r.effects},e.updateQueue=n;return}e=n.lastBaseUpdate,e===null?n.firstBaseUpdate=t:e.next=t,n.lastBaseUpdate=t}function Jr(e,t,n,r){var l=e.updateQueue;rt=!1;var i=l.firstBaseUpdate,o=l.lastBaseUpdate,u=l.shared.pending;if(u!==null){l.shared.pending=null;var s=u,c=s.next;s.next=null,o===null?i=c:o.next=c,o=s;var h=e.alternate;h!==null&&(h=h.updateQueue,u=h.lastBaseUpdate,u!==o&&(u===null?h.firstBaseUpdate=c:u.next=c,h.lastBaseUpdate=s))}if(i!==null){var f=l.baseState;o=0,h=c=s=null,u=i;do{var v=u.lane,g=u.eventTime;if((r&v)===v){h!==null&&(h=h.next={eventTime:g,lane:0,tag:u.tag,payload:u.payload,callback:u.callback,next:null});e:{var w=e,k=u;switch(v=t,g=n,k.tag){case 1:if(w=k.payload,typeof w=="function"){f=w.call(g,f,v);break e}f=w;break e;case 3:w.flags=w.flags&-65537|128;case 0:if(w=k.payload,v=typeof w=="function"?w.call(g,f,v):w,v==null)break e;f=H({},f,v);break e;case 2:rt=!0}}u.callback!==null&&u.lane!==0&&(e.flags|=64,v=l.effects,v===null?l.effects=[u]:v.push(u))}else g={eventTime:g,lane:v,tag:u.tag,payload:u.payload,callback:u.callback,next:null},h===null?(c=h=g,s=f):h=h.next=g,o|=v;if(u=u.next,u===null){if(u=l.shared.pending,u===null)break;v=u,u=v.next,v.next=null,l.lastBaseUpdate=v,l.shared.pending=null}}while(1);if(h===null&&(s=f),l.baseState=s,l.firstBaseUpdate=c,l.lastBaseUpdate=h,t=l.shared.interleaved,t!==null){l=t;do o|=l.lane,l=l.next;while(l!==t)}else i===null&&(l.shared.lanes=0);It|=o,e.lanes=o,e.memoizedState=f}}function Nu(e,t,n){if(e=t.effects,t.effects=null,e!==null)for(t=0;t<e.length;t++){var r=e[t],l=r.callback;if(l!==null){if(r.callback=null,r=n,typeof l!="function")throw Error(x(191,l));l.call(r)}}}var ma=new ds.Component().refs;function Oi(e,t,n,r){t=e.memoizedState,n=n(r,t),n=n==null?t:H({},t,n),e.memoizedState=n,e.lanes===0&&(e.updateQueue.baseState=n)}var ml={isMounted:function(e){return(e=e._reactInternals)?At(e)===e:!1},enqueueSetState:function(e,t,n){e=e._reactInternals;var r=de(),l=mt(e),i=Ge(r,l);i.payload=t,n!=null&&(i.callback=n),t=dt(e,i,l),t!==null&&(De(t,e,l,r),Pr(t,e,l))},enqueueReplaceState:function(e,t,n){e=e._reactInternals;var r=de(),l=mt(e),i=Ge(r,l);i.tag=1,i.payload=t,n!=null&&(i.callback=n),t=dt(e,i,l),t!==null&&(De(t,e,l,r),Pr(t,e,l))},enqueueForceUpdate:function(e,t){e=e._reactInternals;var n=de(),r=mt(e),l=Ge(n,r);l.tag=2,t!=null&&(l.callback=t),t=dt(e,l,r),t!==null&&(De(t,e,r,n),Pr(t,e,r))}};function Tu(e,t,n,r,l,i,o){return e=e.stateNode,typeof e.shouldComponentUpdate=="function"?e.shouldComponentUpdate(r,i,o):t.prototype&&t.prototype.isPureReactComponent?!Wn(n,r)||!Wn(l,i):!0}function ha(e,t,n){var r=!1,l=vt,i=t.contextType;return typeof 
i=="object"&&i!==null?i=Pe(i):(l=ge(t)?Pt:ce.current,r=t.contextTypes,i=(r=r!=null)?rn(e,l):vt),t=new t(n,i),e.memoizedState=t.state!==null&&t.state!==void 0?t.state:null,t.updater=ml,e.stateNode=t,t._reactInternals=e,r&&(e=e.stateNode,e.__reactInternalMemoizedUnmaskedChildContext=l,e.__reactInternalMemoizedMaskedChildContext=i),t}function Ou(e,t,n,r){e=t.state,typeof t.componentWillReceiveProps=="function"&&t.componentWillReceiveProps(n,r),typeof t.UNSAFE_componentWillReceiveProps=="function"&&t.UNSAFE_componentWillReceiveProps(n,r),t.state!==e&&ml.enqueueReplaceState(t,t.state,null)}function Pi(e,t,n,r){var l=e.stateNode;l.props=n,l.state=e.memoizedState,l.refs=ma,wo(e);var i=t.contextType;typeof i=="object"&&i!==null?l.context=Pe(i):(i=ge(t)?Pt:ce.current,l.context=rn(e,i)),l.state=e.memoizedState,i=t.getDerivedStateFromProps,typeof i=="function"&&(Oi(e,t,i,n),l.state=e.memoizedState),typeof t.getDerivedStateFromProps=="function"||typeof l.getSnapshotBeforeUpdate=="function"||typeof l.UNSAFE_componentWillMount!="function"&&typeof l.componentWillMount!="function"||(t=l.state,typeof l.componentWillMount=="function"&&l.componentWillMount(),typeof l.UNSAFE_componentWillMount=="function"&&l.UNSAFE_componentWillMount(),t!==l.state&&ml.enqueueReplaceState(l,l.state,null),Jr(e,n,l,r),l.state=e.memoizedState),typeof l.componentDidMount=="function"&&(e.flags|=4194308)}function kn(e,t,n){if(e=n.ref,e!==null&&typeof e!="function"&&typeof e!="object"){if(n._owner){if(n=n._owner,n){if(n.tag!==1)throw Error(x(309));var r=n.stateNode}if(!r)throw Error(x(147,e));var l=r,i=""+e;return t!==null&&t.ref!==null&&typeof t.ref=="function"&&t.ref._stringRef===i?t.ref:(t=function(o){var u=l.refs;u===ma&&(u=l.refs={}),o===null?delete u[i]:u[i]=o},t._stringRef=i,t)}if(typeof e!="string")throw Error(x(284));if(!n._owner)throw Error(x(290,e))}return e}function Sr(e,t){throw e=Object.prototype.toString.call(t),Error(x(31,e==="[object Object]"?"object with keys {"+Object.keys(t).join(", ")+"}":e))}function Pu(e){var t=e._init;return t(e._payload)}function ya(e){function t(p,d){if(e){var y=p.deletions;y===null?(p.deletions=[d],p.flags|=16):y.push(d)}}function n(p,d){if(!e)return null;for(;d!==null;)t(p,d),d=d.sibling;return null}function r(p,d){for(p=new Map;d!==null;)d.key!==null?p.set(d.key,d):p.set(d.index,d),d=d.sibling;return p}function l(p,d){return p=ht(p,d),p.index=0,p.sibling=null,p}function i(p,d,y){return p.index=y,e?(y=p.alternate,y!==null?(y=y.index,y<d?(p.flags|=2,d):y):(p.flags|=2,d)):(p.flags|=1048576,d)}function o(p){return e&&p.alternate===null&&(p.flags|=2),p}function u(p,d,y,S){return d===null||d.tag!==6?(d=Zl(y,p.mode,S),d.return=p,d):(d=l(d,y),d.return=p,d)}function s(p,d,y,S){var C=y.type;return C===$t?h(p,d,y.props.children,S,y.key):d!==null&&(d.elementType===C||typeof C=="object"&&C!==null&&C.$$typeof===nt&&Pu(C)===d.type)?(S=l(d,y.props),S.ref=kn(p,d,y),S.return=p,S):(S=Ar(y.type,y.key,y.props,null,p.mode,S),S.ref=kn(p,d,y),S.return=p,S)}function c(p,d,y,S){return d===null||d.tag!==4||d.stateNode.containerInfo!==y.containerInfo||d.stateNode.implementation!==y.implementation?(d=ql(y,p.mode,S),d.return=p,d):(d=l(d,y.children||[]),d.return=p,d)}function h(p,d,y,S,C){return d===null||d.tag!==7?(d=Ot(y,p.mode,S,C),d.return=p,d):(d=l(d,y),d.return=p,d)}function f(p,d,y){if(typeof d=="string"&&d!==""||typeof d=="number")return d=Zl(""+d,p.mode,y),d.return=p,d;if(typeof d=="object"&&d!==null){switch(d.$$typeof){case ar:return 
y=Ar(d.type,d.key,d.props,null,p.mode,y),y.ref=kn(p,null,d),y.return=p,y;case Dt:return d=ql(d,p.mode,y),d.return=p,d;case nt:var S=d._init;return f(p,S(d._payload),y)}if(_n(d)||vn(d))return d=Ot(d,p.mode,y,null),d.return=p,d;Sr(p,d)}return null}function v(p,d,y,S){var C=d!==null?d.key:null;if(typeof y=="string"&&y!==""||typeof y=="number")return C!==null?null:u(p,d,""+y,S);if(typeof y=="object"&&y!==null){switch(y.$$typeof){case ar:return y.key===C?s(p,d,y,S):null;case Dt:return y.key===C?c(p,d,y,S):null;case nt:return C=y._init,v(p,d,C(y._payload),S)}if(_n(y)||vn(y))return C!==null?null:h(p,d,y,S,null);Sr(p,y)}return null}function g(p,d,y,S,C){if(typeof S=="string"&&S!==""||typeof S=="number")return p=p.get(y)||null,u(d,p,""+S,C);if(typeof S=="object"&&S!==null){switch(S.$$typeof){case ar:return p=p.get(S.key===null?y:S.key)||null,s(d,p,S,C);case Dt:return p=p.get(S.key===null?y:S.key)||null,c(d,p,S,C);case nt:var _=S._init;return g(p,d,y,_(S._payload),C)}if(_n(S)||vn(S))return p=p.get(y)||null,h(d,p,S,C,null);Sr(d,S)}return null}function w(p,d,y,S){for(var C=null,_=null,N=d,T=d=0,Y=null;N!==null&&T<y.length;T++){N.index>T?(Y=N,N=null):Y=N.sibling;var F=v(p,N,y[T],S);if(F===null){N===null&&(N=Y);break}e&&N&&F.alternate===null&&t(p,N),d=i(F,d,T),_===null?C=F:_.sibling=F,_=F,N=Y}if(T===y.length)return n(p,N),V&&Et(p,T),C;if(N===null){for(;T<y.length;T++)N=f(p,y[T],S),N!==null&&(d=i(N,d,T),_===null?C=N:_.sibling=N,_=N);return V&&Et(p,T),C}for(N=r(p,N);T<y.length;T++)Y=g(N,p,T,y[T],S),Y!==null&&(e&&Y.alternate!==null&&N.delete(Y.key===null?T:Y.key),d=i(Y,d,T),_===null?C=Y:_.sibling=Y,_=Y);return e&&N.forEach(function(Le){return t(p,Le)}),V&&Et(p,T),C}function k(p,d,y,S){var C=vn(y);if(typeof C!="function")throw Error(x(150));if(y=C.call(y),y==null)throw Error(x(151));for(var _=C=null,N=d,T=d=0,Y=null,F=y.next();N!==null&&!F.done;T++,F=y.next()){N.index>T?(Y=N,N=null):Y=N.sibling;var Le=v(p,N,F.value,S);if(Le===null){N===null&&(N=Y);break}e&&N&&Le.alternate===null&&t(p,N),d=i(Le,d,T),_===null?C=Le:_.sibling=Le,_=Le,N=Y}if(F.done)return n(p,N),V&&Et(p,T),C;if(N===null){for(;!F.done;T++,F=y.next())F=f(p,F.value,S),F!==null&&(d=i(F,d,T),_===null?C=F:_.sibling=F,_=F);return V&&Et(p,T),C}for(N=r(p,N);!F.done;T++,F=y.next())F=g(N,p,T,F.value,S),F!==null&&(e&&F.alternate!==null&&N.delete(F.key===null?T:F.key),d=i(F,d,T),_===null?C=F:_.sibling=F,_=F);return e&&N.forEach(function(mn){return t(p,mn)}),V&&Et(p,T),C}function M(p,d,y,S){if(typeof y=="object"&&y!==null&&y.type===$t&&y.key===null&&(y=y.props.children),typeof y=="object"&&y!==null){switch(y.$$typeof){case ar:e:{for(var C=y.key,_=d;_!==null;){if(_.key===C){if(C=y.type,C===$t){if(_.tag===7){n(p,_.sibling),d=l(_,y.props.children),d.return=p,p=d;break e}}else if(_.elementType===C||typeof C=="object"&&C!==null&&C.$$typeof===nt&&Pu(C)===_.type){n(p,_.sibling),d=l(_,y.props),d.ref=kn(p,_,y),d.return=p,p=d;break e}n(p,_);break}else t(p,_);_=_.sibling}y.type===$t?(d=Ot(y.props.children,p.mode,S,y.key),d.return=p,p=d):(S=Ar(y.type,y.key,y.props,null,p.mode,S),S.ref=kn(p,d,y),S.return=p,p=S)}return o(p);case Dt:e:{for(_=y.key;d!==null;){if(d.key===_)if(d.tag===4&&d.stateNode.containerInfo===y.containerInfo&&d.stateNode.implementation===y.implementation){n(p,d.sibling),d=l(d,y.children||[]),d.return=p,p=d;break e}else{n(p,d);break}else t(p,d);d=d.sibling}d=ql(y,p.mode,S),d.return=p,p=d}return o(p);case nt:return _=y._init,M(p,d,_(y._payload),S)}if(_n(y))return w(p,d,y,S);if(vn(y))return k(p,d,y,S);Sr(p,y)}return typeof y=="string"&&y!==""||typeof 
y=="number"?(y=""+y,d!==null&&d.tag===6?(n(p,d.sibling),d=l(d,y),d.return=p,p=d):(n(p,d),d=Zl(y,p.mode,S),d.return=p,p=d),o(p)):n(p,d)}return M}var on=ya(!0),va=ya(!1),ir={},He=wt(ir),Gn=wt(ir),Zn=wt(ir);function Nt(e){if(e===ir)throw Error(x(174));return e}function So(e,t){switch(D(Zn,t),D(Gn,e),D(He,ir),e=t.nodeType,e){case 9:case 11:t=(t=t.documentElement)?t.namespaceURI:si(null,"");break;default:e=e===8?t.parentNode:t,t=e.namespaceURI||null,e=e.tagName,t=si(t,e)}U(He),D(He,t)}function un(){U(He),U(Gn),U(Zn)}function ga(e){Nt(Zn.current);var t=Nt(He.current),n=si(t,e.type);t!==n&&(D(Gn,e),D(He,n))}function xo(e){Gn.current===e&&(U(He),U(Gn))}var B=wt(0);function br(e){for(var t=e;t!==null;){if(t.tag===13){var n=t.memoizedState;if(n!==null&&(n=n.dehydrated,n===null||n.data==="$?"||n.data==="$!"))return t}else if(t.tag===19&&t.memoizedProps.revealOrder!==void 0){if(t.flags&128)return t}else if(t.child!==null){t.child.return=t,t=t.child;continue}if(t===e)break;for(;t.sibling===null;){if(t.return===null||t.return===e)return null;t=t.return}t.sibling.return=t.return,t=t.sibling}return null}var Hl=[];function ko(){for(var e=0;e<Hl.length;e++)Hl[e]._workInProgressVersionPrimary=null;Hl.length=0}var zr=et.ReactCurrentDispatcher,Wl=et.ReactCurrentBatchConfig,Lt=0,Q=null,J=null,ne=null,el=!1,Fn=!1,qn=0,Id=0;function ue(){throw Error(x(321))}function Eo(e,t){if(t===null)return!1;for(var n=0;n<t.length&&n<e.length;n++)if(!$e(e[n],t[n]))return!1;return!0}function Co(e,t,n,r,l,i){if(Lt=i,Q=t,t.memoizedState=null,t.updateQueue=null,t.lanes=0,zr.current=e===null||e.memoizedState===null?Md:Dd,e=n(r,l),Fn){i=0;do{if(Fn=!1,qn=0,25<=i)throw Error(x(301));i+=1,ne=J=null,t.updateQueue=null,zr.current=$d,e=n(r,l)}while(Fn)}if(zr.current=tl,t=J!==null&&J.next!==null,Lt=0,ne=J=Q=null,el=!1,t)throw Error(x(300));return e}function jo(){var e=qn!==0;return qn=0,e}function Ve(){var e={memoizedState:null,baseState:null,baseQueue:null,queue:null,next:null};return ne===null?Q.memoizedState=ne=e:ne=ne.next=e,ne}function ze(){if(J===null){var e=Q.alternate;e=e!==null?e.memoizedState:null}else e=J.next;var t=ne===null?Q.memoizedState:ne.next;if(t!==null)ne=t,J=e;else{if(e===null)throw Error(x(310));J=e,e={memoizedState:J.memoizedState,baseState:J.baseState,baseQueue:J.baseQueue,queue:J.queue,next:null},ne===null?Q.memoizedState=ne=e:ne=ne.next=e}return ne}function Jn(e,t){return typeof t=="function"?t(e):t}function Kl(e){var t=ze(),n=t.queue;if(n===null)throw Error(x(311));n.lastRenderedReducer=e;var r=J,l=r.baseQueue,i=n.pending;if(i!==null){if(l!==null){var o=l.next;l.next=i.next,i.next=o}r.baseQueue=l=i,n.pending=null}if(l!==null){i=l.next,r=r.baseState;var u=o=null,s=null,c=i;do{var h=c.lane;if((Lt&h)===h)s!==null&&(s=s.next={lane:0,action:c.action,hasEagerState:c.hasEagerState,eagerState:c.eagerState,next:null}),r=c.hasEagerState?c.eagerState:e(r,c.action);else{var f={lane:h,action:c.action,hasEagerState:c.hasEagerState,eagerState:c.eagerState,next:null};s===null?(u=s=f,o=r):s=s.next=f,Q.lanes|=h,It|=h}c=c.next}while(c!==null&&c!==i);s===null?o=r:s.next=u,$e(r,t.memoizedState)||(ye=!0),t.memoizedState=r,t.baseState=o,t.baseQueue=s,n.lastRenderedState=r}if(e=n.interleaved,e!==null){l=e;do i=l.lane,Q.lanes|=i,It|=i,l=l.next;while(l!==e)}else l===null&&(n.lanes=0);return[t.memoizedState,n.dispatch]}function Yl(e){var t=ze(),n=t.queue;if(n===null)throw Error(x(311));n.lastRenderedReducer=e;var r=n.dispatch,l=n.pending,i=t.memoizedState;if(l!==null){n.pending=null;var o=l=l.next;do 
i=e(i,o.action),o=o.next;while(o!==l);$e(i,t.memoizedState)||(ye=!0),t.memoizedState=i,t.baseQueue===null&&(t.baseState=i),n.lastRenderedState=i}return[i,r]}function wa(){}function Sa(e,t){var n=Q,r=ze(),l=t(),i=!$e(r.memoizedState,l);if(i&&(r.memoizedState=l,ye=!0),r=r.queue,_o(Ea.bind(null,n,r,e),[e]),r.getSnapshot!==t||i||ne!==null&&ne.memoizedState.tag&1){if(n.flags|=2048,bn(9,ka.bind(null,n,r,l,t),void 0,null),re===null)throw Error(x(349));Lt&30||xa(n,t,l)}return l}function xa(e,t,n){e.flags|=16384,e={getSnapshot:t,value:n},t=Q.updateQueue,t===null?(t={lastEffect:null,stores:null},Q.updateQueue=t,t.stores=[e]):(n=t.stores,n===null?t.stores=[e]:n.push(e))}function ka(e,t,n,r){t.value=n,t.getSnapshot=r,Ca(t)&&ja(e)}function Ea(e,t,n){return n(function(){Ca(t)&&ja(e)})}function Ca(e){var t=e.getSnapshot;e=e.value;try{var n=t();return!$e(e,n)}catch{return!0}}function ja(e){var t=Je(e,1);t!==null&&De(t,e,1,-1)}function zu(e){var t=Ve();return typeof e=="function"&&(e=e()),t.memoizedState=t.baseState=e,e={pending:null,interleaved:null,lanes:0,dispatch:null,lastRenderedReducer:Jn,lastRenderedState:e},t.queue=e,e=e.dispatch=Ad.bind(null,Q,e),[t.memoizedState,e]}function bn(e,t,n,r){return e={tag:e,create:t,destroy:n,deps:r,next:null},t=Q.updateQueue,t===null?(t={lastEffect:null,stores:null},Q.updateQueue=t,t.lastEffect=e.next=e):(n=t.lastEffect,n===null?t.lastEffect=e.next=e:(r=n.next,n.next=e,e.next=r,t.lastEffect=e)),e}function _a(){return ze().memoizedState}function Lr(e,t,n,r){var l=Ve();Q.flags|=e,l.memoizedState=bn(1|t,n,void 0,r===void 0?null:r)}function hl(e,t,n,r){var l=ze();r=r===void 0?null:r;var i=void 0;if(J!==null){var o=J.memoizedState;if(i=o.destroy,r!==null&&Eo(r,o.deps)){l.memoizedState=bn(t,n,i,r);return}}Q.flags|=e,l.memoizedState=bn(1|t,n,i,r)}function Lu(e,t){return Lr(8390656,8,e,t)}function _o(e,t){return hl(2048,8,e,t)}function Na(e,t){return hl(4,2,e,t)}function Ta(e,t){return hl(4,4,e,t)}function Oa(e,t){if(typeof t=="function")return e=e(),t(e),function(){t(null)};if(t!=null)return e=e(),t.current=e,function(){t.current=null}}function Pa(e,t,n){return n=n!=null?n.concat([e]):null,hl(4,4,Oa.bind(null,t,e),n)}function No(){}function za(e,t){var n=ze();t=t===void 0?null:t;var r=n.memoizedState;return r!==null&&t!==null&&Eo(t,r[1])?r[0]:(n.memoizedState=[e,t],e)}function La(e,t){var n=ze();t=t===void 0?null:t;var r=n.memoizedState;return r!==null&&t!==null&&Eo(t,r[1])?r[0]:(e=e(),n.memoizedState=[e,t],e)}function Ia(e,t,n){return Lt&21?($e(n,t)||(n=As(),Q.lanes|=n,It|=n,e.baseState=!0),t):(e.baseState&&(e.baseState=!1,ye=!0),e.memoizedState=n)}function Fd(e,t){var n=A;A=n!==0&&4>n?n:4,e(!0);var r=Wl.transition;Wl.transition={};try{e(!1),t()}finally{A=n,Wl.transition=r}}function Fa(){return ze().memoizedState}function Rd(e,t,n){var r=mt(e);if(n={lane:r,action:n,hasEagerState:!1,eagerState:null,next:null},Ra(e))Aa(t,n);else if(n=da(e,t,n,r),n!==null){var l=de();De(n,e,r,l),Ma(n,t,r)}}function Ad(e,t,n){var r=mt(e),l={lane:r,action:n,hasEagerState:!1,eagerState:null,next:null};if(Ra(e))Aa(t,l);else{var i=e.alternate;if(e.lanes===0&&(i===null||i.lanes===0)&&(i=t.lastRenderedReducer,i!==null))try{var o=t.lastRenderedState,u=i(o,n);if(l.hasEagerState=!0,l.eagerState=u,$e(u,o)){var s=t.interleaved;s===null?(l.next=l,go(t)):(l.next=s.next,s.next=l),t.interleaved=l;return}}catch{}finally{}n=da(e,t,l,r),n!==null&&(l=de(),De(n,e,r,l),Ma(n,t,r))}}function Ra(e){var t=e.alternate;return e===Q||t!==null&&t===Q}function Aa(e,t){Fn=el=!0;var 
n=e.pending;n===null?t.next=t:(t.next=n.next,n.next=t),e.pending=t}function Ma(e,t,n){if(n&4194240){var r=t.lanes;r&=e.pendingLanes,n|=r,t.lanes=n,ro(e,n)}}var tl={readContext:Pe,useCallback:ue,useContext:ue,useEffect:ue,useImperativeHandle:ue,useInsertionEffect:ue,useLayoutEffect:ue,useMemo:ue,useReducer:ue,useRef:ue,useState:ue,useDebugValue:ue,useDeferredValue:ue,useTransition:ue,useMutableSource:ue,useSyncExternalStore:ue,useId:ue,unstable_isNewReconciler:!1},Md={readContext:Pe,useCallback:function(e,t){return Ve().memoizedState=[e,t===void 0?null:t],e},useContext:Pe,useEffect:Lu,useImperativeHandle:function(e,t,n){return n=n!=null?n.concat([e]):null,Lr(4194308,4,Oa.bind(null,t,e),n)},useLayoutEffect:function(e,t){return Lr(4194308,4,e,t)},useInsertionEffect:function(e,t){return Lr(4,2,e,t)},useMemo:function(e,t){var n=Ve();return t=t===void 0?null:t,e=e(),n.memoizedState=[e,t],e},useReducer:function(e,t,n){var r=Ve();return t=n!==void 0?n(t):t,r.memoizedState=r.baseState=t,e={pending:null,interleaved:null,lanes:0,dispatch:null,lastRenderedReducer:e,lastRenderedState:t},r.queue=e,e=e.dispatch=Rd.bind(null,Q,e),[r.memoizedState,e]},useRef:function(e){var t=Ve();return e={current:e},t.memoizedState=e},useState:zu,useDebugValue:No,useDeferredValue:function(e){return Ve().memoizedState=e},useTransition:function(){var e=zu(!1),t=e[0];return e=Fd.bind(null,e[1]),Ve().memoizedState=e,[t,e]},useMutableSource:function(){},useSyncExternalStore:function(e,t,n){var r=Q,l=Ve();if(V){if(n===void 0)throw Error(x(407));n=n()}else{if(n=t(),re===null)throw Error(x(349));Lt&30||xa(r,t,n)}l.memoizedState=n;var i={value:n,getSnapshot:t};return l.queue=i,Lu(Ea.bind(null,r,i,e),[e]),r.flags|=2048,bn(9,ka.bind(null,r,i,n,t),void 0,null),n},useId:function(){var e=Ve(),t=re.identifierPrefix;if(V){var n=Xe,r=Ye;n=(r&~(1<<32-Me(r)-1)).toString(32)+n,t=":"+t+"R"+n,n=qn++,0<n&&(t+="H"+n.toString(32)),t+=":"}else n=Id++,t=":"+t+"r"+n.toString(32)+":";return e.memoizedState=t},unstable_isNewReconciler:!1},Dd={readContext:Pe,useCallback:za,useContext:Pe,useEffect:_o,useImperativeHandle:Pa,useInsertionEffect:Na,useLayoutEffect:Ta,useMemo:La,useReducer:Kl,useRef:_a,useState:function(){return Kl(Jn)},useDebugValue:No,useDeferredValue:function(e){var t=ze();return Ia(t,J.memoizedState,e)},useTransition:function(){var e=Kl(Jn)[0],t=ze().memoizedState;return[e,t]},useMutableSource:wa,useSyncExternalStore:Sa,useId:Fa,unstable_isNewReconciler:!1},$d={readContext:Pe,useCallback:za,useContext:Pe,useEffect:_o,useImperativeHandle:Pa,useInsertionEffect:Na,useLayoutEffect:Ta,useMemo:La,useReducer:Yl,useRef:_a,useState:function(){return Yl(Jn)},useDebugValue:No,useDeferredValue:function(e){var t=ze();return J===null?t.memoizedState=e:Ia(t,J.memoizedState,e)},useTransition:function(){var e=Yl(Jn)[0],t=ze().memoizedState;return[e,t]},useMutableSource:wa,useSyncExternalStore:Sa,useId:Fa,unstable_isNewReconciler:!1};function sn(e,t){try{var n="",r=t;do n+=pf(r),r=r.return;while(r);var l=n}catch(i){l=` -Error generating stack: `+i.message+` -`+i.stack}return{value:e,source:t,stack:l,digest:null}}function Xl(e,t,n){return{value:e,source:null,stack:n??null,digest:t??null}}function zi(e,t){try{console.error(t.value)}catch(n){setTimeout(function(){throw n})}}var Ud=typeof WeakMap=="function"?WeakMap:Map;function Da(e,t,n){n=Ge(-1,n),n.tag=3,n.payload={element:null};var r=t.value;return n.callback=function(){rl||(rl=!0,Vi=r),zi(e,t)},n}function $a(e,t,n){n=Ge(-1,n),n.tag=3;var r=e.type.getDerivedStateFromError;if(typeof r=="function"){var 
l=t.value;n.payload=function(){return r(l)},n.callback=function(){zi(e,t)}}var i=e.stateNode;return i!==null&&typeof i.componentDidCatch=="function"&&(n.callback=function(){zi(e,t),typeof r!="function"&&(pt===null?pt=new Set([this]):pt.add(this));var o=t.stack;this.componentDidCatch(t.value,{componentStack:o!==null?o:""})}),n}function Iu(e,t,n){var r=e.pingCache;if(r===null){r=e.pingCache=new Ud;var l=new Set;r.set(t,l)}else l=r.get(t),l===void 0&&(l=new Set,r.set(t,l));l.has(n)||(l.add(n),e=ep.bind(null,e,t,n),t.then(e,e))}function Fu(e){do{var t;if((t=e.tag===13)&&(t=e.memoizedState,t=t!==null?t.dehydrated!==null:!0),t)return e;e=e.return}while(e!==null);return null}function Ru(e,t,n,r,l){return e.mode&1?(e.flags|=65536,e.lanes=l,e):(e===t?e.flags|=65536:(e.flags|=128,n.flags|=131072,n.flags&=-52805,n.tag===1&&(n.alternate===null?n.tag=17:(t=Ge(-1,1),t.tag=2,dt(n,t,1))),n.lanes|=1),e)}var Vd=et.ReactCurrentOwner,ye=!1;function fe(e,t,n,r){t.child=e===null?va(t,null,n,r):on(t,e.child,n,r)}function Au(e,t,n,r,l){n=n.render;var i=t.ref;return en(t,l),r=Co(e,t,n,r,i,l),n=jo(),e!==null&&!ye?(t.updateQueue=e.updateQueue,t.flags&=-2053,e.lanes&=~l,be(e,t,l)):(V&&n&&fo(t),t.flags|=1,fe(e,t,r,l),t.child)}function Mu(e,t,n,r,l){if(e===null){var i=n.type;return typeof i=="function"&&!Ro(i)&&i.defaultProps===void 0&&n.compare===null&&n.defaultProps===void 0?(t.tag=15,t.type=i,Ua(e,t,i,r,l)):(e=Ar(n.type,null,r,t,t.mode,l),e.ref=t.ref,e.return=t,t.child=e)}if(i=e.child,!(e.lanes&l)){var o=i.memoizedProps;if(n=n.compare,n=n!==null?n:Wn,n(o,r)&&e.ref===t.ref)return be(e,t,l)}return t.flags|=1,e=ht(i,r),e.ref=t.ref,e.return=t,t.child=e}function Ua(e,t,n,r,l){if(e!==null){var i=e.memoizedProps;if(Wn(i,r)&&e.ref===t.ref)if(ye=!1,t.pendingProps=r=i,(e.lanes&l)!==0)e.flags&131072&&(ye=!0);else return t.lanes=e.lanes,be(e,t,l)}return Li(e,t,n,r,l)}function Va(e,t,n){var r=t.pendingProps,l=r.children,i=e!==null?e.memoizedState:null;if(r.mode==="hidden")if(!(t.mode&1))t.memoizedState={baseLanes:0,cachePool:null,transitions:null},D(Gt,Se),Se|=n;else{if(!(n&1073741824))return e=i!==null?i.baseLanes|n:n,t.lanes=t.childLanes=1073741824,t.memoizedState={baseLanes:e,cachePool:null,transitions:null},t.updateQueue=null,D(Gt,Se),Se|=e,null;t.memoizedState={baseLanes:0,cachePool:null,transitions:null},r=i!==null?i.baseLanes:n,D(Gt,Se),Se|=r}else i!==null?(r=i.baseLanes|n,t.memoizedState=null):r=n,D(Gt,Se),Se|=r;return fe(e,t,l,n),t.child}function Ba(e,t){var n=t.ref;(e===null&&n!==null||e!==null&&e.ref!==n)&&(t.flags|=512,t.flags|=2097152)}function Li(e,t,n,r,l){var i=ge(n)?Pt:ce.current;return i=rn(t,i),en(t,l),n=Co(e,t,n,r,i,l),r=jo(),e!==null&&!ye?(t.updateQueue=e.updateQueue,t.flags&=-2053,e.lanes&=~l,be(e,t,l)):(V&&r&&fo(t),t.flags|=1,fe(e,t,n,l),t.child)}function Du(e,t,n,r,l){if(ge(n)){var i=!0;Yr(t)}else i=!1;if(en(t,l),t.stateNode===null)Ir(e,t),ha(t,n,r),Pi(t,n,r,l),r=!0;else if(e===null){var o=t.stateNode,u=t.memoizedProps;o.props=u;var s=o.context,c=n.contextType;typeof c=="object"&&c!==null?c=Pe(c):(c=ge(n)?Pt:ce.current,c=rn(t,c));var h=n.getDerivedStateFromProps,f=typeof h=="function"||typeof o.getSnapshotBeforeUpdate=="function";f||typeof o.UNSAFE_componentWillReceiveProps!="function"&&typeof o.componentWillReceiveProps!="function"||(u!==r||s!==c)&&Ou(t,o,r,c),rt=!1;var v=t.memoizedState;o.state=v,Jr(t,r,o,l),s=t.memoizedState,u!==r||v!==s||ve.current||rt?(typeof h=="function"&&(Oi(t,n,h,r),s=t.memoizedState),(u=rt||Tu(t,n,u,r,v,s,c))?(f||typeof o.UNSAFE_componentWillMount!="function"&&typeof 
o.componentWillMount!="function"||(typeof o.componentWillMount=="function"&&o.componentWillMount(),typeof o.UNSAFE_componentWillMount=="function"&&o.UNSAFE_componentWillMount()),typeof o.componentDidMount=="function"&&(t.flags|=4194308)):(typeof o.componentDidMount=="function"&&(t.flags|=4194308),t.memoizedProps=r,t.memoizedState=s),o.props=r,o.state=s,o.context=c,r=u):(typeof o.componentDidMount=="function"&&(t.flags|=4194308),r=!1)}else{o=t.stateNode,pa(e,t),u=t.memoizedProps,c=t.type===t.elementType?u:Fe(t.type,u),o.props=c,f=t.pendingProps,v=o.context,s=n.contextType,typeof s=="object"&&s!==null?s=Pe(s):(s=ge(n)?Pt:ce.current,s=rn(t,s));var g=n.getDerivedStateFromProps;(h=typeof g=="function"||typeof o.getSnapshotBeforeUpdate=="function")||typeof o.UNSAFE_componentWillReceiveProps!="function"&&typeof o.componentWillReceiveProps!="function"||(u!==f||v!==s)&&Ou(t,o,r,s),rt=!1,v=t.memoizedState,o.state=v,Jr(t,r,o,l);var w=t.memoizedState;u!==f||v!==w||ve.current||rt?(typeof g=="function"&&(Oi(t,n,g,r),w=t.memoizedState),(c=rt||Tu(t,n,c,r,v,w,s)||!1)?(h||typeof o.UNSAFE_componentWillUpdate!="function"&&typeof o.componentWillUpdate!="function"||(typeof o.componentWillUpdate=="function"&&o.componentWillUpdate(r,w,s),typeof o.UNSAFE_componentWillUpdate=="function"&&o.UNSAFE_componentWillUpdate(r,w,s)),typeof o.componentDidUpdate=="function"&&(t.flags|=4),typeof o.getSnapshotBeforeUpdate=="function"&&(t.flags|=1024)):(typeof o.componentDidUpdate!="function"||u===e.memoizedProps&&v===e.memoizedState||(t.flags|=4),typeof o.getSnapshotBeforeUpdate!="function"||u===e.memoizedProps&&v===e.memoizedState||(t.flags|=1024),t.memoizedProps=r,t.memoizedState=w),o.props=r,o.state=w,o.context=s,r=c):(typeof o.componentDidUpdate!="function"||u===e.memoizedProps&&v===e.memoizedState||(t.flags|=4),typeof o.getSnapshotBeforeUpdate!="function"||u===e.memoizedProps&&v===e.memoizedState||(t.flags|=1024),r=!1)}return Ii(e,t,n,r,i,l)}function Ii(e,t,n,r,l,i){Ba(e,t);var o=(t.flags&128)!==0;if(!r&&!o)return l&&Eu(t,n,!1),be(e,t,i);r=t.stateNode,Vd.current=t;var u=o&&typeof n.getDerivedStateFromError!="function"?null:r.render();return t.flags|=1,e!==null&&o?(t.child=on(t,e.child,null,i),t.child=on(t,null,u,i)):fe(e,t,u,i),t.memoizedState=r.state,l&&Eu(t,n,!0),t.child}function Qa(e){var t=e.stateNode;t.pendingContext?ku(e,t.pendingContext,t.pendingContext!==t.context):t.context&&ku(e,t.context,!1),So(e,t.containerInfo)}function $u(e,t,n,r,l){return ln(),mo(l),t.flags|=256,fe(e,t,n,r),t.child}var Fi={dehydrated:null,treeContext:null,retryLane:0};function Ri(e){return{baseLanes:e,cachePool:null,transitions:null}}function Ha(e,t,n){var r=t.pendingProps,l=B.current,i=!1,o=(t.flags&128)!==0,u;if((u=o)||(u=e!==null&&e.memoizedState===null?!1:(l&2)!==0),u?(i=!0,t.flags&=-129):(e===null||e.memoizedState!==null)&&(l|=1),D(B,l&1),e===null)return Ni(t),e=t.memoizedState,e!==null&&(e=e.dehydrated,e!==null)?(t.mode&1?e.data==="$!"?t.lanes=8:t.lanes=1073741824:t.lanes=1,null):(o=r.children,e=r.fallback,i?(r=t.mode,i=t.child,o={mode:"hidden",children:o},!(r&1)&&i!==null?(i.childLanes=0,i.pendingProps=o):i=gl(o,r,0,null),e=Ot(e,r,n,null),i.return=t,e.return=t,i.sibling=e,t.child=i,t.child.memoizedState=Ri(n),t.memoizedState=Fi,e):To(t,o));if(l=e.memoizedState,l!==null&&(u=l.dehydrated,u!==null))return Bd(e,t,o,r,u,l,n);if(i){i=r.fallback,o=t.mode,l=e.child,u=l.sibling;var 
s={mode:"hidden",children:r.children};return!(o&1)&&t.child!==l?(r=t.child,r.childLanes=0,r.pendingProps=s,t.deletions=null):(r=ht(l,s),r.subtreeFlags=l.subtreeFlags&14680064),u!==null?i=ht(u,i):(i=Ot(i,o,n,null),i.flags|=2),i.return=t,r.return=t,r.sibling=i,t.child=r,r=i,i=t.child,o=e.child.memoizedState,o=o===null?Ri(n):{baseLanes:o.baseLanes|n,cachePool:null,transitions:o.transitions},i.memoizedState=o,i.childLanes=e.childLanes&~n,t.memoizedState=Fi,r}return i=e.child,e=i.sibling,r=ht(i,{mode:"visible",children:r.children}),!(t.mode&1)&&(r.lanes=n),r.return=t,r.sibling=null,e!==null&&(n=t.deletions,n===null?(t.deletions=[e],t.flags|=16):n.push(e)),t.child=r,t.memoizedState=null,r}function To(e,t){return t=gl({mode:"visible",children:t},e.mode,0,null),t.return=e,e.child=t}function xr(e,t,n,r){return r!==null&&mo(r),on(t,e.child,null,n),e=To(t,t.pendingProps.children),e.flags|=2,t.memoizedState=null,e}function Bd(e,t,n,r,l,i,o){if(n)return t.flags&256?(t.flags&=-257,r=Xl(Error(x(422))),xr(e,t,o,r)):t.memoizedState!==null?(t.child=e.child,t.flags|=128,null):(i=r.fallback,l=t.mode,r=gl({mode:"visible",children:r.children},l,0,null),i=Ot(i,l,o,null),i.flags|=2,r.return=t,i.return=t,r.sibling=i,t.child=r,t.mode&1&&on(t,e.child,null,o),t.child.memoizedState=Ri(o),t.memoizedState=Fi,i);if(!(t.mode&1))return xr(e,t,o,null);if(l.data==="$!"){if(r=l.nextSibling&&l.nextSibling.dataset,r)var u=r.dgst;return r=u,i=Error(x(419)),r=Xl(i,r,void 0),xr(e,t,o,r)}if(u=(o&e.childLanes)!==0,ye||u){if(r=re,r!==null){switch(o&-o){case 4:l=2;break;case 16:l=8;break;case 64:case 128:case 256:case 512:case 1024:case 2048:case 4096:case 8192:case 16384:case 32768:case 65536:case 131072:case 262144:case 524288:case 1048576:case 2097152:case 4194304:case 8388608:case 16777216:case 33554432:case 67108864:l=32;break;case 536870912:l=268435456;break;default:l=0}l=l&(r.suspendedLanes|o)?0:l,l!==0&&l!==i.retryLane&&(i.retryLane=l,Je(e,l),De(r,e,l,-1))}return Fo(),r=Xl(Error(x(421))),xr(e,t,o,r)}return l.data==="$?"?(t.flags|=128,t.child=e.child,t=tp.bind(null,e),l._reactRetry=t,null):(e=i.treeContext,xe=ft(l.nextSibling),ke=t,V=!0,Ae=null,e!==null&&(_e[Ne++]=Ye,_e[Ne++]=Xe,_e[Ne++]=zt,Ye=e.id,Xe=e.overflow,zt=t),t=To(t,r.children),t.flags|=4096,t)}function Uu(e,t,n){e.lanes|=t;var r=e.alternate;r!==null&&(r.lanes|=t),Ti(e.return,t,n)}function Gl(e,t,n,r,l){var i=e.memoizedState;i===null?e.memoizedState={isBackwards:t,rendering:null,renderingStartTime:0,last:r,tail:n,tailMode:l}:(i.isBackwards=t,i.rendering=null,i.renderingStartTime=0,i.last=r,i.tail=n,i.tailMode=l)}function Wa(e,t,n){var r=t.pendingProps,l=r.revealOrder,i=r.tail;if(fe(e,t,r.children,n),r=B.current,r&2)r=r&1|2,t.flags|=128;else{if(e!==null&&e.flags&128)e:for(e=t.child;e!==null;){if(e.tag===13)e.memoizedState!==null&&Uu(e,n,t);else if(e.tag===19)Uu(e,n,t);else if(e.child!==null){e.child.return=e,e=e.child;continue}if(e===t)break e;for(;e.sibling===null;){if(e.return===null||e.return===t)break e;e=e.return}e.sibling.return=e.return,e=e.sibling}r&=1}if(D(B,r),!(t.mode&1))t.memoizedState=null;else switch(l){case"forwards":for(n=t.child,l=null;n!==null;)e=n.alternate,e!==null&&br(e)===null&&(l=n),n=n.sibling;n=l,n===null?(l=t.child,t.child=null):(l=n.sibling,n.sibling=null),Gl(t,!1,l,n,i);break;case"backwards":for(n=null,l=t.child,t.child=null;l!==null;){if(e=l.alternate,e!==null&&br(e)===null){t.child=l;break}e=l.sibling,l.sibling=n,n=l,l=e}Gl(t,!0,n,null,i);break;case"together":Gl(t,!1,null,null,void 0);break;default:t.memoizedState=null}return 
t.child}function Ir(e,t){!(t.mode&1)&&e!==null&&(e.alternate=null,t.alternate=null,t.flags|=2)}function be(e,t,n){if(e!==null&&(t.dependencies=e.dependencies),It|=t.lanes,!(n&t.childLanes))return null;if(e!==null&&t.child!==e.child)throw Error(x(153));if(t.child!==null){for(e=t.child,n=ht(e,e.pendingProps),t.child=n,n.return=t;e.sibling!==null;)e=e.sibling,n=n.sibling=ht(e,e.pendingProps),n.return=t;n.sibling=null}return t.child}function Qd(e,t,n){switch(t.tag){case 3:Qa(t),ln();break;case 5:ga(t);break;case 1:ge(t.type)&&Yr(t);break;case 4:So(t,t.stateNode.containerInfo);break;case 10:var r=t.type._context,l=t.memoizedProps.value;D(Zr,r._currentValue),r._currentValue=l;break;case 13:if(r=t.memoizedState,r!==null)return r.dehydrated!==null?(D(B,B.current&1),t.flags|=128,null):n&t.child.childLanes?Ha(e,t,n):(D(B,B.current&1),e=be(e,t,n),e!==null?e.sibling:null);D(B,B.current&1);break;case 19:if(r=(n&t.childLanes)!==0,e.flags&128){if(r)return Wa(e,t,n);t.flags|=128}if(l=t.memoizedState,l!==null&&(l.rendering=null,l.tail=null,l.lastEffect=null),D(B,B.current),r)break;return null;case 22:case 23:return t.lanes=0,Va(e,t,n)}return be(e,t,n)}var Ka,Ai,Ya,Xa;Ka=function(e,t){for(var n=t.child;n!==null;){if(n.tag===5||n.tag===6)e.appendChild(n.stateNode);else if(n.tag!==4&&n.child!==null){n.child.return=n,n=n.child;continue}if(n===t)break;for(;n.sibling===null;){if(n.return===null||n.return===t)return;n=n.return}n.sibling.return=n.return,n=n.sibling}};Ai=function(){};Ya=function(e,t,n,r){var l=e.memoizedProps;if(l!==r){e=t.stateNode,Nt(He.current);var i=null;switch(n){case"input":l=li(e,l),r=li(e,r),i=[];break;case"select":l=H({},l,{value:void 0}),r=H({},r,{value:void 0}),i=[];break;case"textarea":l=ui(e,l),r=ui(e,r),i=[];break;default:typeof l.onClick!="function"&&typeof r.onClick=="function"&&(e.onclick=Wr)}ai(n,r);var o;n=null;for(c in l)if(!r.hasOwnProperty(c)&&l.hasOwnProperty(c)&&l[c]!=null)if(c==="style"){var u=l[c];for(o in u)u.hasOwnProperty(o)&&(n||(n={}),n[o]="")}else c!=="dangerouslySetInnerHTML"&&c!=="children"&&c!=="suppressContentEditableWarning"&&c!=="suppressHydrationWarning"&&c!=="autoFocus"&&(Dn.hasOwnProperty(c)?i||(i=[]):(i=i||[]).push(c,null));for(c in r){var s=r[c];if(u=l!=null?l[c]:void 0,r.hasOwnProperty(c)&&s!==u&&(s!=null||u!=null))if(c==="style")if(u){for(o in u)!u.hasOwnProperty(o)||s&&s.hasOwnProperty(o)||(n||(n={}),n[o]="");for(o in s)s.hasOwnProperty(o)&&u[o]!==s[o]&&(n||(n={}),n[o]=s[o])}else n||(i||(i=[]),i.push(c,n)),n=s;else c==="dangerouslySetInnerHTML"?(s=s?s.__html:void 0,u=u?u.__html:void 0,s!=null&&u!==s&&(i=i||[]).push(c,s)):c==="children"?typeof s!="string"&&typeof s!="number"||(i=i||[]).push(c,""+s):c!=="suppressContentEditableWarning"&&c!=="suppressHydrationWarning"&&(Dn.hasOwnProperty(c)?(s!=null&&c==="onScroll"&&$("scroll",e),i||u===s||(i=[])):(i=i||[]).push(c,s))}n&&(i=i||[]).push("style",n);var c=i;(t.updateQueue=c)&&(t.flags|=4)}};Xa=function(e,t,n,r){n!==r&&(t.flags|=4)};function En(e,t){if(!V)switch(e.tailMode){case"hidden":t=e.tail;for(var n=null;t!==null;)t.alternate!==null&&(n=t),t=t.sibling;n===null?e.tail=null:n.sibling=null;break;case"collapsed":n=e.tail;for(var r=null;n!==null;)n.alternate!==null&&(r=n),n=n.sibling;r===null?t||e.tail===null?e.tail=null:e.tail.sibling=null:r.sibling=null}}function se(e){var t=e.alternate!==null&&e.alternate.child===e.child,n=0,r=0;if(t)for(var l=e.child;l!==null;)n|=l.lanes|l.childLanes,r|=l.subtreeFlags&14680064,r|=l.flags&14680064,l.return=e,l=l.sibling;else 
for(l=e.child;l!==null;)n|=l.lanes|l.childLanes,r|=l.subtreeFlags,r|=l.flags,l.return=e,l=l.sibling;return e.subtreeFlags|=r,e.childLanes=n,t}function Hd(e,t,n){var r=t.pendingProps;switch(po(t),t.tag){case 2:case 16:case 15:case 0:case 11:case 7:case 8:case 12:case 9:case 14:return se(t),null;case 1:return ge(t.type)&&Kr(),se(t),null;case 3:return r=t.stateNode,un(),U(ve),U(ce),ko(),r.pendingContext&&(r.context=r.pendingContext,r.pendingContext=null),(e===null||e.child===null)&&(wr(t)?t.flags|=4:e===null||e.memoizedState.isDehydrated&&!(t.flags&256)||(t.flags|=1024,Ae!==null&&(Hi(Ae),Ae=null))),Ai(e,t),se(t),null;case 5:xo(t);var l=Nt(Zn.current);if(n=t.type,e!==null&&t.stateNode!=null)Ya(e,t,n,r,l),e.ref!==t.ref&&(t.flags|=512,t.flags|=2097152);else{if(!r){if(t.stateNode===null)throw Error(x(166));return se(t),null}if(e=Nt(He.current),wr(t)){r=t.stateNode,n=t.type;var i=t.memoizedProps;switch(r[Be]=t,r[Xn]=i,e=(t.mode&1)!==0,n){case"dialog":$("cancel",r),$("close",r);break;case"iframe":case"object":case"embed":$("load",r);break;case"video":case"audio":for(l=0;l<Tn.length;l++)$(Tn[l],r);break;case"source":$("error",r);break;case"img":case"image":case"link":$("error",r),$("load",r);break;case"details":$("toggle",r);break;case"input":Go(r,i),$("invalid",r);break;case"select":r._wrapperState={wasMultiple:!!i.multiple},$("invalid",r);break;case"textarea":qo(r,i),$("invalid",r)}ai(n,i),l=null;for(var o in i)if(i.hasOwnProperty(o)){var u=i[o];o==="children"?typeof u=="string"?r.textContent!==u&&(i.suppressHydrationWarning!==!0&&gr(r.textContent,u,e),l=["children",u]):typeof u=="number"&&r.textContent!==""+u&&(i.suppressHydrationWarning!==!0&&gr(r.textContent,u,e),l=["children",""+u]):Dn.hasOwnProperty(o)&&u!=null&&o==="onScroll"&&$("scroll",r)}switch(n){case"input":cr(r),Zo(r,i,!0);break;case"textarea":cr(r),Jo(r);break;case"select":case"option":break;default:typeof i.onClick=="function"&&(r.onclick=Wr)}r=l,t.updateQueue=r,r!==null&&(t.flags|=4)}else{o=l.nodeType===9?l:l.ownerDocument,e==="http://www.w3.org/1999/xhtml"&&(e=xs(n)),e==="http://www.w3.org/1999/xhtml"?n==="script"?(e=o.createElement("div"),e.innerHTML="<script><\/script>",e=e.removeChild(e.firstChild)):typeof r.is=="string"?e=o.createElement(n,{is:r.is}):(e=o.createElement(n),n==="select"&&(o=e,r.multiple?o.multiple=!0:r.size&&(o.size=r.size))):e=o.createElementNS(e,n),e[Be]=t,e[Xn]=r,Ka(e,t,!1,!1),t.stateNode=e;e:{switch(o=ci(n,r),n){case"dialog":$("cancel",e),$("close",e),l=r;break;case"iframe":case"object":case"embed":$("load",e),l=r;break;case"video":case"audio":for(l=0;l<Tn.length;l++)$(Tn[l],e);l=r;break;case"source":$("error",e),l=r;break;case"img":case"image":case"link":$("error",e),$("load",e),l=r;break;case"details":$("toggle",e),l=r;break;case"input":Go(e,r),l=li(e,r),$("invalid",e);break;case"option":l=r;break;case"select":e._wrapperState={wasMultiple:!!r.multiple},l=H({},r,{value:void 0}),$("invalid",e);break;case"textarea":qo(e,r),l=ui(e,r),$("invalid",e);break;default:l=r}ai(n,l),u=l;for(i in u)if(u.hasOwnProperty(i)){var s=u[i];i==="style"?Cs(e,s):i==="dangerouslySetInnerHTML"?(s=s?s.__html:void 0,s!=null&&ks(e,s)):i==="children"?typeof s=="string"?(n!=="textarea"||s!=="")&&$n(e,s):typeof 
s=="number"&&$n(e,""+s):i!=="suppressContentEditableWarning"&&i!=="suppressHydrationWarning"&&i!=="autoFocus"&&(Dn.hasOwnProperty(i)?s!=null&&i==="onScroll"&&$("scroll",e):s!=null&&qi(e,i,s,o))}switch(n){case"input":cr(e),Zo(e,r,!1);break;case"textarea":cr(e),Jo(e);break;case"option":r.value!=null&&e.setAttribute("value",""+yt(r.value));break;case"select":e.multiple=!!r.multiple,i=r.value,i!=null?Zt(e,!!r.multiple,i,!1):r.defaultValue!=null&&Zt(e,!!r.multiple,r.defaultValue,!0);break;default:typeof l.onClick=="function"&&(e.onclick=Wr)}switch(n){case"button":case"input":case"select":case"textarea":r=!!r.autoFocus;break e;case"img":r=!0;break e;default:r=!1}}r&&(t.flags|=4)}t.ref!==null&&(t.flags|=512,t.flags|=2097152)}return se(t),null;case 6:if(e&&t.stateNode!=null)Xa(e,t,e.memoizedProps,r);else{if(typeof r!="string"&&t.stateNode===null)throw Error(x(166));if(n=Nt(Zn.current),Nt(He.current),wr(t)){if(r=t.stateNode,n=t.memoizedProps,r[Be]=t,(i=r.nodeValue!==n)&&(e=ke,e!==null))switch(e.tag){case 3:gr(r.nodeValue,n,(e.mode&1)!==0);break;case 5:e.memoizedProps.suppressHydrationWarning!==!0&&gr(r.nodeValue,n,(e.mode&1)!==0)}i&&(t.flags|=4)}else r=(n.nodeType===9?n:n.ownerDocument).createTextNode(r),r[Be]=t,t.stateNode=r}return se(t),null;case 13:if(U(B),r=t.memoizedState,e===null||e.memoizedState!==null&&e.memoizedState.dehydrated!==null){if(V&&xe!==null&&t.mode&1&&!(t.flags&128))fa(),ln(),t.flags|=98560,i=!1;else if(i=wr(t),r!==null&&r.dehydrated!==null){if(e===null){if(!i)throw Error(x(318));if(i=t.memoizedState,i=i!==null?i.dehydrated:null,!i)throw Error(x(317));i[Be]=t}else ln(),!(t.flags&128)&&(t.memoizedState=null),t.flags|=4;se(t),i=!1}else Ae!==null&&(Hi(Ae),Ae=null),i=!0;if(!i)return t.flags&65536?t:null}return t.flags&128?(t.lanes=n,t):(r=r!==null,r!==(e!==null&&e.memoizedState!==null)&&r&&(t.child.flags|=8192,t.mode&1&&(e===null||B.current&1?b===0&&(b=3):Fo())),t.updateQueue!==null&&(t.flags|=4),se(t),null);case 4:return un(),Ai(e,t),e===null&&Kn(t.stateNode.containerInfo),se(t),null;case 10:return vo(t.type._context),se(t),null;case 17:return ge(t.type)&&Kr(),se(t),null;case 19:if(U(B),i=t.memoizedState,i===null)return se(t),null;if(r=(t.flags&128)!==0,o=i.rendering,o===null)if(r)En(i,!1);else{if(b!==0||e!==null&&e.flags&128)for(e=t.child;e!==null;){if(o=br(e),o!==null){for(t.flags|=128,En(i,!1),r=o.updateQueue,r!==null&&(t.updateQueue=r,t.flags|=4),t.subtreeFlags=0,r=n,n=t.child;n!==null;)i=n,e=r,i.flags&=14680066,o=i.alternate,o===null?(i.childLanes=0,i.lanes=e,i.child=null,i.subtreeFlags=0,i.memoizedProps=null,i.memoizedState=null,i.updateQueue=null,i.dependencies=null,i.stateNode=null):(i.childLanes=o.childLanes,i.lanes=o.lanes,i.child=o.child,i.subtreeFlags=0,i.deletions=null,i.memoizedProps=o.memoizedProps,i.memoizedState=o.memoizedState,i.updateQueue=o.updateQueue,i.type=o.type,e=o.dependencies,i.dependencies=e===null?null:{lanes:e.lanes,firstContext:e.firstContext}),n=n.sibling;return D(B,B.current&1|2),t.child}e=e.sibling}i.tail!==null&&G()>an&&(t.flags|=128,r=!0,En(i,!1),t.lanes=4194304)}else{if(!r)if(e=br(o),e!==null){if(t.flags|=128,r=!0,n=e.updateQueue,n!==null&&(t.updateQueue=n,t.flags|=4),En(i,!0),i.tail===null&&i.tailMode==="hidden"&&!o.alternate&&!V)return se(t),null}else 2*G()-i.renderingStartTime>an&&n!==1073741824&&(t.flags|=128,r=!0,En(i,!1),t.lanes=4194304);i.isBackwards?(o.sibling=t.child,t.child=o):(n=i.last,n!==null?n.sibling=o:t.child=o,i.last=o)}return 
i.tail!==null?(t=i.tail,i.rendering=t,i.tail=t.sibling,i.renderingStartTime=G(),t.sibling=null,n=B.current,D(B,r?n&1|2:n&1),t):(se(t),null);case 22:case 23:return Io(),r=t.memoizedState!==null,e!==null&&e.memoizedState!==null!==r&&(t.flags|=8192),r&&t.mode&1?Se&1073741824&&(se(t),t.subtreeFlags&6&&(t.flags|=8192)):se(t),null;case 24:return null;case 25:return null}throw Error(x(156,t.tag))}function Wd(e,t){switch(po(t),t.tag){case 1:return ge(t.type)&&Kr(),e=t.flags,e&65536?(t.flags=e&-65537|128,t):null;case 3:return un(),U(ve),U(ce),ko(),e=t.flags,e&65536&&!(e&128)?(t.flags=e&-65537|128,t):null;case 5:return xo(t),null;case 13:if(U(B),e=t.memoizedState,e!==null&&e.dehydrated!==null){if(t.alternate===null)throw Error(x(340));ln()}return e=t.flags,e&65536?(t.flags=e&-65537|128,t):null;case 19:return U(B),null;case 4:return un(),null;case 10:return vo(t.type._context),null;case 22:case 23:return Io(),null;case 24:return null;default:return null}}var kr=!1,ae=!1,Kd=typeof WeakSet=="function"?WeakSet:Set,E=null;function Xt(e,t){var n=e.ref;if(n!==null)if(typeof n=="function")try{n(null)}catch(r){K(e,t,r)}else n.current=null}function Mi(e,t,n){try{n()}catch(r){K(e,t,r)}}var Vu=!1;function Yd(e,t){if(Si=Br,e=Js(),co(e)){if("selectionStart"in e)var n={start:e.selectionStart,end:e.selectionEnd};else e:{n=(n=e.ownerDocument)&&n.defaultView||window;var r=n.getSelection&&n.getSelection();if(r&&r.rangeCount!==0){n=r.anchorNode;var l=r.anchorOffset,i=r.focusNode;r=r.focusOffset;try{n.nodeType,i.nodeType}catch{n=null;break e}var o=0,u=-1,s=-1,c=0,h=0,f=e,v=null;t:for(;;){for(var g;f!==n||l!==0&&f.nodeType!==3||(u=o+l),f!==i||r!==0&&f.nodeType!==3||(s=o+r),f.nodeType===3&&(o+=f.nodeValue.length),(g=f.firstChild)!==null;)v=f,f=g;for(;;){if(f===e)break t;if(v===n&&++c===l&&(u=o),v===i&&++h===r&&(s=o),(g=f.nextSibling)!==null)break;f=v,v=f.parentNode}f=g}n=u===-1||s===-1?null:{start:u,end:s}}else n=null}n=n||{start:0,end:0}}else n=null;for(xi={focusedElem:e,selectionRange:n},Br=!1,E=t;E!==null;)if(t=E,e=t.child,(t.subtreeFlags&1028)!==0&&e!==null)e.return=t,E=e;else for(;E!==null;){t=E;try{var w=t.alternate;if(t.flags&1024)switch(t.tag){case 0:case 11:case 15:break;case 1:if(w!==null){var k=w.memoizedProps,M=w.memoizedState,p=t.stateNode,d=p.getSnapshotBeforeUpdate(t.elementType===t.type?k:Fe(t.type,k),M);p.__reactInternalSnapshotBeforeUpdate=d}break;case 3:var y=t.stateNode.containerInfo;y.nodeType===1?y.textContent="":y.nodeType===9&&y.documentElement&&y.removeChild(y.documentElement);break;case 5:case 6:case 4:case 17:break;default:throw Error(x(163))}}catch(S){K(t,t.return,S)}if(e=t.sibling,e!==null){e.return=t.return,E=e;break}E=t.return}return w=Vu,Vu=!1,w}function Rn(e,t,n){var r=t.updateQueue;if(r=r!==null?r.lastEffect:null,r!==null){var l=r=r.next;do{if((l.tag&e)===e){var i=l.destroy;l.destroy=void 0,i!==void 0&&Mi(t,n,i)}l=l.next}while(l!==r)}}function yl(e,t){if(t=t.updateQueue,t=t!==null?t.lastEffect:null,t!==null){var n=t=t.next;do{if((n.tag&e)===e){var r=n.create;n.destroy=r()}n=n.next}while(n!==t)}}function Di(e){var t=e.ref;if(t!==null){var n=e.stateNode;switch(e.tag){case 5:e=n;break;default:e=n}typeof t=="function"?t(e):t.current=e}}function Ga(e){var t=e.alternate;t!==null&&(e.alternate=null,Ga(t)),e.child=null,e.deletions=null,e.sibling=null,e.tag===5&&(t=e.stateNode,t!==null&&(delete t[Be],delete t[Xn],delete t[Ci],delete t[Od],delete 
t[Pd])),e.stateNode=null,e.return=null,e.dependencies=null,e.memoizedProps=null,e.memoizedState=null,e.pendingProps=null,e.stateNode=null,e.updateQueue=null}function Za(e){return e.tag===5||e.tag===3||e.tag===4}function Bu(e){e:for(;;){for(;e.sibling===null;){if(e.return===null||Za(e.return))return null;e=e.return}for(e.sibling.return=e.return,e=e.sibling;e.tag!==5&&e.tag!==6&&e.tag!==18;){if(e.flags&2||e.child===null||e.tag===4)continue e;e.child.return=e,e=e.child}if(!(e.flags&2))return e.stateNode}}function $i(e,t,n){var r=e.tag;if(r===5||r===6)e=e.stateNode,t?n.nodeType===8?n.parentNode.insertBefore(e,t):n.insertBefore(e,t):(n.nodeType===8?(t=n.parentNode,t.insertBefore(e,n)):(t=n,t.appendChild(e)),n=n._reactRootContainer,n!=null||t.onclick!==null||(t.onclick=Wr));else if(r!==4&&(e=e.child,e!==null))for($i(e,t,n),e=e.sibling;e!==null;)$i(e,t,n),e=e.sibling}function Ui(e,t,n){var r=e.tag;if(r===5||r===6)e=e.stateNode,t?n.insertBefore(e,t):n.appendChild(e);else if(r!==4&&(e=e.child,e!==null))for(Ui(e,t,n),e=e.sibling;e!==null;)Ui(e,t,n),e=e.sibling}var le=null,Re=!1;function tt(e,t,n){for(n=n.child;n!==null;)qa(e,t,n),n=n.sibling}function qa(e,t,n){if(Qe&&typeof Qe.onCommitFiberUnmount=="function")try{Qe.onCommitFiberUnmount(sl,n)}catch{}switch(n.tag){case 5:ae||Xt(n,t);case 6:var r=le,l=Re;le=null,tt(e,t,n),le=r,Re=l,le!==null&&(Re?(e=le,n=n.stateNode,e.nodeType===8?e.parentNode.removeChild(n):e.removeChild(n)):le.removeChild(n.stateNode));break;case 18:le!==null&&(Re?(e=le,n=n.stateNode,e.nodeType===8?Bl(e.parentNode,n):e.nodeType===1&&Bl(e,n),Qn(e)):Bl(le,n.stateNode));break;case 4:r=le,l=Re,le=n.stateNode.containerInfo,Re=!0,tt(e,t,n),le=r,Re=l;break;case 0:case 11:case 14:case 15:if(!ae&&(r=n.updateQueue,r!==null&&(r=r.lastEffect,r!==null))){l=r=r.next;do{var i=l,o=i.destroy;i=i.tag,o!==void 0&&(i&2||i&4)&&Mi(n,t,o),l=l.next}while(l!==r)}tt(e,t,n);break;case 1:if(!ae&&(Xt(n,t),r=n.stateNode,typeof r.componentWillUnmount=="function"))try{r.props=n.memoizedProps,r.state=n.memoizedState,r.componentWillUnmount()}catch(u){K(n,t,u)}tt(e,t,n);break;case 21:tt(e,t,n);break;case 22:n.mode&1?(ae=(r=ae)||n.memoizedState!==null,tt(e,t,n),ae=r):tt(e,t,n);break;default:tt(e,t,n)}}function Qu(e){var t=e.updateQueue;if(t!==null){e.updateQueue=null;var n=e.stateNode;n===null&&(n=e.stateNode=new Kd),t.forEach(function(r){var l=np.bind(null,e,r);n.has(r)||(n.add(r),r.then(l,l))})}}function Ie(e,t){var n=t.deletions;if(n!==null)for(var r=0;r<n.length;r++){var l=n[r];try{var i=e,o=t,u=o;e:for(;u!==null;){switch(u.tag){case 5:le=u.stateNode,Re=!1;break e;case 3:le=u.stateNode.containerInfo,Re=!0;break e;case 4:le=u.stateNode.containerInfo,Re=!0;break e}u=u.return}if(le===null)throw Error(x(160));qa(i,o,l),le=null,Re=!1;var s=l.alternate;s!==null&&(s.return=null),l.return=null}catch(c){K(l,t,c)}}if(t.subtreeFlags&12854)for(t=t.child;t!==null;)Ja(t,e),t=t.sibling}function Ja(e,t){var n=e.alternate,r=e.flags;switch(e.tag){case 0:case 11:case 14:case 15:if(Ie(t,e),Ue(e),r&4){try{Rn(3,e,e.return),yl(3,e)}catch(k){K(e,e.return,k)}try{Rn(5,e,e.return)}catch(k){K(e,e.return,k)}}break;case 1:Ie(t,e),Ue(e),r&512&&n!==null&&Xt(n,n.return);break;case 5:if(Ie(t,e),Ue(e),r&512&&n!==null&&Xt(n,n.return),e.flags&32){var l=e.stateNode;try{$n(l,"")}catch(k){K(e,e.return,k)}}if(r&4&&(l=e.stateNode,l!=null)){var i=e.memoizedProps,o=n!==null?n.memoizedProps:i,u=e.type,s=e.updateQueue;if(e.updateQueue=null,s!==null)try{u==="input"&&i.type==="radio"&&i.name!=null&&ws(l,i),ci(u,o);var c=ci(u,i);for(o=0;o<s.length;o+=2){var 
h=s[o],f=s[o+1];h==="style"?Cs(l,f):h==="dangerouslySetInnerHTML"?ks(l,f):h==="children"?$n(l,f):qi(l,h,f,c)}switch(u){case"input":ii(l,i);break;case"textarea":Ss(l,i);break;case"select":var v=l._wrapperState.wasMultiple;l._wrapperState.wasMultiple=!!i.multiple;var g=i.value;g!=null?Zt(l,!!i.multiple,g,!1):v!==!!i.multiple&&(i.defaultValue!=null?Zt(l,!!i.multiple,i.defaultValue,!0):Zt(l,!!i.multiple,i.multiple?[]:"",!1))}l[Xn]=i}catch(k){K(e,e.return,k)}}break;case 6:if(Ie(t,e),Ue(e),r&4){if(e.stateNode===null)throw Error(x(162));l=e.stateNode,i=e.memoizedProps;try{l.nodeValue=i}catch(k){K(e,e.return,k)}}break;case 3:if(Ie(t,e),Ue(e),r&4&&n!==null&&n.memoizedState.isDehydrated)try{Qn(t.containerInfo)}catch(k){K(e,e.return,k)}break;case 4:Ie(t,e),Ue(e);break;case 13:Ie(t,e),Ue(e),l=e.child,l.flags&8192&&(i=l.memoizedState!==null,l.stateNode.isHidden=i,!i||l.alternate!==null&&l.alternate.memoizedState!==null||(zo=G())),r&4&&Qu(e);break;case 22:if(h=n!==null&&n.memoizedState!==null,e.mode&1?(ae=(c=ae)||h,Ie(t,e),ae=c):Ie(t,e),Ue(e),r&8192){if(c=e.memoizedState!==null,(e.stateNode.isHidden=c)&&!h&&e.mode&1)for(E=e,h=e.child;h!==null;){for(f=E=h;E!==null;){switch(v=E,g=v.child,v.tag){case 0:case 11:case 14:case 15:Rn(4,v,v.return);break;case 1:Xt(v,v.return);var w=v.stateNode;if(typeof w.componentWillUnmount=="function"){r=v,n=v.return;try{t=r,w.props=t.memoizedProps,w.state=t.memoizedState,w.componentWillUnmount()}catch(k){K(r,n,k)}}break;case 5:Xt(v,v.return);break;case 22:if(v.memoizedState!==null){Wu(f);continue}}g!==null?(g.return=v,E=g):Wu(f)}h=h.sibling}e:for(h=null,f=e;;){if(f.tag===5){if(h===null){h=f;try{l=f.stateNode,c?(i=l.style,typeof i.setProperty=="function"?i.setProperty("display","none","important"):i.display="none"):(u=f.stateNode,s=f.memoizedProps.style,o=s!=null&&s.hasOwnProperty("display")?s.display:null,u.style.display=Es("display",o))}catch(k){K(e,e.return,k)}}}else if(f.tag===6){if(h===null)try{f.stateNode.nodeValue=c?"":f.memoizedProps}catch(k){K(e,e.return,k)}}else if((f.tag!==22&&f.tag!==23||f.memoizedState===null||f===e)&&f.child!==null){f.child.return=f,f=f.child;continue}if(f===e)break e;for(;f.sibling===null;){if(f.return===null||f.return===e)break e;h===f&&(h=null),f=f.return}h===f&&(h=null),f.sibling.return=f.return,f=f.sibling}}break;case 19:Ie(t,e),Ue(e),r&4&&Qu(e);break;case 21:break;default:Ie(t,e),Ue(e)}}function Ue(e){var t=e.flags;if(t&2){try{e:{for(var n=e.return;n!==null;){if(Za(n)){var r=n;break e}n=n.return}throw Error(x(160))}switch(r.tag){case 5:var l=r.stateNode;r.flags&32&&($n(l,""),r.flags&=-33);var i=Bu(e);Ui(e,i,l);break;case 3:case 4:var o=r.stateNode.containerInfo,u=Bu(e);$i(e,u,o);break;default:throw Error(x(161))}}catch(s){K(e,e.return,s)}e.flags&=-3}t&4096&&(e.flags&=-4097)}function Xd(e,t,n){E=e,ba(e)}function ba(e,t,n){for(var r=(e.mode&1)!==0;E!==null;){var l=E,i=l.child;if(l.tag===22&&r){var o=l.memoizedState!==null||kr;if(!o){var u=l.alternate,s=u!==null&&u.memoizedState!==null||ae;u=kr;var c=ae;if(kr=o,(ae=s)&&!c)for(E=l;E!==null;)o=E,s=o.child,o.tag===22&&o.memoizedState!==null?Ku(l):s!==null?(s.return=o,E=s):Ku(l);for(;i!==null;)E=i,ba(i),i=i.sibling;E=l,kr=u,ae=c}Hu(e)}else l.subtreeFlags&8772&&i!==null?(i.return=l,E=i):Hu(e)}}function Hu(e){for(;E!==null;){var t=E;if(t.flags&8772){var n=t.alternate;try{if(t.flags&8772)switch(t.tag){case 0:case 11:case 15:ae||yl(5,t);break;case 1:var r=t.stateNode;if(t.flags&4&&!ae)if(n===null)r.componentDidMount();else{var 
l=t.elementType===t.type?n.memoizedProps:Fe(t.type,n.memoizedProps);r.componentDidUpdate(l,n.memoizedState,r.__reactInternalSnapshotBeforeUpdate)}var i=t.updateQueue;i!==null&&Nu(t,i,r);break;case 3:var o=t.updateQueue;if(o!==null){if(n=null,t.child!==null)switch(t.child.tag){case 5:n=t.child.stateNode;break;case 1:n=t.child.stateNode}Nu(t,o,n)}break;case 5:var u=t.stateNode;if(n===null&&t.flags&4){n=u;var s=t.memoizedProps;switch(t.type){case"button":case"input":case"select":case"textarea":s.autoFocus&&n.focus();break;case"img":s.src&&(n.src=s.src)}}break;case 6:break;case 4:break;case 12:break;case 13:if(t.memoizedState===null){var c=t.alternate;if(c!==null){var h=c.memoizedState;if(h!==null){var f=h.dehydrated;f!==null&&Qn(f)}}}break;case 19:case 17:case 21:case 22:case 23:case 25:break;default:throw Error(x(163))}ae||t.flags&512&&Di(t)}catch(v){K(t,t.return,v)}}if(t===e){E=null;break}if(n=t.sibling,n!==null){n.return=t.return,E=n;break}E=t.return}}function Wu(e){for(;E!==null;){var t=E;if(t===e){E=null;break}var n=t.sibling;if(n!==null){n.return=t.return,E=n;break}E=t.return}}function Ku(e){for(;E!==null;){var t=E;try{switch(t.tag){case 0:case 11:case 15:var n=t.return;try{yl(4,t)}catch(s){K(t,n,s)}break;case 1:var r=t.stateNode;if(typeof r.componentDidMount=="function"){var l=t.return;try{r.componentDidMount()}catch(s){K(t,l,s)}}var i=t.return;try{Di(t)}catch(s){K(t,i,s)}break;case 5:var o=t.return;try{Di(t)}catch(s){K(t,o,s)}}}catch(s){K(t,t.return,s)}if(t===e){E=null;break}var u=t.sibling;if(u!==null){u.return=t.return,E=u;break}E=t.return}}var Gd=Math.ceil,nl=et.ReactCurrentDispatcher,Oo=et.ReactCurrentOwner,Oe=et.ReactCurrentBatchConfig,R=0,re=null,q=null,ie=0,Se=0,Gt=wt(0),b=0,er=null,It=0,vl=0,Po=0,An=null,he=null,zo=0,an=1/0,We=null,rl=!1,Vi=null,pt=null,Er=!1,ut=null,ll=0,Mn=0,Bi=null,Fr=-1,Rr=0;function de(){return R&6?G():Fr!==-1?Fr:Fr=G()}function mt(e){return e.mode&1?R&2&&ie!==0?ie&-ie:Ld.transition!==null?(Rr===0&&(Rr=As()),Rr):(e=A,e!==0||(e=window.event,e=e===void 0?16:Qs(e.type)),e):1}function De(e,t,n,r){if(50<Mn)throw Mn=0,Bi=null,Error(x(185));nr(e,n,r),(!(R&2)||e!==re)&&(e===re&&(!(R&2)&&(vl|=n),b===4&&it(e,ie)),we(e,r),n===1&&R===0&&!(t.mode&1)&&(an=G()+500,pl&&St()))}function we(e,t){var n=e.callbackNode;Lf(e,t);var r=Vr(e,e===re?ie:0);if(r===0)n!==null&&tu(n),e.callbackNode=null,e.callbackPriority=0;else if(t=r&-r,e.callbackPriority!==t){if(n!=null&&tu(n),t===1)e.tag===0?zd(Yu.bind(null,e)):sa(Yu.bind(null,e)),Nd(function(){!(R&6)&&St()}),n=null;else{switch(Ms(r)){case 1:n=no;break;case 4:n=Fs;break;case 16:n=Ur;break;case 536870912:n=Rs;break;default:n=Ur}n=uc(n,ec.bind(null,e))}e.callbackPriority=t,e.callbackNode=n}}function ec(e,t){if(Fr=-1,Rr=0,R&6)throw Error(x(327));var n=e.callbackNode;if(tn()&&e.callbackNode!==n)return null;var r=Vr(e,e===re?ie:0);if(r===0)return null;if(r&30||r&e.expiredLanes||t)t=il(e,r);else{t=r;var l=R;R|=2;var i=nc();(re!==e||ie!==t)&&(We=null,an=G()+500,Tt(e,t));do try{Jd();break}catch(u){tc(e,u)}while(1);yo(),nl.current=i,R=l,q!==null?t=0:(re=null,ie=0,t=b)}if(t!==0){if(t===2&&(l=hi(e),l!==0&&(r=l,t=Qi(e,l))),t===1)throw n=er,Tt(e,0),it(e,r),we(e,G()),n;if(t===6)it(e,r);else{if(l=e.current.alternate,!(r&30)&&!Zd(l)&&(t=il(e,r),t===2&&(i=hi(e),i!==0&&(r=i,t=Qi(e,i))),t===1))throw n=er,Tt(e,0),it(e,r),we(e,G()),n;switch(e.finishedWork=l,e.finishedLanes=r,t){case 0:case 1:throw Error(x(345));case 2:Ct(e,he,We);break;case 
3:if(it(e,r),(r&130023424)===r&&(t=zo+500-G(),10<t)){if(Vr(e,0)!==0)break;if(l=e.suspendedLanes,(l&r)!==r){de(),e.pingedLanes|=e.suspendedLanes&l;break}e.timeoutHandle=Ei(Ct.bind(null,e,he,We),t);break}Ct(e,he,We);break;case 4:if(it(e,r),(r&4194240)===r)break;for(t=e.eventTimes,l=-1;0<r;){var o=31-Me(r);i=1<<o,o=t[o],o>l&&(l=o),r&=~i}if(r=l,r=G()-r,r=(120>r?120:480>r?480:1080>r?1080:1920>r?1920:3e3>r?3e3:4320>r?4320:1960*Gd(r/1960))-r,10<r){e.timeoutHandle=Ei(Ct.bind(null,e,he,We),r);break}Ct(e,he,We);break;case 5:Ct(e,he,We);break;default:throw Error(x(329))}}}return we(e,G()),e.callbackNode===n?ec.bind(null,e):null}function Qi(e,t){var n=An;return e.current.memoizedState.isDehydrated&&(Tt(e,t).flags|=256),e=il(e,t),e!==2&&(t=he,he=n,t!==null&&Hi(t)),e}function Hi(e){he===null?he=e:he.push.apply(he,e)}function Zd(e){for(var t=e;;){if(t.flags&16384){var n=t.updateQueue;if(n!==null&&(n=n.stores,n!==null))for(var r=0;r<n.length;r++){var l=n[r],i=l.getSnapshot;l=l.value;try{if(!$e(i(),l))return!1}catch{return!1}}}if(n=t.child,t.subtreeFlags&16384&&n!==null)n.return=t,t=n;else{if(t===e)break;for(;t.sibling===null;){if(t.return===null||t.return===e)return!0;t=t.return}t.sibling.return=t.return,t=t.sibling}}return!0}function it(e,t){for(t&=~Po,t&=~vl,e.suspendedLanes|=t,e.pingedLanes&=~t,e=e.expirationTimes;0<t;){var n=31-Me(t),r=1<<n;e[n]=-1,t&=~r}}function Yu(e){if(R&6)throw Error(x(327));tn();var t=Vr(e,0);if(!(t&1))return we(e,G()),null;var n=il(e,t);if(e.tag!==0&&n===2){var r=hi(e);r!==0&&(t=r,n=Qi(e,r))}if(n===1)throw n=er,Tt(e,0),it(e,t),we(e,G()),n;if(n===6)throw Error(x(345));return e.finishedWork=e.current.alternate,e.finishedLanes=t,Ct(e,he,We),we(e,G()),null}function Lo(e,t){var n=R;R|=1;try{return e(t)}finally{R=n,R===0&&(an=G()+500,pl&&St())}}function Ft(e){ut!==null&&ut.tag===0&&!(R&6)&&tn();var t=R;R|=1;var n=Oe.transition,r=A;try{if(Oe.transition=null,A=1,e)return e()}finally{A=r,Oe.transition=n,R=t,!(R&6)&&St()}}function Io(){Se=Gt.current,U(Gt)}function Tt(e,t){e.finishedWork=null,e.finishedLanes=0;var n=e.timeoutHandle;if(n!==-1&&(e.timeoutHandle=-1,_d(n)),q!==null)for(n=q.return;n!==null;){var r=n;switch(po(r),r.tag){case 1:r=r.type.childContextTypes,r!=null&&Kr();break;case 3:un(),U(ve),U(ce),ko();break;case 5:xo(r);break;case 4:un();break;case 13:U(B);break;case 19:U(B);break;case 10:vo(r.type._context);break;case 22:case 23:Io()}n=n.return}if(re=e,q=e=ht(e.current,null),ie=Se=t,b=0,er=null,Po=vl=It=0,he=An=null,_t!==null){for(t=0;t<_t.length;t++)if(n=_t[t],r=n.interleaved,r!==null){n.interleaved=null;var l=r.next,i=n.pending;if(i!==null){var o=i.next;i.next=l,r.next=o}n.pending=r}_t=null}return e}function tc(e,t){do{var n=q;try{if(yo(),zr.current=tl,el){for(var r=Q.memoizedState;r!==null;){var l=r.queue;l!==null&&(l.pending=null),r=r.next}el=!1}if(Lt=0,ne=J=Q=null,Fn=!1,qn=0,Oo.current=null,n===null||n.return===null){b=1,er=t,q=null;break}e:{var i=e,o=n.return,u=n,s=t;if(t=ie,u.flags|=32768,s!==null&&typeof s=="object"&&typeof s.then=="function"){var c=s,h=u,f=h.tag;if(!(h.mode&1)&&(f===0||f===11||f===15)){var v=h.alternate;v?(h.updateQueue=v.updateQueue,h.memoizedState=v.memoizedState,h.lanes=v.lanes):(h.updateQueue=null,h.memoizedState=null)}var g=Fu(o);if(g!==null){g.flags&=-257,Ru(g,o,u,i,t),g.mode&1&&Iu(i,c,t),t=g,s=c;var w=t.updateQueue;if(w===null){var k=new Set;k.add(s),t.updateQueue=k}else w.add(s);break e}else{if(!(t&1)){Iu(i,c,t),Fo();break e}s=Error(x(426))}}else if(V&&u.mode&1){var 
M=Fu(o);if(M!==null){!(M.flags&65536)&&(M.flags|=256),Ru(M,o,u,i,t),mo(sn(s,u));break e}}i=s=sn(s,u),b!==4&&(b=2),An===null?An=[i]:An.push(i),i=o;do{switch(i.tag){case 3:i.flags|=65536,t&=-t,i.lanes|=t;var p=Da(i,s,t);_u(i,p);break e;case 1:u=s;var d=i.type,y=i.stateNode;if(!(i.flags&128)&&(typeof d.getDerivedStateFromError=="function"||y!==null&&typeof y.componentDidCatch=="function"&&(pt===null||!pt.has(y)))){i.flags|=65536,t&=-t,i.lanes|=t;var S=$a(i,u,t);_u(i,S);break e}}i=i.return}while(i!==null)}lc(n)}catch(C){t=C,q===n&&n!==null&&(q=n=n.return);continue}break}while(1)}function nc(){var e=nl.current;return nl.current=tl,e===null?tl:e}function Fo(){(b===0||b===3||b===2)&&(b=4),re===null||!(It&268435455)&&!(vl&268435455)||it(re,ie)}function il(e,t){var n=R;R|=2;var r=nc();(re!==e||ie!==t)&&(We=null,Tt(e,t));do try{qd();break}catch(l){tc(e,l)}while(1);if(yo(),R=n,nl.current=r,q!==null)throw Error(x(261));return re=null,ie=0,b}function qd(){for(;q!==null;)rc(q)}function Jd(){for(;q!==null&&!Ef();)rc(q)}function rc(e){var t=oc(e.alternate,e,Se);e.memoizedProps=e.pendingProps,t===null?lc(e):q=t,Oo.current=null}function lc(e){var t=e;do{var n=t.alternate;if(e=t.return,t.flags&32768){if(n=Wd(n,t),n!==null){n.flags&=32767,q=n;return}if(e!==null)e.flags|=32768,e.subtreeFlags=0,e.deletions=null;else{b=6,q=null;return}}else if(n=Hd(n,t,Se),n!==null){q=n;return}if(t=t.sibling,t!==null){q=t;return}q=t=e}while(t!==null);b===0&&(b=5)}function Ct(e,t,n){var r=A,l=Oe.transition;try{Oe.transition=null,A=1,bd(e,t,n,r)}finally{Oe.transition=l,A=r}return null}function bd(e,t,n,r){do tn();while(ut!==null);if(R&6)throw Error(x(327));n=e.finishedWork;var l=e.finishedLanes;if(n===null)return null;if(e.finishedWork=null,e.finishedLanes=0,n===e.current)throw Error(x(177));e.callbackNode=null,e.callbackPriority=0;var i=n.lanes|n.childLanes;if(If(e,i),e===re&&(q=re=null,ie=0),!(n.subtreeFlags&2064)&&!(n.flags&2064)||Er||(Er=!0,uc(Ur,function(){return tn(),null})),i=(n.flags&15990)!==0,n.subtreeFlags&15990||i){i=Oe.transition,Oe.transition=null;var o=A;A=1;var u=R;R|=4,Oo.current=null,Yd(e,n),Ja(n,e),wd(xi),Br=!!Si,xi=Si=null,e.current=n,Xd(n),Cf(),R=u,A=o,Oe.transition=i}else e.current=n;if(Er&&(Er=!1,ut=e,ll=l),i=e.pendingLanes,i===0&&(pt=null),Nf(n.stateNode),we(e,G()),t!==null)for(r=e.onRecoverableError,n=0;n<t.length;n++)l=t[n],r(l.value,{componentStack:l.stack,digest:l.digest});if(rl)throw rl=!1,e=Vi,Vi=null,e;return ll&1&&e.tag!==0&&tn(),i=e.pendingLanes,i&1?e===Bi?Mn++:(Mn=0,Bi=e):Mn=0,St(),null}function tn(){if(ut!==null){var e=Ms(ll),t=Oe.transition,n=A;try{if(Oe.transition=null,A=16>e?16:e,ut===null)var r=!1;else{if(e=ut,ut=null,ll=0,R&6)throw Error(x(331));var l=R;for(R|=4,E=e.current;E!==null;){var i=E,o=i.child;if(E.flags&16){var u=i.deletions;if(u!==null){for(var s=0;s<u.length;s++){var c=u[s];for(E=c;E!==null;){var h=E;switch(h.tag){case 0:case 11:case 15:Rn(8,h,i)}var f=h.child;if(f!==null)f.return=h,E=f;else for(;E!==null;){h=E;var v=h.sibling,g=h.return;if(Ga(h),h===c){E=null;break}if(v!==null){v.return=g,E=v;break}E=g}}}var w=i.alternate;if(w!==null){var k=w.child;if(k!==null){w.child=null;do{var M=k.sibling;k.sibling=null,k=M}while(k!==null)}}E=i}}if(i.subtreeFlags&2064&&o!==null)o.return=i,E=o;else e:for(;E!==null;){if(i=E,i.flags&2048)switch(i.tag){case 0:case 11:case 15:Rn(9,i,i.return)}var p=i.sibling;if(p!==null){p.return=i.return,E=p;break e}E=i.return}}var d=e.current;for(E=d;E!==null;){o=E;var y=o.child;if(o.subtreeFlags&2064&&y!==null)y.return=o,E=y;else 
e:for(o=d;E!==null;){if(u=E,u.flags&2048)try{switch(u.tag){case 0:case 11:case 15:yl(9,u)}}catch(C){K(u,u.return,C)}if(u===o){E=null;break e}var S=u.sibling;if(S!==null){S.return=u.return,E=S;break e}E=u.return}}if(R=l,St(),Qe&&typeof Qe.onPostCommitFiberRoot=="function")try{Qe.onPostCommitFiberRoot(sl,e)}catch{}r=!0}return r}finally{A=n,Oe.transition=t}}return!1}function Xu(e,t,n){t=sn(n,t),t=Da(e,t,1),e=dt(e,t,1),t=de(),e!==null&&(nr(e,1,t),we(e,t))}function K(e,t,n){if(e.tag===3)Xu(e,e,n);else for(;t!==null;){if(t.tag===3){Xu(t,e,n);break}else if(t.tag===1){var r=t.stateNode;if(typeof t.type.getDerivedStateFromError=="function"||typeof r.componentDidCatch=="function"&&(pt===null||!pt.has(r))){e=sn(n,e),e=$a(t,e,1),t=dt(t,e,1),e=de(),t!==null&&(nr(t,1,e),we(t,e));break}}t=t.return}}function ep(e,t,n){var r=e.pingCache;r!==null&&r.delete(t),t=de(),e.pingedLanes|=e.suspendedLanes&n,re===e&&(ie&n)===n&&(b===4||b===3&&(ie&130023424)===ie&&500>G()-zo?Tt(e,0):Po|=n),we(e,t)}function ic(e,t){t===0&&(e.mode&1?(t=pr,pr<<=1,!(pr&130023424)&&(pr=4194304)):t=1);var n=de();e=Je(e,t),e!==null&&(nr(e,t,n),we(e,n))}function tp(e){var t=e.memoizedState,n=0;t!==null&&(n=t.retryLane),ic(e,n)}function np(e,t){var n=0;switch(e.tag){case 13:var r=e.stateNode,l=e.memoizedState;l!==null&&(n=l.retryLane);break;case 19:r=e.stateNode;break;default:throw Error(x(314))}r!==null&&r.delete(t),ic(e,n)}var oc;oc=function(e,t,n){if(e!==null)if(e.memoizedProps!==t.pendingProps||ve.current)ye=!0;else{if(!(e.lanes&n)&&!(t.flags&128))return ye=!1,Qd(e,t,n);ye=!!(e.flags&131072)}else ye=!1,V&&t.flags&1048576&&aa(t,Gr,t.index);switch(t.lanes=0,t.tag){case 2:var r=t.type;Ir(e,t),e=t.pendingProps;var l=rn(t,ce.current);en(t,n),l=Co(null,t,r,e,l,n);var i=jo();return t.flags|=1,typeof l=="object"&&l!==null&&typeof l.render=="function"&&l.$$typeof===void 0?(t.tag=1,t.memoizedState=null,t.updateQueue=null,ge(r)?(i=!0,Yr(t)):i=!1,t.memoizedState=l.state!==null&&l.state!==void 0?l.state:null,wo(t),l.updater=ml,t.stateNode=l,l._reactInternals=t,Pi(t,r,e,n),t=Ii(null,t,r,!0,i,n)):(t.tag=0,V&&i&&fo(t),fe(null,t,l,n),t=t.child),t;case 16:r=t.elementType;e:{switch(Ir(e,t),e=t.pendingProps,l=r._init,r=l(r._payload),t.type=r,l=t.tag=lp(r),e=Fe(r,e),l){case 0:t=Li(null,t,r,e,n);break e;case 1:t=Du(null,t,r,e,n);break e;case 11:t=Au(null,t,r,e,n);break e;case 14:t=Mu(null,t,r,Fe(r.type,e),n);break e}throw Error(x(306,r,""))}return t;case 0:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:Fe(r,l),Li(e,t,r,l,n);case 1:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:Fe(r,l),Du(e,t,r,l,n);case 3:e:{if(Qa(t),e===null)throw Error(x(387));r=t.pendingProps,i=t.memoizedState,l=i.element,pa(e,t),Jr(t,r,null,n);var o=t.memoizedState;if(r=o.element,i.isDehydrated)if(i={element:r,isDehydrated:!1,cache:o.cache,pendingSuspenseBoundaries:o.pendingSuspenseBoundaries,transitions:o.transitions},t.updateQueue.baseState=i,t.memoizedState=i,t.flags&256){l=sn(Error(x(423)),t),t=$u(e,t,r,n,l);break e}else if(r!==l){l=sn(Error(x(424)),t),t=$u(e,t,r,n,l);break e}else for(xe=ft(t.stateNode.containerInfo.firstChild),ke=t,V=!0,Ae=null,n=va(t,null,r,n),t.child=n;n;)n.flags=n.flags&-3|4096,n=n.sibling;else{if(ln(),r===l){t=be(e,t,n);break e}fe(e,t,r,n)}t=t.child}return t;case 5:return ga(t),e===null&&Ni(t),r=t.type,l=t.pendingProps,i=e!==null?e.memoizedProps:null,o=l.children,ki(r,l)?o=null:i!==null&&ki(r,i)&&(t.flags|=32),Ba(e,t),fe(e,t,o,n),t.child;case 6:return e===null&&Ni(t),null;case 13:return Ha(e,t,n);case 4:return 
So(t,t.stateNode.containerInfo),r=t.pendingProps,e===null?t.child=on(t,null,r,n):fe(e,t,r,n),t.child;case 11:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:Fe(r,l),Au(e,t,r,l,n);case 7:return fe(e,t,t.pendingProps,n),t.child;case 8:return fe(e,t,t.pendingProps.children,n),t.child;case 12:return fe(e,t,t.pendingProps.children,n),t.child;case 10:e:{if(r=t.type._context,l=t.pendingProps,i=t.memoizedProps,o=l.value,D(Zr,r._currentValue),r._currentValue=o,i!==null)if($e(i.value,o)){if(i.children===l.children&&!ve.current){t=be(e,t,n);break e}}else for(i=t.child,i!==null&&(i.return=t);i!==null;){var u=i.dependencies;if(u!==null){o=i.child;for(var s=u.firstContext;s!==null;){if(s.context===r){if(i.tag===1){s=Ge(-1,n&-n),s.tag=2;var c=i.updateQueue;if(c!==null){c=c.shared;var h=c.pending;h===null?s.next=s:(s.next=h.next,h.next=s),c.pending=s}}i.lanes|=n,s=i.alternate,s!==null&&(s.lanes|=n),Ti(i.return,n,t),u.lanes|=n;break}s=s.next}}else if(i.tag===10)o=i.type===t.type?null:i.child;else if(i.tag===18){if(o=i.return,o===null)throw Error(x(341));o.lanes|=n,u=o.alternate,u!==null&&(u.lanes|=n),Ti(o,n,t),o=i.sibling}else o=i.child;if(o!==null)o.return=i;else for(o=i;o!==null;){if(o===t){o=null;break}if(i=o.sibling,i!==null){i.return=o.return,o=i;break}o=o.return}i=o}fe(e,t,l.children,n),t=t.child}return t;case 9:return l=t.type,r=t.pendingProps.children,en(t,n),l=Pe(l),r=r(l),t.flags|=1,fe(e,t,r,n),t.child;case 14:return r=t.type,l=Fe(r,t.pendingProps),l=Fe(r.type,l),Mu(e,t,r,l,n);case 15:return Ua(e,t,t.type,t.pendingProps,n);case 17:return r=t.type,l=t.pendingProps,l=t.elementType===r?l:Fe(r,l),Ir(e,t),t.tag=1,ge(r)?(e=!0,Yr(t)):e=!1,en(t,n),ha(t,r,l),Pi(t,r,l,n),Ii(null,t,r,!0,e,n);case 19:return Wa(e,t,n);case 22:return Va(e,t,n)}throw Error(x(156,t.tag))};function uc(e,t){return Is(e,t)}function rp(e,t,n,r){this.tag=e,this.key=n,this.sibling=this.child=this.return=this.stateNode=this.type=this.elementType=null,this.index=0,this.ref=null,this.pendingProps=t,this.dependencies=this.memoizedState=this.updateQueue=this.memoizedProps=null,this.mode=r,this.subtreeFlags=this.flags=0,this.deletions=null,this.childLanes=this.lanes=0,this.alternate=null}function Te(e,t,n,r){return new rp(e,t,n,r)}function Ro(e){return e=e.prototype,!(!e||!e.isReactComponent)}function lp(e){if(typeof e=="function")return Ro(e)?1:0;if(e!=null){if(e=e.$$typeof,e===bi)return 11;if(e===eo)return 14}return 2}function ht(e,t){var n=e.alternate;return n===null?(n=Te(e.tag,t,e.key,e.mode),n.elementType=e.elementType,n.type=e.type,n.stateNode=e.stateNode,n.alternate=e,e.alternate=n):(n.pendingProps=t,n.type=e.type,n.flags=0,n.subtreeFlags=0,n.deletions=null),n.flags=e.flags&14680064,n.childLanes=e.childLanes,n.lanes=e.lanes,n.child=e.child,n.memoizedProps=e.memoizedProps,n.memoizedState=e.memoizedState,n.updateQueue=e.updateQueue,t=e.dependencies,n.dependencies=t===null?null:{lanes:t.lanes,firstContext:t.firstContext},n.sibling=e.sibling,n.index=e.index,n.ref=e.ref,n}function Ar(e,t,n,r,l,i){var o=2;if(r=e,typeof e=="function")Ro(e)&&(o=1);else if(typeof e=="string")o=5;else e:switch(e){case $t:return Ot(n.children,l,i,t);case Ji:o=8,l|=8;break;case ei:return e=Te(12,n,t,l|2),e.elementType=ei,e.lanes=i,e;case ti:return e=Te(13,n,t,l),e.elementType=ti,e.lanes=i,e;case ni:return e=Te(19,n,t,l),e.elementType=ni,e.lanes=i,e;case ys:return gl(n,l,i,t);default:if(typeof e=="object"&&e!==null)switch(e.$$typeof){case ms:o=10;break e;case hs:o=9;break e;case bi:o=11;break e;case eo:o=14;break e;case nt:o=16,r=null;break e}throw 
Error(x(130,e==null?e:typeof e,""))}return t=Te(o,n,t,l),t.elementType=e,t.type=r,t.lanes=i,t}function Ot(e,t,n,r){return e=Te(7,e,r,t),e.lanes=n,e}function gl(e,t,n,r){return e=Te(22,e,r,t),e.elementType=ys,e.lanes=n,e.stateNode={isHidden:!1},e}function Zl(e,t,n){return e=Te(6,e,null,t),e.lanes=n,e}function ql(e,t,n){return t=Te(4,e.children!==null?e.children:[],e.key,t),t.lanes=n,t.stateNode={containerInfo:e.containerInfo,pendingChildren:null,implementation:e.implementation},t}function ip(e,t,n,r,l){this.tag=t,this.containerInfo=e,this.finishedWork=this.pingCache=this.current=this.pendingChildren=null,this.timeoutHandle=-1,this.callbackNode=this.pendingContext=this.context=null,this.callbackPriority=0,this.eventTimes=zl(0),this.expirationTimes=zl(-1),this.entangledLanes=this.finishedLanes=this.mutableReadLanes=this.expiredLanes=this.pingedLanes=this.suspendedLanes=this.pendingLanes=0,this.entanglements=zl(0),this.identifierPrefix=r,this.onRecoverableError=l,this.mutableSourceEagerHydrationData=null}function Ao(e,t,n,r,l,i,o,u,s){return e=new ip(e,t,n,u,s),t===1?(t=1,i===!0&&(t|=8)):t=0,i=Te(3,null,null,t),e.current=i,i.stateNode=e,i.memoizedState={element:r,isDehydrated:n,cache:null,transitions:null,pendingSuspenseBoundaries:null},wo(i),e}function op(e,t,n){var r=3<arguments.length&&arguments[3]!==void 0?arguments[3]:null;return{$$typeof:Dt,key:r==null?null:""+r,children:e,containerInfo:t,implementation:n}}function sc(e){if(!e)return vt;e=e._reactInternals;e:{if(At(e)!==e||e.tag!==1)throw Error(x(170));var t=e;do{switch(t.tag){case 3:t=t.stateNode.context;break e;case 1:if(ge(t.type)){t=t.stateNode.__reactInternalMemoizedMergedChildContext;break e}}t=t.return}while(t!==null);throw Error(x(171))}if(e.tag===1){var n=e.type;if(ge(n))return ua(e,n,t)}return t}function ac(e,t,n,r,l,i,o,u,s){return e=Ao(n,r,!0,e,l,i,o,u,s),e.context=sc(null),n=e.current,r=de(),l=mt(n),i=Ge(r,l),i.callback=t??null,dt(n,i,l),e.current.lanes=l,nr(e,l,r),we(e,r),e}function wl(e,t,n,r){var l=t.current,i=de(),o=mt(l);return n=sc(n),t.context===null?t.context=n:t.pendingContext=n,t=Ge(i,o),t.payload={element:e},r=r===void 0?null:r,r!==null&&(t.callback=r),e=dt(l,t,o),e!==null&&(De(e,l,o,i),Pr(e,l,o)),o}function ol(e){if(e=e.current,!e.child)return null;switch(e.child.tag){case 5:return e.child.stateNode;default:return e.child.stateNode}}function Gu(e,t){if(e=e.memoizedState,e!==null&&e.dehydrated!==null){var n=e.retryLane;e.retryLane=n!==0&&n<t?n:t}}function Mo(e,t){Gu(e,t),(e=e.alternate)&&Gu(e,t)}function up(){return null}var cc=typeof reportError=="function"?reportError:function(e){console.error(e)};function Do(e){this._internalRoot=e}Sl.prototype.render=Do.prototype.render=function(e){var t=this._internalRoot;if(t===null)throw Error(x(409));wl(e,t,null,null)};Sl.prototype.unmount=Do.prototype.unmount=function(){var e=this._internalRoot;if(e!==null){this._internalRoot=null;var t=e.containerInfo;Ft(function(){wl(null,e,null,null)}),t[qe]=null}};function Sl(e){this._internalRoot=e}Sl.prototype.unstable_scheduleHydration=function(e){if(e){var t=Us();e={blockedOn:null,target:e,priority:t};for(var n=0;n<lt.length&&t!==0&&t<lt[n].priority;n++);lt.splice(n,0,e),n===0&&Bs(e)}};function $o(e){return!(!e||e.nodeType!==1&&e.nodeType!==9&&e.nodeType!==11)}function xl(e){return!(!e||e.nodeType!==1&&e.nodeType!==9&&e.nodeType!==11&&(e.nodeType!==8||e.nodeValue!==" react-mount-point-unstable "))}function Zu(){}function sp(e,t,n,r,l){if(l){if(typeof r=="function"){var i=r;r=function(){var c=ol(o);i.call(c)}}var 
o=ac(t,r,e,0,null,!1,!1,"",Zu);return e._reactRootContainer=o,e[qe]=o.current,Kn(e.nodeType===8?e.parentNode:e),Ft(),o}for(;l=e.lastChild;)e.removeChild(l);if(typeof r=="function"){var u=r;r=function(){var c=ol(s);u.call(c)}}var s=Ao(e,0,!1,null,null,!1,!1,"",Zu);return e._reactRootContainer=s,e[qe]=s.current,Kn(e.nodeType===8?e.parentNode:e),Ft(function(){wl(t,s,n,r)}),s}function kl(e,t,n,r,l){var i=n._reactRootContainer;if(i){var o=i;if(typeof l=="function"){var u=l;l=function(){var s=ol(o);u.call(s)}}wl(t,o,e,l)}else o=sp(n,t,e,l,r);return ol(o)}Ds=function(e){switch(e.tag){case 3:var t=e.stateNode;if(t.current.memoizedState.isDehydrated){var n=Nn(t.pendingLanes);n!==0&&(ro(t,n|1),we(t,G()),!(R&6)&&(an=G()+500,St()))}break;case 13:Ft(function(){var r=Je(e,1);if(r!==null){var l=de();De(r,e,1,l)}}),Mo(e,1)}};lo=function(e){if(e.tag===13){var t=Je(e,134217728);if(t!==null){var n=de();De(t,e,134217728,n)}Mo(e,134217728)}};$s=function(e){if(e.tag===13){var t=mt(e),n=Je(e,t);if(n!==null){var r=de();De(n,e,t,r)}Mo(e,t)}};Us=function(){return A};Vs=function(e,t){var n=A;try{return A=e,t()}finally{A=n}};di=function(e,t,n){switch(t){case"input":if(ii(e,n),t=n.name,n.type==="radio"&&t!=null){for(n=e;n.parentNode;)n=n.parentNode;for(n=n.querySelectorAll("input[name="+JSON.stringify(""+t)+'][type="radio"]'),t=0;t<n.length;t++){var r=n[t];if(r!==e&&r.form===e.form){var l=dl(r);if(!l)throw Error(x(90));gs(r),ii(r,l)}}}break;case"textarea":Ss(e,n);break;case"select":t=n.value,t!=null&&Zt(e,!!n.multiple,t,!1)}};Ns=Lo;Ts=Ft;var ap={usingClientEntryPoint:!1,Events:[lr,Qt,dl,js,_s,Lo]},Cn={findFiberByHostInstance:jt,bundleType:0,version:"18.2.0",rendererPackageName:"react-dom"},cp={bundleType:Cn.bundleType,version:Cn.version,rendererPackageName:Cn.rendererPackageName,rendererConfig:Cn.rendererConfig,overrideHookState:null,overrideHookStateDeletePath:null,overrideHookStateRenamePath:null,overrideProps:null,overridePropsDeletePath:null,overridePropsRenamePath:null,setErrorHandler:null,setSuspenseHandler:null,scheduleUpdate:null,currentDispatcherRef:et.ReactCurrentDispatcher,findHostInstanceByFiber:function(e){return e=zs(e),e===null?null:e.stateNode},findFiberByHostInstance:Cn.findFiberByHostInstance||up,findHostInstancesForRefresh:null,scheduleRefresh:null,scheduleRoot:null,setRefreshHandler:null,getCurrentFiber:null,reconcilerVersion:"18.2.0-next-9e3b772b8-20220608"};if(typeof __REACT_DEVTOOLS_GLOBAL_HOOK__<"u"){var Cr=__REACT_DEVTOOLS_GLOBAL_HOOK__;if(!Cr.isDisabled&&Cr.supportsFiber)try{sl=Cr.inject(cp),Qe=Cr}catch{}}Ce.__SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED=ap;Ce.createPortal=function(e,t){var n=2<arguments.length&&arguments[2]!==void 0?arguments[2]:null;if(!$o(t))throw Error(x(200));return op(e,t,null,n)};Ce.createRoot=function(e,t){if(!$o(e))throw Error(x(299));var n=!1,r="",l=cc;return t!=null&&(t.unstable_strictMode===!0&&(n=!0),t.identifierPrefix!==void 0&&(r=t.identifierPrefix),t.onRecoverableError!==void 0&&(l=t.onRecoverableError)),t=Ao(e,1,!1,null,null,n,!1,r,l),e[qe]=t.current,Kn(e.nodeType===8?e.parentNode:e),new Do(t)};Ce.findDOMNode=function(e){if(e==null)return null;if(e.nodeType===1)return e;var t=e._reactInternals;if(t===void 0)throw typeof e.render=="function"?Error(x(188)):(e=Object.keys(e).join(","),Error(x(268,e)));return e=zs(t),e=e===null?null:e.stateNode,e};Ce.flushSync=function(e){return Ft(e)};Ce.hydrate=function(e,t,n){if(!xl(t))throw Error(x(200));return kl(null,e,t,!0,n)};Ce.hydrateRoot=function(e,t,n){if(!$o(e))throw Error(x(405));var 
r=n!=null&&n.hydratedSources||null,l=!1,i="",o=cc;if(n!=null&&(n.unstable_strictMode===!0&&(l=!0),n.identifierPrefix!==void 0&&(i=n.identifierPrefix),n.onRecoverableError!==void 0&&(o=n.onRecoverableError)),t=ac(t,null,e,1,n??null,l,!1,i,o),e[qe]=t.current,Kn(e),r)for(e=0;e<r.length;e++)n=r[e],l=n._getVersion,l=l(n._source),t.mutableSourceEagerHydrationData==null?t.mutableSourceEagerHydrationData=[n,l]:t.mutableSourceEagerHydrationData.push(n,l);return new Sl(t)};Ce.render=function(e,t,n){if(!xl(t))throw Error(x(200));return kl(null,e,t,!1,n)};Ce.unmountComponentAtNode=function(e){if(!xl(e))throw Error(x(40));return e._reactRootContainer?(Ft(function(){kl(null,null,e,!1,function(){e._reactRootContainer=null,e[qe]=null})}),!0):!1};Ce.unstable_batchedUpdates=Lo;Ce.unstable_renderSubtreeIntoContainer=function(e,t,n,r){if(!xl(n))throw Error(x(200));if(e==null||e._reactInternals===void 0)throw Error(x(38));return kl(e,t,n,!1,r)};Ce.version="18.2.0-next-9e3b772b8-20220608";function fc(){if(!(typeof __REACT_DEVTOOLS_GLOBAL_HOOK__>"u"||typeof __REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE!="function"))try{__REACT_DEVTOOLS_GLOBAL_HOOK__.checkDCE(fc)}catch(e){console.error(e)}}fc(),as.exports=Ce;var fp=as.exports,dc,qu=fp;dc=qu.createRoot,qu.hydrateRoot;var dp=(typeof process<"u","https://huggingface.co");async function pp(e,t){var r;const n=new mp(e.url,e.status,e.headers.get("X-Request-Id")??(t==null?void 0:t.requestId));if(n.message=`Api error with status ${n.statusCode}.${t!=null&&t.message?` ${t.message}.`:""} Request ID: ${n.requestId}, url: ${n.url}`,(r=e.headers.get("Content-Type"))!=null&&r.startsWith("application/json")){const l=await e.json();n.message=l.error||l.message||n.message,n.data=l}else n.data={message:await e.text()};throw n}var mp=class extends Error{constructor(t,n,r,l){super(l);yn(this,"statusCode");yn(this,"url");yn(this,"requestId");yn(this,"data");this.statusCode=n,this.requestId=r,this.url=t}};function hp(e){if(!(!e||e.accessToken===void 0||e.accessToken===null)&&!e.accessToken.startsWith("hf_"))throw new TypeError("Your access token must start with 'hf_'")}function yp(e){const t=/<(https?:[/][/][^>]+)>;\s+rel="([^"]+)"/g;return Object.fromEntries([...e.matchAll(t)].map(([,n,r])=>[r,n]))}var vp=["pipeline_tag","private","gated","downloads","likes"];async function*gp(e){var r,l;hp(e==null?void 0:e.credentials);const t=new URLSearchParams([...Object.entries({limit:"500",...(r=e==null?void 0:e.search)!=null&&r.owner?{author:e.search.owner}:void 0,...(l=e==null?void 0:e.search)!=null&&l.task?{pipeline_tag:e.search.task}:void 0}),...vp.map(i=>["expand",i])]).toString();let n=`${(e==null?void 0:e.hubUrl)||dp}/api/models?${t}`;for(;n;){const i=await fetch(n,{headers:{accept:"application/json",...e!=null&&e.credentials?{Authorization:`Bearer ${e.credentials.accessToken}`}:void 0}});if(!i.ok)throw pp(i);const o=await i.json();for(const s of o)yield{id:s._id,name:s.id,private:s.private,task:s.pipeline_tag,downloads:s.downloads,gated:s.gated,likes:s.likes,updatedAt:new Date(s.lastModified)};const u=i.headers.get("Link");n=u?yp(u).next:void 0}}var wp=Object.defineProperty,Sp=(e,t)=>{for(var n in 
t)wp(e,n,{get:t[n],enumerable:!0})},xp={};Sp(xp,{audioClassification:()=>mc,automaticSpeechRecognition:()=>hc,conversational:()=>xc,documentQuestionAnswering:()=>Fc,featureExtraction:()=>kc,fillMask:()=>Ec,imageClassification:()=>yc,imageSegmentation:()=>vc,imageToText:()=>gc,objectDetection:()=>wc,questionAnswering:()=>Cc,request:()=>W,sentenceSimilarity:()=>jc,streamingRequest:()=>Uo,summarization:()=>_c,tableQuestionAnswering:()=>Nc,textClassification:()=>Tc,textGeneration:()=>Oc,textGenerationStream:()=>_p,textToImage:()=>Sc,tokenClassification:()=>Pc,translation:()=>zc,visualQuestionAnswering:()=>Rc,zeroShotClassification:()=>Lc});var kp="https://api-inference.huggingface.co/models/";function pc(e,t){const{model:n,accessToken:r,...l}=e,i={};r&&(i.Authorization=`Bearer ${r}`);const o="data"in e&&!!e.data;o?(t!=null&&t.wait_for_model&&(i["X-Wait-For-Model"]="true"),(t==null?void 0:t.use_cache)===!1&&(i["X-Use-Cache"]="false"),t!=null&&t.dont_load_model&&(i["X-Load-Model"]="0")):i["Content-Type"]="application/json";const u=/^http(s?):/.test(n)||n.startsWith("/")?n:`${kp}${n}`,s={headers:i,method:"POST",body:o?e.data:JSON.stringify({...l,options:t}),credentials:t!=null&&t.includeCredentials?"include":"same-origin"};return{url:u,info:s}}async function W(e,t){var i,o;const{url:n,info:r}=pc(e,t),l=await((t==null?void 0:t.fetch)??fetch)(n,r);if((t==null?void 0:t.retry_on_error)!==!1&&l.status===503&&!(t!=null&&t.wait_for_model))return W(e,{...t,wait_for_model:!0});if(!l.ok){if((i=l.headers.get("Content-Type"))!=null&&i.startsWith("application/json")){const u=await l.json();if(u.error)throw new Error(u.error)}throw new Error("An error occurred while fetching the blob")}return(o=l.headers.get("Content-Type"))!=null&&o.startsWith("application/json")?await l.json():await l.blob()}function Ep(e){let t,n,r,l=!1;return function(o){t===void 0?(t=o,n=0,r=-1):t=jp(t,o);const u=t.length;let s=0;for(;n<u;){l&&(t[n]===10&&(s=++n),l=!1);let c=-1;for(;n<u&&c===-1;++n)switch(t[n]){case 58:r===-1&&(r=n-s);break;case 13:l=!0;case 10:c=n;break}if(c===-1)break;e(t.subarray(s,c),r),s=n,r=-1}s===u?t=void 0:s!==0&&(t=t.subarray(s),n-=s)}}function Cp(e,t,n){let r=Ju();const l=new TextDecoder;return function(o,u){if(o.length===0)n==null||n(r),r=Ju();else if(u>0){const s=l.decode(o.subarray(0,u)),c=u+(o[u+1]===32?2:1),h=l.decode(o.subarray(c));switch(s){case"data":r.data=r.data?r.data+` -`+h:h;break;case"event":r.event=h;break;case"id":e(r.id=h);break;case"retry":const f=parseInt(h,10);isNaN(f)||t(r.retry=f);break}}}}function jp(e,t){const n=new Uint8Array(e.length+t.length);return n.set(e),n.set(t,e.length),n}function Ju(){return{data:"",event:"",id:"",retry:void 0}}async function*Uo(e,t){var c;const{url:n,info:r}=pc({...e,stream:!0},t),l=await((t==null?void 0:t.fetch)??fetch)(n,r);if((t==null?void 0:t.retry_on_error)!==!1&&l.status===503&&!(t!=null&&t.wait_for_model))return Uo(e,{...t,wait_for_model:!0});if(!l.ok){if((c=l.headers.get("Content-Type"))!=null&&c.startsWith("application/json")){const h=await l.json();if(h.error)throw new Error(h.error)}throw new Error(`Server response contains error: ${l.status}`)}if(l.headers.get("content-type")!=="text/event-stream")throw new Error("Server does not support event stream content type, it returned "+l.headers.get("content-type"));if(!l.body)return;const i=l.body.getReader();let o=[];const s=Ep(Cp(()=>{},()=>{},h=>{o.push(h)}));try{for(;;){const{done:h,value:f}=await i.read();if(h)return;s(f);for(const v of o)v.data.length>0&&(yield 
JSON.parse(v.data));o=[]}}finally{i.releaseLock()}}var Z=class extends TypeError{constructor(e){super(`Invalid inference output: ${e}. Use the 'request' method with the same parameters to do a custom call with no type checking.`),this.name="InferenceOutputError"}};async function mc(e,t){const n=await W(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof l.label=="string"&&typeof l.score=="number")))throw new Z("Expected Array<{label: string, score: number}>");return n}async function hc(e,t){const n=await W(e,t);if(!(typeof(n==null?void 0:n.text)=="string"))throw new Z("Expected {text: string}");return n}async function yc(e,t){const n=await W(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof l.label=="string"&&typeof l.score=="number")))throw new Z("Expected Array<{label: string, score: number}>");return n}async function vc(e,t){const n=await W(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof l.label=="string"&&typeof l.mask=="string"&&typeof l.score=="number")))throw new Z("Expected Array<{label: string, mask: string, score: number}>");return n}async function gc(e,t){var r;const n=(r=await W(e,t))==null?void 0:r[0];if(typeof(n==null?void 0:n.generated_text)!="string")throw new Z("Expected {generated_text: string}");return n}async function wc(e,t){const n=await W(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof l.label=="string"&&typeof l.score=="number"&&typeof l.box.xmin=="number"&&typeof l.box.ymin=="number"&&typeof l.box.xmax=="number"&&typeof l.box.ymax=="number")))throw new Z("Expected Array<{label:string; score:number; box:{xmin:number; ymin:number; xmax:number; ymax:number}}>");return n}async function Sc(e,t){const n=await W(e,t);if(!(n&&n instanceof Blob))throw new Z("Expected Blob");return n}async function xc(e,t){const n=await W(e,t);if(!(Array.isArray(n.conversation.generated_responses)&&n.conversation.generated_responses.every(l=>typeof l=="string")&&Array.isArray(n.conversation.past_user_inputs)&&n.conversation.past_user_inputs.every(l=>typeof l=="string")&&typeof n.generated_text=="string"&&Array.isArray(n.warnings)&&n.warnings.every(l=>typeof l=="string")))throw new Z("Expected {conversation: {generated_responses: string[], past_user_inputs: string[]}, generated_text: string, warnings: string[]}");return n}async function kc(e,t){const n=await W(e,t);let r=!0;if(Array.isArray(n)){for(const l of n)if(Array.isArray(l)){if(r=l.every(i=>typeof i=="number"),!r)break}else if(typeof l!="number"){r=!1;break}}else r=!1;if(!r)throw new Z("Expected Array<number[] | number>");return n}async function Ec(e,t){const n=await W(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof l.score=="number"&&typeof l.sequence=="string"&&typeof l.token=="number"&&typeof l.token_str=="string")))throw new Z("Expected Array<{score: number, sequence: string, token: number, token_str: string}>");return n}async function Cc(e,t){const n=await W(e,t);if(!(typeof n=="object"&&!!n&&typeof n.answer=="string"&&typeof n.end=="number"&&typeof n.score=="number"&&typeof n.start=="number"))throw new Z("Expected {answer: string, end: number, score: number, start: number}");return n}async function jc(e,t){const n=await W(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof l=="number")))throw new Z("Expected number[]");return n}async function _c(e,t){const n=await W(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof(l==null?void 0:l.summary_text)=="string")))throw new Z("Expected Array<{summary_text: string}>");return n==null?void 0:n[0]}async function Nc(e,t){const n=await W(e,t);if(!(typeof(n==null?void 0:n.aggregator)=="string"&&typeof 
n.answer=="string"&&Array.isArray(n.cells)&&n.cells.every(l=>typeof l=="string")&&Array.isArray(n.coordinates)&&n.coordinates.every(l=>Array.isArray(l)&&l.every(i=>typeof i=="number"))))throw new Z("Expected {aggregator: string, answer: string, cells: string[], coordinates: number[][]}");return n}async function Tc(e,t){var l;const n=(l=await W(e,t))==null?void 0:l[0];if(!(Array.isArray(n)&&n.every(i=>typeof(i==null?void 0:i.label)=="string"&&typeof i.score=="number")))throw new Z("Expected Array<{label: string, score: number}>");return n}async function Oc(e,t){const n=await W(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof(l==null?void 0:l.generated_text)=="string")))throw new Z("Expected Array<{generated_text: string}>");return n==null?void 0:n[0]}async function*_p(e,t){yield*Uo(e,t)}function Vo(e){return Array.isArray(e)?e:[e]}async function Pc(e,t){const n=Vo(await W(e,t));if(!(Array.isArray(n)&&n.every(l=>typeof l.end=="number"&&typeof l.entity_group=="string"&&typeof l.score=="number"&&typeof l.start=="number"&&typeof l.word=="string")))throw new Z("Expected Array<{end: number, entity_group: string, score: number, start: number, word: string}>");return n}async function zc(e,t){const n=await W(e,t);if(!(Array.isArray(n)&&n.every(l=>typeof(l==null?void 0:l.translation_text)=="string")))throw new Z("Expected type Array<{translation_text: string}>");return n==null?void 0:n[0]}async function Lc(e,t){const n=Vo(await W(e,t));if(!(Array.isArray(n)&&n.every(l=>Array.isArray(l.labels)&&l.labels.every(i=>typeof i=="string")&&Array.isArray(l.scores)&&l.scores.every(i=>typeof i=="number")&&typeof l.sequence=="string")))throw new Z("Expected Array<{labels: string[], scores: number[], sequence: string}>");return n}function Ic(e){if(globalThis.Buffer)return globalThis.Buffer.from(e).toString("base64");{const t=[];return e.forEach(n=>{t.push(String.fromCharCode(n))}),globalThis.btoa(t.join(""))}}async function Fc(e,t){var i;const n={...e,inputs:{question:e.inputs.question,image:Ic(new Uint8Array(await e.inputs.image.arrayBuffer()))}},r=(i=Vo(await W(n,t)))==null?void 0:i[0];if(!(typeof(r==null?void 0:r.answer)=="string"&&(typeof r.end=="number"||typeof r.end>"u")&&(typeof r.score=="number"||typeof r.score>"u")&&(typeof r.start=="number"||typeof r.start>"u")))throw new Z("Expected Array<{answer: string, end?: number, score?: number, start?: number}>");return r}async function Rc(e,t){var i;const n={...e,inputs:{question:e.inputs.question,image:Ic(new Uint8Array(await e.inputs.image.arrayBuffer()))}},r=(i=await W(n,t))==null?void 0:i[0];if(!(typeof(r==null?void 0:r.answer)=="string"&&typeof r.score=="number"))throw new Z("Expected Array<{answer: string, score: number}>");return r}const O=e=>a.jsx("button",{className:`${e.variant==="secondary"?"border-4 border-yellow-200":"bg-yellow-200"} py-6 text-center w-full ${e.disabled?"cursor-not-allowed opacity-50":""}`,disabled:e.disabled??!1,onClick:e.onClick,children:e.label??"Submit"}),Ac=e=>a.jsxs("div",{className:"w-full",children:[a.jsx("p",{className:"text-xl",children:e.label??"Input"}),e.input?a.jsx("audio",{className:"w-full",controls:!0,src:URL.createObjectURL(e.input)}):a.jsxs("label",{className:"bg-yellow-200 block cursor-pointer py-6 text-center w-full",children:["No file chosen",a.jsx("input",{accept:"audio/*",className:"hidden",onChange:t=>{t.target.files&&t.target.files[0]&&e.setInput(t.target.files[0])},type:"file"})]})]}),z=e=>{const t=(()=>{try{return JSON.stringify(e.output,void 0,2)}catch(n){if(n instanceof Error)return`Error during 
JSON.stringify: ${n.message}`}})();return a.jsxs("div",{className:"w-full",children:[a.jsx("p",{className:"text-xl",children:e.label??"Output"}),a.jsx("pre",{className:`bg-yellow-200 break-words p-6 select-text w-full whitespace-pre-wrap ${e.disabled?"cursor-wait opacity-50":""}`,children:t})]})},Np="audio-classification",Tp=e=>{const[t,n]=m.useState(),[r,l]=m.useState(!1),[i,o]=m.useState(),[u,s]=m.useState(),c=()=>{n(void 0),o(void 0),s(void 0)},h=async()=>{if(t){l(!0);try{const f=await mc({data:t,model:e.model});s(f)}catch(f){f instanceof Error&&o(f)}finally{l(!1)}}};return a.jsxs(m.Fragment,{children:[a.jsx(Ac,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t,onClick:c,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&i?a.jsx(z,{disabled:r,output:i}):a.jsx(m.Fragment,{}),!i&&u?u.map(f=>a.jsx(z,{disabled:r,output:f},f.label)):a.jsx(m.Fragment,{})]})},Op="automatic-speech-recognition",Pp=e=>{const[t,n]=m.useState(),[r,l]=m.useState(!1),[i,o]=m.useState(),[u,s]=m.useState(),c=()=>{n(void 0),o(void 0),s(void 0)},h=async()=>{if(t){l(!0);try{const f=await hc({data:t,model:e.model});s(f)}catch(f){f instanceof Error&&o(f)}finally{l(!1)}}};return a.jsxs(m.Fragment,{children:[a.jsx(Ac,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t,onClick:c,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&i?a.jsx(z,{disabled:r,output:i}):a.jsx(m.Fragment,{}),!i&&u?a.jsx(z,{disabled:r,output:u}):a.jsx(m.Fragment,{})]})},ee=e=>{const t=m.useRef(null);return m.useLayoutEffect(()=>{t.current&&(t.current.style.height="inherit",t.current.style.height=`${t.current.scrollHeight}px`)},[e.input]),a.jsxs("div",{className:"w-full",children:[a.jsx("p",{className:"text-xl",children:e.label??"Input"}),a.jsx("textarea",{className:"bg-yellow-200 py-6 resize-none text-center w-full",disabled:e.disabled??!1,onChange:n=>{!e.disabled&&e.setInput&&(n.target.value?e.setInput(n.target.value):e.setInput(""))},ref:t,rows:1,style:{height:t.current?`${t.current.scrollHeight}px`:"inherit"},value:e.input??""})]})},zp="conversational",Lp=e=>{const[t,n]=m.useState(),[r,l]=m.useState(!1),[i,o]=m.useState(),[u,s]=m.useState(),c=()=>{n(void 0),o(void 0),s(void 0)},h=()=>{t&&(l(!0),s(f=>f?{...f,conversation:{...f.conversation,past_user_inputs:[...f.conversation.past_user_inputs,t]}}:{conversation:{generated_responses:[],past_user_inputs:[t]},generated_text:"",warnings:[]}),n(void 0),xc({inputs:{generated_responses:u==null?void 0:u.conversation.generated_responses,past_user_inputs:u==null?void 0:u.conversation.past_user_inputs,text:t},model:e.model}).then(s).catch(o).finally(()=>l(!1)))};return a.jsxs(m.Fragment,{children:[a.jsx(ee,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t&&!u,onClick:c,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&i?a.jsx(z,{disabled:r,output:i}):a.jsx(m.Fragment,{}),!i&&u?Array.from({length:Math.max(u.conversation.generated_responses.length,u.conversation.past_user_inputs.length)}).map((f,v,g)=>a.jsxs(m.Fragment,{children:[u.conversation.generated_responses[g.length-v-1]?a.jsx(z,{disabled:r,label:`Output - Generated Response #${g.length-v}`,output:u.conversation.generated_responses[g.length-v-1]}):a.jsx(m.Fragment,{}),u.conversation.past_user_inputs[g.length-v-1]?a.jsx(ee,{disabled:!0,label:`Output - Past User Input 
#${g.length-v}`,input:u.conversation.past_user_inputs[g.length-v-1]}):a.jsx(m.Fragment,{})]},v)):a.jsx(m.Fragment,{})]})},pn=e=>a.jsxs("div",{className:"w-full",children:[a.jsx("p",{className:"text-xl",children:e.label??"Input"}),e.input?a.jsx("img",{className:"w-full",src:URL.createObjectURL(e.input)}):a.jsxs("label",{className:"bg-yellow-200 block cursor-pointer py-6 text-center w-full",children:["No file chosen",a.jsx("input",{accept:"image/*",className:"hidden",onChange:t=>{t.target.files&&t.target.files[0]&&e.setInput(t.target.files[0])},type:"file"})]})]}),Ip="document-question-answering",Fp=e=>{const[t,n]=m.useState(),[r,l]=m.useState(),[i,o]=m.useState(!1),[u,s]=m.useState(),[c,h]=m.useState(),f=()=>{n(void 0),l(void 0),s(void 0),h(void 0)},v=async()=>{if(t&&r){o(!0);try{const g=await Fc({inputs:{question:t,image:r},model:e.model});h(g)}catch(g){g instanceof Error&&s(g)}finally{o(!1)}}};return a.jsxs(m.Fragment,{children:[a.jsx(ee,{input:t,label:"Input - Question",setInput:n}),a.jsx(pn,{input:r,label:"Input - Image",setInput:l}),a.jsx(O,{label:"Clear",disabled:i||!r,onClick:f,variant:"secondary"}),a.jsx(O,{disabled:i||!r,onClick:v}),!i&&u?a.jsx(z,{disabled:i,output:u}):a.jsx(m.Fragment,{}),!u&&c?a.jsx(z,{disabled:i,output:c}):a.jsx(m.Fragment,{})]})},Rp="feature-extraction",Ap=e=>{const[t,n]=m.useState(),[r,l]=m.useState(!1),[i,o]=m.useState(),[u,s]=m.useState(),c=()=>{n(void 0),o(void 0),s(void 0)},h=async()=>{if(t){l(!0);try{const f=await kc({inputs:t,model:e.model});s(f)}catch(f){f instanceof Error&&o(f)}finally{l(!1)}}};return a.jsxs(m.Fragment,{children:[a.jsx(ee,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t,onClick:c,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&i?a.jsx(z,{disabled:r,output:i}):a.jsx(m.Fragment,{}),!i&&u?a.jsx(z,{disabled:r,output:u}):a.jsx(m.Fragment,{})]})},Mp="fill-mask",Dp=e=>{const[t,n]=m.useState(),[r,l]=m.useState(!1),[i,o]=m.useState(),[u,s]=m.useState(),c=()=>{n(void 0),o(void 0),s(void 0)},h=async()=>{if(t){l(!0);try{const f=await Ec({inputs:t,model:e.model});s(f)}catch(f){f instanceof Error&&o(f)}finally{l(!1)}}};return a.jsxs(m.Fragment,{children:[a.jsx(ee,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t,onClick:c,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&i?a.jsx(z,{disabled:r,output:i}):a.jsx(m.Fragment,{}),!i&&u?u.map(f=>a.jsx(z,{disabled:r,output:f},f.token_str)):a.jsx(m.Fragment,{})]})},$p="image-classification",Up=e=>{const[t,n]=m.useState(),[r,l]=m.useState(!1),[i,o]=m.useState(),[u,s]=m.useState(),c=()=>{n(void 0),o(void 0),s(void 0)},h=async()=>{if(t){l(!0);try{const f=await yc({data:t,model:e.model});s(f)}catch(f){f instanceof Error&&o(f)}finally{l(!1)}}};return a.jsxs(m.Fragment,{children:[a.jsx(pn,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t,onClick:c,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&i?a.jsx(z,{disabled:r,output:i}):a.jsx(m.Fragment,{}),!i&&u?u.map(f=>a.jsx(z,{disabled:r,output:f},f.label)):a.jsx(m.Fragment,{})]})},Vp="image-segmentation",Bp=e=>{const[t,n]=m.useState(),[r,l]=m.useState(!1),[i,o]=m.useState(),[u,s]=m.useState(),c=()=>{n(void 0),o(void 0),s(void 0)},h=async()=>{if(t){l(!0);try{const f=await vc({data:t,model:e.model});s(f)}catch(f){f instanceof Error&&o(f)}finally{l(!1)}}};return 
a.jsxs(m.Fragment,{children:[a.jsx(pn,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t,onClick:c,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&i?a.jsx(z,{disabled:r,output:i}):a.jsx(m.Fragment,{}),!i&&u?u.map(f=>a.jsx(z,{disabled:r,output:f},f.label)):a.jsx(m.Fragment,{})]})},Qp="image-to-text",Hp=e=>{const[t,n]=m.useState(),[r,l]=m.useState(!1),[i,o]=m.useState(),[u,s]=m.useState(),c=()=>{n(void 0),o(void 0),s(void 0)},h=async()=>{if(t){l(!0);try{const f=await gc({data:t,model:e.model});s(f)}catch(f){f instanceof Error&&o(f)}finally{l(!1)}}};return a.jsxs(m.Fragment,{children:[a.jsx(pn,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t,onClick:c,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&i?a.jsx(z,{disabled:r,output:i}):a.jsx(m.Fragment,{}),!i&&u?a.jsx(z,{disabled:r,output:u}):a.jsx(m.Fragment,{})]})},Wp="object-detection",Kp=e=>{const[t,n]=m.useState(),[r,l]=m.useState(!1),[i,o]=m.useState(),[u,s]=m.useState(),c=()=>{n(void 0),o(void 0),s(void 0)},h=async()=>{if(t){l(!0);try{const f=await wc({data:t,model:e.model});s(f)}catch(f){f instanceof Error&&o(f)}finally{l(!1)}}};return a.jsxs(m.Fragment,{children:[a.jsx(pn,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t,onClick:c,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&i?a.jsx(z,{disabled:r,output:i}):a.jsx(m.Fragment,{}),!i&&u?u.map(f=>a.jsx(z,{disabled:r,output:f},f.label)):a.jsx(m.Fragment,{})]})},Yp="question-answering",Xp=e=>{const[t,n]=m.useState(),[r,l]=m.useState(),[i,o]=m.useState(!1),[u,s]=m.useState(),[c,h]=m.useState(),f=()=>{n(void 0),l(void 0),s(void 0),h(void 0)},v=async()=>{if(t&&r){o(!0);try{const g=await Cc({inputs:{question:t,context:r},model:e.model});h(g)}catch(g){g instanceof Error&&s(g)}finally{o(!1)}}};return a.jsxs(m.Fragment,{children:[a.jsx(ee,{input:t,label:"Input - Question",setInput:n}),a.jsx(ee,{input:r,label:"Input - Context",setInput:l}),a.jsx(O,{label:"Clear",disabled:i||!t||!r,onClick:f,variant:"secondary"}),a.jsx(O,{disabled:i||!t||!r,onClick:v}),!i&&u?a.jsx(z,{disabled:i,output:u}):a.jsx(m.Fragment,{}),!u&&c?a.jsx(z,{disabled:i,output:c}):a.jsx(m.Fragment,{})]})},Gp="sentence-similarity",Zp=e=>{const[t,n]=m.useState(),r=Array.from({length:2}).map(()=>{}),[l,i]=m.useState(r),[o,u]=m.useState(!1),[s,c]=m.useState(),[h,f]=m.useState(),v=()=>{n(void 0),i(r),c(void 0),f(void 0)},g=async()=>{if(t&&l.every(Boolean)){u(!0);try{const w=await jc({inputs:{source_sentence:t,sentences:l},model:e.model});f(w)}catch(w){w instanceof Error&&c(w)}finally{u(!1)}}};return a.jsxs(m.Fragment,{children:[a.jsx(ee,{input:t,label:"Input - Source Sentence",setInput:n}),l.map((w,k)=>a.jsx(ee,{input:w,label:`Input - Sentence #${k+1}`,setInput:M=>i(p=>[...p.slice(0,k),M,...p.slice(k+1,p.length)])})),a.jsx(O,{disabled:o||!t||!l.every(Boolean),label:"Add Sentence",onClick:()=>i(w=>[...w,void 0])}),a.jsx(O,{disabled:o||!t||!l.every(Boolean),label:"Clear",onClick:v,variant:"secondary"}),a.jsx(O,{disabled:o||!t||!l.every(Boolean),onClick:g}),!o&&s?a.jsx(z,{disabled:o,output:s}):a.jsx(m.Fragment,{}),!s&&h?h.map((w,k)=>a.jsx(z,{disabled:o,label:`Output - Sentence #${k+1}`,output:w})):a.jsx(m.Fragment,{})]})},qp="summarization",Jp=e=>{const[t,n]=m.useState(),[r,l]=m.useState(!1),[i,o]=m.useState(),[u,s]=m.useState(),c=()=>{n(void 0),o(void 0),s(void 0)},h=async()=>{if(t){l(!0);try{const f=await _c({inputs:t,model:e.model});s(f)}catch(f){f instanceof Error&&o(f)}finally{l(!1)}}};return 
a.jsxs(m.Fragment,{children:[a.jsx(ee,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t,onClick:c,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&i?a.jsx(z,{disabled:r,output:i}):a.jsx(m.Fragment,{}),!i&&u?a.jsx(z,{disabled:r,output:u}):a.jsx(m.Fragment,{})]})},bp=async e=>{const t=await e.text();try{const n=JSON.parse(t);try{return JSON.stringify(n,void 0,2)}catch(r){if(r instanceof Error)return`Error during JSON.stringify: ${r.message}`}}catch(n){if(n instanceof Error)return`Error during JSON.parse: ${n.message}`}},em=e=>{const[t,n]=m.useState();return m.useEffect(()=>{e.input&&bp(e.input).then(n)},[e.input]),a.jsxs("div",{className:"w-full",children:[a.jsx("p",{className:"text-xl",children:e.label??"Input"}),e.input?a.jsx("pre",{className:"bg-yellow-200 break-words p-6 select-text w-full whitespace-pre-wrap",children:t}):a.jsxs("label",{className:"bg-yellow-200 block cursor-pointer py-6 text-center w-full",children:["No file chosen",a.jsx("input",{accept:".json",className:"hidden",onChange:r=>{r.target.files&&r.target.files[0]&&e.setInput(r.target.files[0])},type:"file"})]})]})},tm="table-question-answering",nm=e=>{const[t,n]=m.useState(),[r,l]=m.useState(),[i,o]=m.useState(!1),[u,s]=m.useState(),[c,h]=m.useState(),f=()=>{n(void 0),l(void 0),s(void 0),h(void 0)},v=async()=>{if(t&&r){o(!0);try{const g=await Nc({inputs:{query:t,table:JSON.parse(await r.text()??"{}")},model:e.model});h(g)}catch(g){g instanceof Error&&s(g)}finally{o(!1)}}};return a.jsxs(m.Fragment,{children:[a.jsx(ee,{input:t,label:"Input - Query",setInput:n}),a.jsx(em,{input:r,label:"Input - Table",setInput:l}),a.jsx(O,{label:"Clear",disabled:i||!t,onClick:f,variant:"secondary"}),a.jsx(O,{disabled:i||!t,onClick:v}),!i&&u?a.jsx(z,{disabled:i,output:u}):a.jsx(m.Fragment,{}),!u&&c?a.jsx(z,{disabled:i,output:c}):a.jsx(m.Fragment,{})]})},rm="text-classification",lm=e=>{const[t,n]=m.useState(),[r,l]=m.useState(!1),[i,o]=m.useState(),[u,s]=m.useState(),c=()=>{n(void 0),o(void 0),s(void 0)},h=async()=>{if(t){l(!0);try{const f=await Tc({inputs:t,model:e.model});s(f)}catch(f){f instanceof Error&&o(f)}finally{l(!1)}}};return a.jsxs(m.Fragment,{children:[a.jsx(ee,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t,onClick:c,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&i?a.jsx(z,{disabled:r,output:i}):a.jsx(m.Fragment,{}),!i&&u?u.map(f=>a.jsx(z,{disabled:r,output:f},f.label)):a.jsx(m.Fragment,{})]})},im="text-generation",om=e=>{const[t,n]=m.useState(),[r,l]=m.useState(!1),[i,o]=m.useState(),[u,s]=m.useState(),c=()=>{n(void 0),o(void 0),s(void 0)},h=async()=>{if(t){l(!0);try{const f=await Oc({inputs:t,model:e.model});s(f)}catch(f){f instanceof Error&&o(f)}finally{l(!1)}}};return a.jsxs(m.Fragment,{children:[a.jsx(ee,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t,onClick:c,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&i?a.jsx(z,{disabled:r,output:i}):a.jsx(m.Fragment,{}),!i&&u?a.jsx(z,{disabled:r,output:u}):a.jsx(m.Fragment,{})]})},um=e=>a.jsxs("div",{className:"w-full",children:[a.jsx("p",{className:"text-xl",children:e.label??"Output"}),a.jsx("img",{className:`w-full ${e.disabled?"cursor-wait opacity-50":""}`,src:URL.createObjectURL(e.output)})]}),sm="text-to-image",am=e=>{const[t,n]=m.useState(),[r,l]=m.useState(!1),[i,o]=m.useState(),[u,s]=m.useState(),c=()=>{n(void 0),o(void 0),s(void 0)},h=async()=>{if(t){l(!0);try{const f=await Sc({inputs:t,model:e.model});s(f)}catch(f){f instanceof Error&&o(f)}finally{l(!1)}}};return 
a.jsxs(m.Fragment,{children:[a.jsx(ee,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t,onClick:c,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&i?a.jsx(z,{disabled:r,output:i}):a.jsx(m.Fragment,{}),!i&&u?a.jsx(um,{disabled:r,output:u}):a.jsx(m.Fragment,{})]})},cm="token-classification",fm=e=>{const[t,n]=m.useState(),[r,l]=m.useState(!1),[i,o]=m.useState(),[u,s]=m.useState(),c=()=>{n(void 0),o(void 0),s(void 0)},h=async()=>{if(t){l(!0);try{const f=await Pc({inputs:t,model:e.model});s(f)}catch(f){f instanceof Error&&o(f)}finally{l(!1)}}};return a.jsxs(m.Fragment,{children:[a.jsx(ee,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t,onClick:c,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&i?a.jsx(z,{disabled:r,output:i}):a.jsx(m.Fragment,{}),!i&&u?u.map(f=>a.jsx(z,{disabled:r,output:f},f.word)):a.jsx(m.Fragment,{})]})},dm="translation",pm=e=>{const[t,n]=m.useState(),[r,l]=m.useState(!1),[i,o]=m.useState(),[u,s]=m.useState(),c=()=>{n(void 0),o(void 0),s(void 0)},h=async()=>{if(t){l(!0);try{const f=await zc({inputs:t,model:e.model});s(f)}catch(f){f instanceof Error&&o(f)}finally{l(!1)}}};return a.jsxs(m.Fragment,{children:[a.jsx(ee,{input:t,setInput:n}),a.jsx(O,{label:"Clear",disabled:r||!t,onClick:c,variant:"secondary"}),a.jsx(O,{disabled:r||!t,onClick:h}),!r&&i?a.jsx(z,{disabled:r,output:i}):a.jsx(m.Fragment,{}),!i&&u?a.jsx(z,{disabled:r,output:u}):a.jsx(m.Fragment,{})]})},mm="visual-question-answering",hm=e=>{const[t,n]=m.useState(),[r,l]=m.useState(),[i,o]=m.useState(!1),[u,s]=m.useState(),[c,h]=m.useState(),f=()=>{n(void 0),l(void 0),s(void 0),h(void 0)},v=async()=>{if(t&&r){o(!0);try{const g=await Rc({inputs:{question:t,image:r},model:e.model});h(g)}catch(g){g instanceof Error&&s(g)}finally{o(!1)}}};return a.jsxs(m.Fragment,{children:[a.jsx(ee,{input:t,label:"Input - Question",setInput:n}),a.jsx(pn,{input:r,label:"Input - Image",setInput:l}),a.jsx(O,{label:"Clear",disabled:i||!r,onClick:f,variant:"secondary"}),a.jsx(O,{disabled:i||!r,onClick:v}),!i&&u?a.jsx(z,{disabled:i,output:u}):a.jsx(m.Fragment,{}),!u&&c?a.jsx(z,{disabled:i,output:c}):a.jsx(m.Fragment,{})]})},ym="zero-shot-classification",vm=e=>{const[t,n]=m.useState(),r=Array.from({length:2}).map(()=>{}),[l,i]=m.useState(r),[o,u]=m.useState(!1),[s,c]=m.useState(),[h,f]=m.useState(),v=()=>{n(void 0),i(r),c(void 0),f(void 0)},g=async()=>{if(t&&l.every(Boolean)){u(!0);try{const w=await Lc({inputs:t,model:e.model,parameters:{candidate_labels:l}});f(w)}catch(w){w instanceof Error&&c(w)}finally{u(!1)}}};return a.jsxs(m.Fragment,{children:[a.jsx(ee,{input:t,setInput:n}),l.map((w,k)=>a.jsx(ee,{input:w,label:`Parameter - Candidate Label #${k+1}`,setInput:M=>i(p=>[...p.slice(0,k),M,...p.slice(k+1,p.length)])})),a.jsx(O,{disabled:o||!t||!l.every(Boolean),label:"Add Candidate Label",onClick:()=>i(w=>[...w,void 0])}),a.jsx(O,{disabled:o||!t||!l.every(Boolean),label:"Clear",onClick:v,variant:"secondary"}),a.jsx(O,{disabled:o||!t||!l.every(Boolean),onClick:g}),!o&&s?a.jsx(z,{disabled:o,output:s}):a.jsx(m.Fragment,{}),!s&&h?h.map((w,k)=>a.jsx(z,{disabled:o,output:w})):a.jsx(m.Fragment,{})]})},gm=[Np,Op,zp,Ip,Rp,Mp,$p,Vp,Qp,Wp,Yp,Gp,qp,tm,rm,im,sm,cm,dm,mm,ym],wm=e=>{if(!e.model||!e.task)return a.jsx(m.Fragment,{});switch(e.task){case"audio-classification":return a.jsx(Tp,{model:e.model});case"automatic-speech-recognition":return a.jsx(Pp,{model:e.model});case"conversational":return a.jsx(Lp,{model:e.model});case"document-question-answering":return 
a.jsx(Fp,{model:e.model});case"feature-extraction":return a.jsx(Ap,{model:e.model});case"fill-mask":return a.jsx(Dp,{model:e.model});case"image-classification":return a.jsx(Up,{model:e.model});case"image-segmentation":return a.jsx(Bp,{model:e.model});case"image-to-text":return a.jsx(Hp,{model:e.model});case"object-detection":return a.jsx(Kp,{model:e.model});case"question-answering":return a.jsx(Xp,{model:e.model});case"sentence-similarity":return a.jsx(Zp,{model:e.model});case"summarization":return a.jsx(Jp,{model:e.model});case"table-question-answering":return a.jsx(nm,{model:e.model});case"text-classification":return a.jsx(lm,{model:e.model});case"text-generation":return a.jsx(om,{model:e.model});case"text-to-image":return a.jsx(am,{model:e.model});case"token-classification":return a.jsx(fm,{model:e.model});case"translation":return a.jsx(pm,{model:e.model});case"visual-question-answering":return a.jsx(hm,{model:e.model});case"zero-shot-classification":return a.jsx(vm,{model:e.model});default:return a.jsx(m.Fragment,{})}},Sm=e=>a.jsxs("div",{className:"w-full",children:[a.jsx("p",{className:"text-xl",children:"Task"}),a.jsxs("select",{className:"bg-yellow-200 cursor-pointer py-6 text-center w-full",onChange:t=>e.onTaskSelect(t.target.value),placeholder:"Select a task",value:e.task,children:[a.jsx("option",{children:"Select a task"}),gm.map(t=>a.jsx("option",{value:t,children:t},t))]})]}),Jl={},xm=async e=>{if(Jl[e])return Jl[e];const t=[];for await(const n of gp({search:{task:e}}))t.push(n);return t.sort((n,r)=>n.downloads>r.downloads?-1:n.downloads<r.downloads?1:n.likes>r.likes?-1:n.likes<r.likes?1:n.name>r.name?-1:n.name<r.name?1:0),Jl[e]=t,t},km=e=>{const[t,n]=m.useState(!1),[r,l]=m.useState([]);return m.useEffect(()=>{l([]),e.task&&(n(!0),xm(e.task).then(i=>l(i)).finally(()=>n(!1)))},[e.task]),r.length>0?a.jsxs("div",{className:"w-full",children:[a.jsx("p",{className:"text-xl",children:"Model"}),a.jsxs("select",{className:"bg-yellow-200 cursor-pointer py-6 text-center w-full",onChange:i=>e.onModelSelect(i.target.value),placeholder:"Select a model",value:e.model,children:[a.jsx("option",{children:"Select a model"}),r.map(i=>a.jsx("option",{value:i.name,children:i.name},i.name))]}),e.model?a.jsx("div",{className:"font-bold py-6 text-center text-yellow-200",children:a.jsx("a",{href:`https://huggingface.co/${e.model}`,rel:"noopener noferrer",target:"_blank",children:"View model on 🤗"})}):a.jsx(m.Fragment,{})]}):a.jsx("p",{className:"text-center w-full",children:e.task?t?"Loading models for this task":"No models available for this task":"Select a task to view available models"})},Em=()=>{const[e,t]=m.useState(),[n,r]=m.useState(),l=i=>{r(void 0),t(i)};return a.jsx("div",{className:"bg-yellow-500 flex flex-col h-full items-center min-h-screen min-w-screen overflow-auto w-full",children:a.jsxs("div",{className:"flex flex-col items-center justify-center py-24 space-y-12 w-2/3 lg:w-1/3",children:[a.jsx("header",{className:"text-center text-6xl",children:"🤗"}),a.jsx(Sm,{onTaskSelect:l,task:e}),a.jsx(km,{model:n,onModelSelect:r,task:e}),a.jsx(wm,{model:n,task:e})]})})};const Cm=()=>{const e="root",t=document.getElementById(e);if(t){const n=dc(t),r=a.jsx(m.StrictMode,{children:a.jsx(Em,{})});n.render(r)}};Cm(); diff --git a/spaces/amin2809/rvc-models2023/app.py b/spaces/amin2809/rvc-models2023/app.py deleted file mode 100644 index 3eea1979c8f7338d48722ad8ef74271a77fd86b2..0000000000000000000000000000000000000000 --- a/spaces/amin2809/rvc-models2023/app.py +++ /dev/null @@ -1,188 +0,0 @@ -import 
os
-import json
-import argparse
-import traceback
-import logging
-import gradio as gr
-import numpy as np
-import librosa
-import torch
-import asyncio
-import edge_tts
-from datetime import datetime
-from fairseq import checkpoint_utils
-from infer_pack.models import SynthesizerTrnMs256NSFsid, SynthesizerTrnMs256NSFsid_nono
-from vc_infer_pipeline import VC
-from config import (
-    is_half,
-    device
-)
-logging.getLogger("numba").setLevel(logging.WARNING)
-limitation = os.getenv("SYSTEM") == "spaces" # limit audio length in huggingface spaces
-
-def create_vc_fn(tgt_sr, net_g, vc, if_f0, file_index, file_big_npy):
-    def vc_fn(
-        input_audio,
-        f0_up_key,
-        f0_method,
-        index_rate,
-        tts_mode,
-        tts_text,
-        tts_voice
-    ):
-        try:
-            if tts_mode:
-                if len(tts_text) > 100 and limitation:
-                    return "Text is too long", None
-                if tts_text is None or tts_voice is None:
-                    return "You need to enter text and select a voice", None
-                asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3"))
-                audio, sr = librosa.load("tts.mp3", sr=16000, mono=True)
-            else:
-                if args.files:
-                    audio, sr = librosa.load(input_audio, sr=16000, mono=True)
-                else:
-                    if input_audio is None:
-                        return "You need to upload an audio", None
-                    sampling_rate, audio = input_audio
-                    duration = audio.shape[0] / sampling_rate
-                    if duration > 60 and limitation:
-                        return "Please upload an audio file that is less than 2000 seconds. If you need to generate a longer audio file, please use Colab.", None
-                    audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)
-                    if len(audio.shape) > 1:
-                        audio = librosa.to_mono(audio.transpose(1, 0))
-                    if sampling_rate != 16000:
-                        audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
-            times = [0, 0, 0]
-            f0_up_key = int(f0_up_key)
-            audio_opt = vc.pipeline(
-                hubert_model,
-                net_g,
-                0,
-                audio,
-                times,
-                f0_up_key,
-                f0_method,
-                file_index,
-                file_big_npy,
-                index_rate,
-                if_f0,
-            )
-            print(
-                f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s"
-            )
-            return "Success", (tgt_sr, audio_opt)
-        except:
-            info = traceback.format_exc()
-            print(info)
-            return info, (None, None)
-    return vc_fn
-
-def load_hubert():
-    global hubert_model
-    models, _, _ = checkpoint_utils.load_model_ensemble_and_task(
-        ["hubert_base.pt"],
-        suffix="",
-    )
-    hubert_model = models[0]
-    hubert_model = hubert_model.to(device)
-    if is_half:
-        hubert_model = hubert_model.half()
-    else:
-        hubert_model = hubert_model.float()
-    hubert_model.eval()
-
-def change_to_tts_mode(tts_mode):
-    if tts_mode:
-        return gr.Audio.update(visible=False), gr.Textbox.update(visible=True), gr.Dropdown.update(visible=True)
-    else:
-        return gr.Audio.update(visible=True), gr.Textbox.update(visible=False), gr.Dropdown.update(visible=False)
-
-if __name__ == '__main__':
-    parser = argparse.ArgumentParser()
-    parser.add_argument('--api', action="store_true", default=False)
-    parser.add_argument("--share", action="store_true", default=False, help="share gradio app")
-    parser.add_argument("--files", action="store_true", default=False, help="load audio from path")
-    args, unknown = parser.parse_known_args()
-    load_hubert()
-    models = []
-    tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices())
-    voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list]
-    with open("weights/model_info.json", "r", encoding="utf-8") as f:
-        models_info = json.load(f)
-    for name, info in models_info.items():
-        if not info['enable']:
-            continue
-        title = info['title']
-        author = info.get("author", None)
-        cover = f"weights/{name}/{info['cover']}"
-        index = f"weights/{name}/{info['feature_retrieval_library']}"
-        npy = f"weights/{name}/{info['feature_file']}"
-        cpt = torch.load(f"weights/{name}/{name}.pth", map_location="cpu")
-        tgt_sr = cpt["config"][-1]
-        cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
-        if_f0 = cpt.get("f0", 1)
-        if if_f0 == 1:
-            net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half)
-        else:
-            net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
-        del net_g.enc_q
-        print(net_g.load_state_dict(cpt["weight"], strict=False)) # the state dict is not cleaned up properly without this line, oddly enough
-        net_g.eval().to(device)
-        if is_half:
-            net_g = net_g.half()
-        else:
-            net_g = net_g.float()
-        vc = VC(tgt_sr, device, is_half)
-        models.append((name, title, author, cover, create_vc_fn(tgt_sr, net_g, vc, if_f0, index, npy)))
-    with gr.Blocks() as app:
-        gr.Markdown(
-            "# <center> RVC Models\n"
-            "## <center> The input audio should be clean and pure voice without background music.\n"
-            "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=ardha27.Rvc-Models)\n\n"
-            "[![image](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/12rbZk9CoXD1m84dqBW5IKMBjiVY6tcoj?usp=share_link)\n\n"
-            "[![Duplicate this Space](https://huggingface.co/datasets/huggingface/badges/raw/main/duplicate-this-space-sm-dark.svg)](https://huggingface.co/spaces/ardha27pi/rvc-models?duplicate=true)\n\n"
-            "[![Train Own Voice](https://badgen.net/badge/icon/github?icon=github&label=Train%20Voice)](https://github.com/ardha27/AI-Song-Cover-RVC)\n\n"
-            "[![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/R6R7AH1FA)\n\n"
-        )
-        with gr.Tabs():
-            for (name, title, author, cover, vc_fn) in models:
-                with gr.TabItem(name):
-                    with gr.Row():
-                        gr.Markdown(
-                            '<div align="center">'
-                            f'<div>{title}</div>\n'+
-                            (f'<div>Model author: {author}</div>' if author else "")+
-                            (f'<img style="width:auto;height:300px;" src="file/{cover}">' if cover else "")+
-                            '</div>'
-                        )
-                    with gr.Row():
-                        with gr.Column():
-                            if args.files:
-                                vc_input = gr.Textbox(label="Input audio path")
-                            else:
-                                vc_input = gr.Audio(label="Input audio"+' (less than 20 seconds)' if limitation else '')
-                            vc_transpose = gr.Number(label="Transpose", value=0)
-                            vc_f0method = gr.Radio(
-                                label="Pitch extraction algorithm, PM is fast but Harvest is better for low frequencies",
-                                choices=["pm", "harvest"],
-                                value="pm",
-                                interactive=True,
-                            )
-                            vc_index_ratio = gr.Slider(
-                                minimum=0,
-                                maximum=1,
-                                label="Retrieval feature ratio",
-                                value=0.6,
-                                interactive=True,
-                            )
-                            tts_mode = gr.Checkbox(label="tts (use edge-tts as input)", value=False)
-                            tts_text = gr.Textbox(visible=False,label="TTS text (100 words limitation)" if limitation else "TTS text")
-                            tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female")
-                            vc_submit = gr.Button("Generate", variant="primary")
-                        with gr.Column():
-                            vc_output1 = gr.Textbox(label="Output Message")
-                            vc_output2 = gr.Audio(label="Output Audio")
-                        vc_submit.click(vc_fn, [vc_input, vc_transpose, vc_f0method, vc_index_ratio, tts_mode, tts_text, tts_voice], [vc_output1, vc_output2])
-                        tts_mode.change(change_to_tts_mode, [tts_mode], [vc_input, tts_text, tts_voice])
-        app.queue(concurrency_count=1, max_size=20, api_open=args.api).launch(share=args.share)
\ No newline at end of file
a/spaces/amin2809/rvc-models2023/infer_pack/models_onnx.py b/spaces/amin2809/rvc-models2023/infer_pack/models_onnx.py deleted file mode 100644 index 3cdae2f7f8591a1e43b1d8520baa37b7e9744d72..0000000000000000000000000000000000000000 --- a/spaces/amin2809/rvc-models2023/infer_pack/models_onnx.py +++ /dev/null @@ -1,849 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder256Sim(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - x = self.proj(x) * x_mask - return x, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - 
n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if 
xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: 
sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source 
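            # Descriptive note (editorial): har_source is the full-rate harmonic
            # excitation built by SourceModuleHnNSF from the F0 contour, so each
            # noise_convs[i] is a strided Conv1d that brings it down to the temporal
            # resolution of the current upsampling stage before mixing it into the
            # feature map (the last stage uses a kernel_size=1 conv, since no further
            # downsampling is needed there).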
- xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, pitch, nsff0, sid, rnd, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class SynthesizerTrnMs256NSFsid_sim(nn.Module): - """ - Synthesizer for Training - """ - - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - # hop_length, - gin_channels=0, - use_sdp=True, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = 
filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256Sim( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - is_half=kwargs["is_half"], - ) - - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, ds, max_len=None - ): # y是spec不需要了现在 - g = self.emb_g(ds.unsqueeze(0)).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - x, x_mask = self.enc_p(phone, pitch, phone_lengths) - x = self.flow(x, x_mask, g=g, reverse=True) - o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() 
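        # Descriptive note (editorial): this period discriminator pads the raw 1-D
        # waveform to a multiple of `period` and views it as a
        # (batch, channels, time // period, period) tensor in forward(), so the
        # (kernel_size, 1) Conv2d stack below compares samples that lie exactly
        # `period` steps apart.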
- self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/aodianyun/panoptic-segment-anything/segment_anything/segment_anything/utils/onnx.py b/spaces/aodianyun/panoptic-segment-anything/segment_anything/segment_anything/utils/onnx.py deleted file mode 100644 index 4297b31291e036700d6ad0b818afb7dd72da3054..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/panoptic-segment-anything/segment_anything/segment_anything/utils/onnx.py +++ /dev/null @@ -1,144 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -from torch.nn import functional as F - -from typing import Tuple - -from ..modeling import Sam -from .amg import calculate_stability_score - - -class SamOnnxModel(nn.Module): - """ - This model should not be called directly, but is used in ONNX export. - It combines the prompt encoder, mask decoder, and mask postprocessing of Sam, - with some functions modified to enable model tracing. Also supports extra - options controlling what information. See the ONNX export script for details. 
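    (In this file those extra options are the constructor flags shown below:
    return_single_mask, use_stability_score and return_extra_metrics.)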
- """ - - def __init__( - self, - model: Sam, - return_single_mask: bool, - use_stability_score: bool = False, - return_extra_metrics: bool = False, - ) -> None: - super().__init__() - self.mask_decoder = model.mask_decoder - self.model = model - self.img_size = model.image_encoder.img_size - self.return_single_mask = return_single_mask - self.use_stability_score = use_stability_score - self.stability_score_offset = 1.0 - self.return_extra_metrics = return_extra_metrics - - @staticmethod - def resize_longest_image_size( - input_image_size: torch.Tensor, longest_side: int - ) -> torch.Tensor: - input_image_size = input_image_size.to(torch.float32) - scale = longest_side / torch.max(input_image_size) - transformed_size = scale * input_image_size - transformed_size = torch.floor(transformed_size + 0.5).to(torch.int64) - return transformed_size - - def _embed_points(self, point_coords: torch.Tensor, point_labels: torch.Tensor) -> torch.Tensor: - point_coords = point_coords + 0.5 - point_coords = point_coords / self.img_size - point_embedding = self.model.prompt_encoder.pe_layer._pe_encoding(point_coords) - point_labels = point_labels.unsqueeze(-1).expand_as(point_embedding) - - point_embedding = point_embedding * (point_labels != -1) - point_embedding = point_embedding + self.model.prompt_encoder.not_a_point_embed.weight * ( - point_labels == -1 - ) - - for i in range(self.model.prompt_encoder.num_point_embeddings): - point_embedding = point_embedding + self.model.prompt_encoder.point_embeddings[ - i - ].weight * (point_labels == i) - - return point_embedding - - def _embed_masks(self, input_mask: torch.Tensor, has_mask_input: torch.Tensor) -> torch.Tensor: - mask_embedding = has_mask_input * self.model.prompt_encoder.mask_downscaling(input_mask) - mask_embedding = mask_embedding + ( - 1 - has_mask_input - ) * self.model.prompt_encoder.no_mask_embed.weight.reshape(1, -1, 1, 1) - return mask_embedding - - def mask_postprocessing(self, masks: torch.Tensor, orig_im_size: torch.Tensor) -> torch.Tensor: - masks = F.interpolate( - masks, - size=(self.img_size, self.img_size), - mode="bilinear", - align_corners=False, - ) - - prepadded_size = self.resize_longest_image_size(orig_im_size, self.img_size) - masks = masks[..., : int(prepadded_size[0]), : int(prepadded_size[1])] - - orig_im_size = orig_im_size.to(torch.int64) - h, w = orig_im_size[0], orig_im_size[1] - masks = F.interpolate(masks, size=(h, w), mode="bilinear", align_corners=False) - return masks - - def select_masks( - self, masks: torch.Tensor, iou_preds: torch.Tensor, num_points: int - ) -> Tuple[torch.Tensor, torch.Tensor]: - # Determine if we should return the multiclick mask or not from the number of points. - # The reweighting is used to avoid control flow. 
- score_reweight = torch.tensor( - [[1000] + [0] * (self.model.mask_decoder.num_mask_tokens - 1)] - ).to(iou_preds.device) - score = iou_preds + (num_points - 2.5) * score_reweight - best_idx = torch.argmax(score, dim=1) - masks = masks[torch.arange(masks.shape[0]), best_idx, :, :].unsqueeze(1) - iou_preds = iou_preds[torch.arange(masks.shape[0]), best_idx].unsqueeze(1) - - return masks, iou_preds - - @torch.no_grad() - def forward( - self, - image_embeddings: torch.Tensor, - point_coords: torch.Tensor, - point_labels: torch.Tensor, - mask_input: torch.Tensor, - has_mask_input: torch.Tensor, - orig_im_size: torch.Tensor, - ): - sparse_embedding = self._embed_points(point_coords, point_labels) - dense_embedding = self._embed_masks(mask_input, has_mask_input) - - masks, scores = self.model.mask_decoder.predict_masks( - image_embeddings=image_embeddings, - image_pe=self.model.prompt_encoder.get_dense_pe(), - sparse_prompt_embeddings=sparse_embedding, - dense_prompt_embeddings=dense_embedding, - ) - - if self.use_stability_score: - scores = calculate_stability_score( - masks, self.model.mask_threshold, self.stability_score_offset - ) - - if self.return_single_mask: - masks, scores = self.select_masks(masks, scores, point_coords.shape[1]) - - upscaled_masks = self.mask_postprocessing(masks, orig_im_size) - - if self.return_extra_metrics: - stability_scores = calculate_stability_score( - upscaled_masks, self.model.mask_threshold, self.stability_score_offset - ) - areas = (upscaled_masks > self.model.mask_threshold).sum(-1).sum(-1) - return upscaled_masks, scores, stability_scores, areas, masks - - return upscaled_masks, scores, masks diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/dateutil/tz/win.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/dateutil/tz/win.py deleted file mode 100644 index cde07ba792c40903f0c334839140173b39fd8124..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/dateutil/tz/win.py +++ /dev/null @@ -1,370 +0,0 @@ -# -*- coding: utf-8 -*- -""" -This module provides an interface to the native time zone data on Windows, -including :py:class:`datetime.tzinfo` implementations. - -Attempting to import this module on a non-Windows platform will raise an -:py:obj:`ImportError`. -""" -# This code was originally contributed by Jeffrey Harris. -import datetime -import struct - -from six.moves import winreg -from six import text_type - -try: - import ctypes - from ctypes import wintypes -except ValueError: - # ValueError is raised on non-Windows systems for some horrible reason. - raise ImportError("Running tzwin on non-Windows system") - -from ._common import tzrangebase - -__all__ = ["tzwin", "tzwinlocal", "tzres"] - -ONEWEEK = datetime.timedelta(7) - -TZKEYNAMENT = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Time Zones" -TZKEYNAME9X = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Time Zones" -TZLOCALKEYNAME = r"SYSTEM\CurrentControlSet\Control\TimeZoneInformation" - - -def _settzkeyname(): - handle = winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE) - try: - winreg.OpenKey(handle, TZKEYNAMENT).Close() - TZKEYNAME = TZKEYNAMENT - except WindowsError: - TZKEYNAME = TZKEYNAME9X - handle.Close() - return TZKEYNAME - - -TZKEYNAME = _settzkeyname() - - -class tzres(object): - """ - Class for accessing ``tzres.dll``, which contains timezone name related - resources. - - .. 
versionadded:: 2.5.0 - """ - p_wchar = ctypes.POINTER(wintypes.WCHAR) # Pointer to a wide char - - def __init__(self, tzres_loc='tzres.dll'): - # Load the user32 DLL so we can load strings from tzres - user32 = ctypes.WinDLL('user32') - - # Specify the LoadStringW function - user32.LoadStringW.argtypes = (wintypes.HINSTANCE, - wintypes.UINT, - wintypes.LPWSTR, - ctypes.c_int) - - self.LoadStringW = user32.LoadStringW - self._tzres = ctypes.WinDLL(tzres_loc) - self.tzres_loc = tzres_loc - - def load_name(self, offset): - """ - Load a timezone name from a DLL offset (integer). - - >>> from dateutil.tzwin import tzres - >>> tzr = tzres() - >>> print(tzr.load_name(112)) - 'Eastern Standard Time' - - :param offset: - A positive integer value referring to a string from the tzres dll. - - .. note:: - - Offsets found in the registry are generally of the form - ``@tzres.dll,-114``. The offset in this case is 114, not -114. - - """ - resource = self.p_wchar() - lpBuffer = ctypes.cast(ctypes.byref(resource), wintypes.LPWSTR) - nchar = self.LoadStringW(self._tzres._handle, offset, lpBuffer, 0) - return resource[:nchar] - - def name_from_string(self, tzname_str): - """ - Parse strings as returned from the Windows registry into the time zone - name as defined in the registry. - - >>> from dateutil.tzwin import tzres - >>> tzr = tzres() - >>> print(tzr.name_from_string('@tzres.dll,-251')) - 'Dateline Daylight Time' - >>> print(tzr.name_from_string('Eastern Standard Time')) - 'Eastern Standard Time' - - :param tzname_str: - A timezone name string as returned from a Windows registry key. - - :return: - Returns the localized timezone string from tzres.dll if the string - is of the form `@tzres.dll,-offset`, else returns the input string. - """ - if not tzname_str.startswith('@'): - return tzname_str - - name_splt = tzname_str.split(',-') - try: - offset = int(name_splt[1]) - except: - raise ValueError("Malformed timezone string.") - - return self.load_name(offset) - - -class tzwinbase(tzrangebase): - """tzinfo class based on win32's timezones available in the registry.""" - def __init__(self): - raise NotImplementedError('tzwinbase is an abstract base class') - - def __eq__(self, other): - # Compare on all relevant dimensions, including name. - if not isinstance(other, tzwinbase): - return NotImplemented - - return (self._std_offset == other._std_offset and - self._dst_offset == other._dst_offset and - self._stddayofweek == other._stddayofweek and - self._dstdayofweek == other._dstdayofweek and - self._stdweeknumber == other._stdweeknumber and - self._dstweeknumber == other._dstweeknumber and - self._stdhour == other._stdhour and - self._dsthour == other._dsthour and - self._stdminute == other._stdminute and - self._dstminute == other._dstminute and - self._std_abbr == other._std_abbr and - self._dst_abbr == other._dst_abbr) - - @staticmethod - def list(): - """Return a list of all time zones known to the system.""" - with winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE) as handle: - with winreg.OpenKey(handle, TZKEYNAME) as tzkey: - result = [winreg.EnumKey(tzkey, i) - for i in range(winreg.QueryInfoKey(tzkey)[0])] - return result - - def display(self): - """ - Return the display name of the time zone. - """ - return self._display - - def transitions(self, year): - """ - For a given year, get the DST on and off transition times, expressed - always on the standard time side. For zones with no transitions, this - function returns ``None``. 
- - :param year: - The year whose transitions you would like to query. - - :return: - Returns a :class:`tuple` of :class:`datetime.datetime` objects, - ``(dston, dstoff)`` for zones with an annual DST transition, or - ``None`` for fixed offset zones. - """ - - if not self.hasdst: - return None - - dston = picknthweekday(year, self._dstmonth, self._dstdayofweek, - self._dsthour, self._dstminute, - self._dstweeknumber) - - dstoff = picknthweekday(year, self._stdmonth, self._stddayofweek, - self._stdhour, self._stdminute, - self._stdweeknumber) - - # Ambiguous dates default to the STD side - dstoff -= self._dst_base_offset - - return dston, dstoff - - def _get_hasdst(self): - return self._dstmonth != 0 - - @property - def _dst_base_offset(self): - return self._dst_base_offset_ - - -class tzwin(tzwinbase): - """ - Time zone object created from the zone info in the Windows registry - - These are similar to :py:class:`dateutil.tz.tzrange` objects in that - the time zone data is provided in the format of a single offset rule - for either 0 or 2 time zone transitions per year. - - :param: name - The name of a Windows time zone key, e.g. "Eastern Standard Time". - The full list of keys can be retrieved with :func:`tzwin.list`. - """ - - def __init__(self, name): - self._name = name - - with winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE) as handle: - tzkeyname = text_type("{kn}\\{name}").format(kn=TZKEYNAME, name=name) - with winreg.OpenKey(handle, tzkeyname) as tzkey: - keydict = valuestodict(tzkey) - - self._std_abbr = keydict["Std"] - self._dst_abbr = keydict["Dlt"] - - self._display = keydict["Display"] - - # See http://ww_winreg.jsiinc.com/SUBA/tip0300/rh0398.htm - tup = struct.unpack("=3l16h", keydict["TZI"]) - stdoffset = -tup[0]-tup[1] # Bias + StandardBias * -1 - dstoffset = stdoffset-tup[2] # + DaylightBias * -1 - self._std_offset = datetime.timedelta(minutes=stdoffset) - self._dst_offset = datetime.timedelta(minutes=dstoffset) - - # for the meaning see the win32 TIME_ZONE_INFORMATION structure docs - # http://msdn.microsoft.com/en-us/library/windows/desktop/ms725481(v=vs.85).aspx - (self._stdmonth, - self._stddayofweek, # Sunday = 0 - self._stdweeknumber, # Last = 5 - self._stdhour, - self._stdminute) = tup[4:9] - - (self._dstmonth, - self._dstdayofweek, # Sunday = 0 - self._dstweeknumber, # Last = 5 - self._dsthour, - self._dstminute) = tup[12:17] - - self._dst_base_offset_ = self._dst_offset - self._std_offset - self.hasdst = self._get_hasdst() - - def __repr__(self): - return "tzwin(%s)" % repr(self._name) - - def __reduce__(self): - return (self.__class__, (self._name,)) - - -class tzwinlocal(tzwinbase): - """ - Class representing the local time zone information in the Windows registry - - While :class:`dateutil.tz.tzlocal` makes system calls (via the :mod:`time` - module) to retrieve time zone information, ``tzwinlocal`` retrieves the - rules directly from the Windows registry and creates an object like - :class:`dateutil.tz.tzwin`. - - Because Windows does not have an equivalent of :func:`time.tzset`, on - Windows, :class:`dateutil.tz.tzlocal` instances will always reflect the - time zone settings *at the time that the process was started*, meaning - changes to the machine's time zone settings during the run of a program - on Windows will **not** be reflected by :class:`dateutil.tz.tzlocal`. - Because ``tzwinlocal`` reads the registry directly, it is unaffected by - this issue. 
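    Example (an editorial sketch, Windows-only; results depend on the machine's
    registry settings, so no literal output is shown):

        >>> from datetime import datetime
        >>> from dateutil.tz import tzwin, tzwinlocal
        >>> now = datetime.now(tz=tzwinlocal())       # local rules from the registry
        >>> eastern = tzwin('Eastern Standard Time')  # any key from tzwin.list()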
- """ - def __init__(self): - with winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE) as handle: - with winreg.OpenKey(handle, TZLOCALKEYNAME) as tzlocalkey: - keydict = valuestodict(tzlocalkey) - - self._std_abbr = keydict["StandardName"] - self._dst_abbr = keydict["DaylightName"] - - try: - tzkeyname = text_type('{kn}\\{sn}').format(kn=TZKEYNAME, - sn=self._std_abbr) - with winreg.OpenKey(handle, tzkeyname) as tzkey: - _keydict = valuestodict(tzkey) - self._display = _keydict["Display"] - except OSError: - self._display = None - - stdoffset = -keydict["Bias"]-keydict["StandardBias"] - dstoffset = stdoffset-keydict["DaylightBias"] - - self._std_offset = datetime.timedelta(minutes=stdoffset) - self._dst_offset = datetime.timedelta(minutes=dstoffset) - - # For reasons unclear, in this particular key, the day of week has been - # moved to the END of the SYSTEMTIME structure. - tup = struct.unpack("=8h", keydict["StandardStart"]) - - (self._stdmonth, - self._stdweeknumber, # Last = 5 - self._stdhour, - self._stdminute) = tup[1:5] - - self._stddayofweek = tup[7] - - tup = struct.unpack("=8h", keydict["DaylightStart"]) - - (self._dstmonth, - self._dstweeknumber, # Last = 5 - self._dsthour, - self._dstminute) = tup[1:5] - - self._dstdayofweek = tup[7] - - self._dst_base_offset_ = self._dst_offset - self._std_offset - self.hasdst = self._get_hasdst() - - def __repr__(self): - return "tzwinlocal()" - - def __str__(self): - # str will return the standard name, not the daylight name. - return "tzwinlocal(%s)" % repr(self._std_abbr) - - def __reduce__(self): - return (self.__class__, ()) - - -def picknthweekday(year, month, dayofweek, hour, minute, whichweek): - """ dayofweek == 0 means Sunday, whichweek 5 means last instance """ - first = datetime.datetime(year, month, 1, hour, minute) - - # This will work if dayofweek is ISO weekday (1-7) or Microsoft-style (0-6), - # Because 7 % 7 = 0 - weekdayone = first.replace(day=((dayofweek - first.isoweekday()) % 7) + 1) - wd = weekdayone + ((whichweek - 1) * ONEWEEK) - if (wd.month != month): - wd -= ONEWEEK - - return wd - - -def valuestodict(key): - """Convert a registry key's values to a dictionary.""" - dout = {} - size = winreg.QueryInfoKey(key)[1] - tz_res = None - - for i in range(size): - key_name, value, dtype = winreg.EnumValue(key, i) - if dtype == winreg.REG_DWORD or dtype == winreg.REG_DWORD_LITTLE_ENDIAN: - # If it's a DWORD (32-bit integer), it's stored as unsigned - convert - # that to a proper signed integer - if value & (1 << 31): - value = value - (1 << 32) - elif dtype == winreg.REG_SZ: - # If it's a reference to the tzres DLL, load the actual string - if value.startswith('@tzres'): - tz_res = tz_res or tzres() - value = tz_res.name_from_string(value) - - value = value.rstrip('\x00') # Remove trailing nulls - - dout[key_name] = value - - return dout diff --git a/spaces/ashutosh1919/quantum-perceptron/gradio_app.py b/spaces/ashutosh1919/quantum-perceptron/gradio_app.py deleted file mode 100644 index f82dcf8a2db9d44018d6cd41552faeae3a8f3d61..0000000000000000000000000000000000000000 --- a/spaces/ashutosh1919/quantum-perceptron/gradio_app.py +++ /dev/null @@ -1,51 +0,0 @@ -import gradio as gr -import matplotlib -import matplotlib.pyplot as plt -from quantum_perceptron import Perceptron - -matplotlib.pyplot.switch_backend('Agg') - - -def run_perceptron( - num_qubits: int, - input_value: int, - weight_value: int, - num_iters: int -): - p = Perceptron(num_qubits, weight_value, input_value) - counts = p.measure_circuit(num_iters) - 
counts.setdefault('0', 0) - counts.setdefault('1', 0) - prob_1 = counts['1'] / num_iters - freq_hist = plt.figure() - plt.bar(counts.keys(), counts.values(), width=0.5) - for i, v in enumerate(list(counts.values())): - plt.text(i, v+10, v) - plt.xlabel('Measured State') - plt.ylabel('Frequency of Measured State') - freq_hist.subplots_adjust(top=0.2) - freq_hist.tight_layout() - return prob_1, freq_hist - - -app_inputs = [ - gr.Slider(1, 9, value=2, step=1, label="Number of Qubits"), - gr.Number(value=12, label="Input Value", precision=0), - gr.Number(value=13, label="Weight Value", precision=0), - gr.Number(value=1000, - label="Number of Measurement Iterations", - precision=0), -] - -app_outputs = [ - gr.Number(precision=2, label="Probability of Firing Perceptron"), - gr.Plot(label="Distribution of Measurement Frequencies") -] - -demo = gr.Interface( - fn=run_perceptron, - inputs=app_inputs, - outputs=app_outputs, - title="Simulate Quantum Perceptron", -) -demo.launch() diff --git a/spaces/asigalov61/Allegro-Music-Transformer/TMIDIX.py b/spaces/asigalov61/Allegro-Music-Transformer/TMIDIX.py deleted file mode 100644 index f023e673586b78b1fb8e337b11d48978343b2a9f..0000000000000000000000000000000000000000 --- a/spaces/asigalov61/Allegro-Music-Transformer/TMIDIX.py +++ /dev/null @@ -1,3202 +0,0 @@ -#! /usr/bin/python3 - - -r'''############################################################################### -################################################################################### -# -# -# Tegridy MIDI X Module (TMIDI X / tee-midi eks) -# Version 1.0 -# -# NOTE: TMIDI X Module starts after the partial MIDI.py module @ line 1342 -# -# Based upon MIDI.py module v.6.7. by Peter Billam / pjb.com.au -# -# Project Los Angeles -# -# Tegridy Code 2021 -# -# https://github.com/Tegridy-Code/Project-Los-Angeles -# -# -################################################################################### -################################################################################### -# Copyright 2021 Project Los Angeles / Tegridy Code -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -################################################################################### -################################################################################### -# -# PARTIAL MIDI.py Module v.6.7. 
by Peter Billam -# Please see TMIDI 2.3/tegridy-tools repo for full MIDI.py module code -# -# Or you can always download the latest full version from: -# https://pjb.com.au/ -# -# Copyright 2020 Peter Billam -# -################################################################################### -###################################################################################''' - -import sys, struct, copy -Version = '6.7' -VersionDate = '20201120' - -_previous_warning = '' # 5.4 -_previous_times = 0 # 5.4 -#------------------------------- Encoding stuff -------------------------- - -def opus2midi(opus=[], text_encoding='ISO-8859-1'): - r'''The argument is a list: the first item in the list is the "ticks" -parameter, the others are the tracks. Each track is a list -of midi-events, and each event is itself a list; see above. -opus2midi() returns a bytestring of the MIDI, which can then be -written either to a file opened in binary mode (mode='wb'), -or to stdout by means of: sys.stdout.buffer.write() - -my_opus = [ - 96, - [ # track 0: - ['patch_change', 0, 1, 8], # and these are the events... - ['note_on', 5, 1, 25, 96], - ['note_off', 96, 1, 25, 0], - ['note_on', 0, 1, 29, 96], - ['note_off', 96, 1, 29, 0], - ], # end of track 0 -] -my_midi = opus2midi(my_opus) -sys.stdout.buffer.write(my_midi) -''' - if len(opus) < 2: - opus=[1000, [],] - tracks = copy.deepcopy(opus) - ticks = int(tracks.pop(0)) - ntracks = len(tracks) - if ntracks == 1: - format = 0 - else: - format = 1 - - my_midi = b"MThd\x00\x00\x00\x06"+struct.pack('>HHH',format,ntracks,ticks) - for track in tracks: - events = _encode(track, text_encoding=text_encoding) - my_midi += b'MTrk' + struct.pack('>I',len(events)) + events - _clean_up_warnings() - return my_midi - - -def score2opus(score=None, text_encoding='ISO-8859-1'): - r''' -The argument is a list: the first item in the list is the "ticks" -parameter, the others are the tracks. Each track is a list -of score-events, and each event is itself a list. A score-event -is similar to an opus-event (see above), except that in a score: - 1) the times are expressed as an absolute number of ticks - from the track's start time - 2) the pairs of 'note_on' and 'note_off' events in an "opus" - are abstracted into a single 'note' event in a "score": - ['note', start_time, duration, channel, pitch, velocity] -score2opus() returns a list specifying the equivalent "opus". 
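For example, in the my_score example below the first 'note' event (start tick 5,
duration 96, channel 1, pitch 25, velocity 96) becomes ['note_on', 5, 1, 25, 96]
followed by ['note_off', 96, 1, 25, 96] in the opus: after sorting by absolute
tick, the note_on sits 5 ticks after the patch_change and the note_off's delta
time equals the 96-tick duration. (A hand-worked illustration, not output
captured from running the code.)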
- -my_score = [ - 96, - [ # track 0: - ['patch_change', 0, 1, 8], - ['note', 5, 96, 1, 25, 96], - ['note', 101, 96, 1, 29, 96] - ], # end of track 0 -] -my_opus = score2opus(my_score) -''' - if len(score) < 2: - score=[1000, [],] - tracks = copy.deepcopy(score) - ticks = int(tracks.pop(0)) - opus_tracks = [] - for scoretrack in tracks: - time2events = dict([]) - for scoreevent in scoretrack: - if scoreevent[0] == 'note': - note_on_event = ['note_on',scoreevent[1], - scoreevent[3],scoreevent[4],scoreevent[5]] - note_off_event = ['note_off',scoreevent[1]+scoreevent[2], - scoreevent[3],scoreevent[4],scoreevent[5]] - if time2events.get(note_on_event[1]): - time2events[note_on_event[1]].append(note_on_event) - else: - time2events[note_on_event[1]] = [note_on_event,] - if time2events.get(note_off_event[1]): - time2events[note_off_event[1]].append(note_off_event) - else: - time2events[note_off_event[1]] = [note_off_event,] - continue - if time2events.get(scoreevent[1]): - time2events[scoreevent[1]].append(scoreevent) - else: - time2events[scoreevent[1]] = [scoreevent,] - - sorted_times = [] # list of keys - for k in time2events.keys(): - sorted_times.append(k) - sorted_times.sort() - - sorted_events = [] # once-flattened list of values sorted by key - for time in sorted_times: - sorted_events.extend(time2events[time]) - - abs_time = 0 - for event in sorted_events: # convert abs times => delta times - delta_time = event[1] - abs_time - abs_time = event[1] - event[1] = delta_time - opus_tracks.append(sorted_events) - opus_tracks.insert(0,ticks) - _clean_up_warnings() - return opus_tracks - -def score2midi(score=None, text_encoding='ISO-8859-1'): - r''' -Translates a "score" into MIDI, using score2opus() then opus2midi() -''' - return opus2midi(score2opus(score, text_encoding), text_encoding) - -#--------------------------- Decoding stuff ------------------------ - -def midi2opus(midi=b''): - r'''Translates MIDI into a "opus". For a description of the -"opus" format, see opus2midi() -''' - my_midi=bytearray(midi) - if len(my_midi) < 4: - _clean_up_warnings() - return [1000,[],] - id = bytes(my_midi[0:4]) - if id != b'MThd': - _warn("midi2opus: midi starts with "+str(id)+" instead of 'MThd'") - _clean_up_warnings() - return [1000,[],] - [length, format, tracks_expected, ticks] = struct.unpack( - '>IHHH', bytes(my_midi[4:14])) - if length != 6: - _warn("midi2opus: midi header length was "+str(length)+" instead of 6") - _clean_up_warnings() - return [1000,[],] - my_opus = [ticks,] - my_midi = my_midi[14:] - track_num = 1 # 5.1 - while len(my_midi) >= 8: - track_type = bytes(my_midi[0:4]) - if track_type != b'MTrk': - #_warn('midi2opus: Warning: track #'+str(track_num)+' type is '+str(track_type)+" instead of b'MTrk'") - pass - [track_length] = struct.unpack('>I', my_midi[4:8]) - my_midi = my_midi[8:] - if track_length > len(my_midi): - _warn('midi2opus: track #'+str(track_num)+' length '+str(track_length)+' is too large') - _clean_up_warnings() - return my_opus # 5.0 - my_midi_track = my_midi[0:track_length] - my_track = _decode(my_midi_track) - my_opus.append(my_track) - my_midi = my_midi[track_length:] - track_num += 1 # 5.1 - _clean_up_warnings() - return my_opus - -def opus2score(opus=[]): - r'''For a description of the "opus" and "score" formats, -see opus2midi() and score2opus(). -''' - if len(opus) < 2: - _clean_up_warnings() - return [1000,[],] - tracks = copy.deepcopy(opus) # couple of slices probably quicker... 
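    # Descriptive note (editorial): the loop below pairs events back up into 'note'
    # events. Each note_on is parked in chapitch2note_on_events under the key
    # channel*128 + pitch; when the matching note_off (or a note_on with velocity 0)
    # arrives, the earliest parked event for that key gets its duration filled in
    # and is appended to the score track. Anything still parked when the track ends
    # is closed off at the track's final tick.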
- ticks = int(tracks.pop(0)) - score = [ticks,] - for opus_track in tracks: - ticks_so_far = 0 - score_track = [] - chapitch2note_on_events = dict([]) # 4.0 - for opus_event in opus_track: - ticks_so_far += opus_event[1] - if opus_event[0] == 'note_off' or (opus_event[0] == 'note_on' and opus_event[4] == 0): # 4.8 - cha = opus_event[2] - pitch = opus_event[3] - key = cha*128 + pitch - if chapitch2note_on_events.get(key): - new_event = chapitch2note_on_events[key].pop(0) - new_event[2] = ticks_so_far - new_event[1] - score_track.append(new_event) - elif pitch > 127: - pass #_warn('opus2score: note_off with no note_on, bad pitch='+str(pitch)) - else: - pass #_warn('opus2score: note_off with no note_on cha='+str(cha)+' pitch='+str(pitch)) - elif opus_event[0] == 'note_on': - cha = opus_event[2] - pitch = opus_event[3] - key = cha*128 + pitch - new_event = ['note',ticks_so_far,0,cha,pitch, opus_event[4]] - if chapitch2note_on_events.get(key): - chapitch2note_on_events[key].append(new_event) - else: - chapitch2note_on_events[key] = [new_event,] - else: - opus_event[1] = ticks_so_far - score_track.append(opus_event) - # check for unterminated notes (Oisín) -- 5.2 - for chapitch in chapitch2note_on_events: - note_on_events = chapitch2note_on_events[chapitch] - for new_e in note_on_events: - new_e[2] = ticks_so_far - new_e[1] - score_track.append(new_e) - pass #_warn("opus2score: note_on with no note_off cha="+str(new_e[3])+' pitch='+str(new_e[4])+'; adding note_off at end') - score.append(score_track) - _clean_up_warnings() - return score - -def midi2score(midi=b''): - r''' -Translates MIDI into a "score", using midi2opus() then opus2score() -''' - return opus2score(midi2opus(midi)) - -def midi2ms_score(midi=b''): - r''' -Translates MIDI into a "score" with one beat per second and one -tick per millisecond, using midi2opus() then to_millisecs() -then opus2score() -''' - return opus2score(to_millisecs(midi2opus(midi))) - -#------------------------ Other Transformations --------------------- - -def to_millisecs(old_opus=None, desired_time_in_ms=1): - r'''Recallibrates all the times in an "opus" to use one beat -per second and one tick per millisecond. This makes it -hard to retrieve any information about beats or barlines, -but it does make it easy to mix different scores together. -''' - if old_opus == None: - return [1000 * desired_time_in_ms,[],] - try: - old_tpq = int(old_opus[0]) - except IndexError: # 5.0 - _warn('to_millisecs: the opus '+str(type(old_opus))+' has no elements') - return [1000 * desired_time_in_ms,[],] - new_opus = [1000 * desired_time_in_ms,] - # 6.7 first go through building a table of set_tempos by absolute-tick - ticks2tempo = {} - itrack = 1 - while itrack < len(old_opus): - ticks_so_far = 0 - for old_event in old_opus[itrack]: - if old_event[0] == 'note': - raise TypeError('to_millisecs needs an opus, not a score') - ticks_so_far += old_event[1] - if old_event[0] == 'set_tempo': - ticks2tempo[ticks_so_far] = old_event[2] - itrack += 1 - # then get the sorted-array of their keys - tempo_ticks = [] # list of keys - for k in ticks2tempo.keys(): - tempo_ticks.append(k) - tempo_ticks.sort() - # then go through converting to millisec, testing if the next - # set_tempo lies before the next track-event, and using it if so. 
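    # Descriptive note (editorial): within each track the conversion factor is
    # ms_per_old_tick = tempo_in_microseconds / (1000 * old_tpq) (taking the default
    # desired_time_in_ms=1), recomputed whenever a set_tempo from the table above
    # falls before the next event; e.g. a tempo of 500000 (120 bpm) at 96 ticks per
    # quarter gives about 5.21 ms per old tick.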
- itrack = 1 - while itrack < len(old_opus): - ms_per_old_tick = 400 / old_tpq # float: will round later 6.3 - i_tempo_ticks = 0 - ticks_so_far = 0 - ms_so_far = 0.0 - previous_ms_so_far = 0.0 - new_track = [['set_tempo',0,1000000 * desired_time_in_ms],] # new "crochet" is 1 sec - for old_event in old_opus[itrack]: - # detect if ticks2tempo has something before this event - # 20160702 if ticks2tempo is at the same time, leave it - event_delta_ticks = old_event[1] * desired_time_in_ms - if (i_tempo_ticks < len(tempo_ticks) and - tempo_ticks[i_tempo_ticks] < (ticks_so_far + old_event[1]) * desired_time_in_ms): - delta_ticks = tempo_ticks[i_tempo_ticks] - ticks_so_far - ms_so_far += (ms_per_old_tick * delta_ticks * desired_time_in_ms) - ticks_so_far = tempo_ticks[i_tempo_ticks] - ms_per_old_tick = ticks2tempo[ticks_so_far] / (1000.0*old_tpq * desired_time_in_ms) - i_tempo_ticks += 1 - event_delta_ticks -= delta_ticks - new_event = copy.deepcopy(old_event) # now handle the new event - ms_so_far += (ms_per_old_tick * old_event[1] * desired_time_in_ms) - new_event[1] = round(ms_so_far - previous_ms_so_far) - if old_event[0] != 'set_tempo': - previous_ms_so_far = ms_so_far - new_track.append(new_event) - ticks_so_far += event_delta_ticks - new_opus.append(new_track) - itrack += 1 - _clean_up_warnings() - return new_opus - -def event2alsaseq(event=None): # 5.5 - r'''Converts an event into the format needed by the alsaseq module, -http://pp.com.mx/python/alsaseq -The type of track (opus or score) is autodetected. -''' - pass - -def grep(score=None, channels=None): - r'''Returns a "score" containing only the channels specified -''' - if score == None: - return [1000,[],] - ticks = score[0] - new_score = [ticks,] - if channels == None: - return new_score - channels = set(channels) - global Event2channelindex - itrack = 1 - while itrack < len(score): - new_score.append([]) - for event in score[itrack]: - channel_index = Event2channelindex.get(event[0], False) - if channel_index: - if event[channel_index] in channels: - new_score[itrack].append(event) - else: - new_score[itrack].append(event) - itrack += 1 - return new_score - -def play_score(score=None): - r'''Converts the "score" to midi, and feeds it into 'aplaymidi -' -''' - if score == None: - return - import subprocess - pipe = subprocess.Popen(['aplaymidi','-'], stdin=subprocess.PIPE) - if score_type(score) == 'opus': - pipe.stdin.write(opus2midi(score)) - else: - pipe.stdin.write(score2midi(score)) - pipe.stdin.close() - -def score2stats(opus_or_score=None): - r'''Returns a dict of some basic stats about the score, like -bank_select (list of tuples (msb,lsb)), -channels_by_track (list of lists), channels_total (set), -general_midi_mode (list), -ntracks, nticks, patch_changes_by_track (list of dicts), -num_notes_by_channel (list of numbers), -patch_changes_total (set), -percussion (dict histogram of channel 9 events), -pitches (dict histogram of pitches on channels other than 9), -pitch_range_by_track (list, by track, of two-member-tuples), -pitch_range_sum (sum over tracks of the pitch_ranges), -''' - bank_select_msb = -1 - bank_select_lsb = -1 - bank_select = [] - channels_by_track = [] - channels_total = set([]) - general_midi_mode = [] - num_notes_by_channel = dict([]) - patches_used_by_track = [] - patches_used_total = set([]) - patch_changes_by_track = [] - patch_changes_total = set([]) - percussion = dict([]) # histogram of channel 9 "pitches" - pitches = dict([]) # histogram of pitch-occurrences channels 0-8,10-15 - pitch_range_sum = 0 # 
u pitch-ranges of each track - pitch_range_by_track = [] - is_a_score = True - if opus_or_score == None: - return {'bank_select':[], 'channels_by_track':[], 'channels_total':[], - 'general_midi_mode':[], 'ntracks':0, 'nticks':0, - 'num_notes_by_channel':dict([]), - 'patch_changes_by_track':[], 'patch_changes_total':[], - 'percussion':{}, 'pitches':{}, 'pitch_range_by_track':[], - 'ticks_per_quarter':0, 'pitch_range_sum':0} - ticks_per_quarter = opus_or_score[0] - i = 1 # ignore first element, which is ticks - nticks = 0 - while i < len(opus_or_score): - highest_pitch = 0 - lowest_pitch = 128 - channels_this_track = set([]) - patch_changes_this_track = dict({}) - for event in opus_or_score[i]: - if event[0] == 'note': - num_notes_by_channel[event[3]] = num_notes_by_channel.get(event[3],0) + 1 - if event[3] == 9: - percussion[event[4]] = percussion.get(event[4],0) + 1 - else: - pitches[event[4]] = pitches.get(event[4],0) + 1 - if event[4] > highest_pitch: - highest_pitch = event[4] - if event[4] < lowest_pitch: - lowest_pitch = event[4] - channels_this_track.add(event[3]) - channels_total.add(event[3]) - finish_time = event[1] + event[2] - if finish_time > nticks: - nticks = finish_time - elif event[0] == 'note_off' or (event[0] == 'note_on' and event[4] == 0): # 4.8 - finish_time = event[1] - if finish_time > nticks: - nticks = finish_time - elif event[0] == 'note_on': - is_a_score = False - num_notes_by_channel[event[2]] = num_notes_by_channel.get(event[2],0) + 1 - if event[2] == 9: - percussion[event[3]] = percussion.get(event[3],0) + 1 - else: - pitches[event[3]] = pitches.get(event[3],0) + 1 - if event[3] > highest_pitch: - highest_pitch = event[3] - if event[3] < lowest_pitch: - lowest_pitch = event[3] - channels_this_track.add(event[2]) - channels_total.add(event[2]) - elif event[0] == 'patch_change': - patch_changes_this_track[event[2]] = event[3] - patch_changes_total.add(event[3]) - elif event[0] == 'control_change': - if event[3] == 0: # bank select MSB - bank_select_msb = event[4] - elif event[3] == 32: # bank select LSB - bank_select_lsb = event[4] - if bank_select_msb >= 0 and bank_select_lsb >= 0: - bank_select.append((bank_select_msb,bank_select_lsb)) - bank_select_msb = -1 - bank_select_lsb = -1 - elif event[0] == 'sysex_f0': - if _sysex2midimode.get(event[2], -1) >= 0: - general_midi_mode.append(_sysex2midimode.get(event[2])) - if is_a_score: - if event[1] > nticks: - nticks = event[1] - else: - nticks += event[1] - if lowest_pitch == 128: - lowest_pitch = 0 - channels_by_track.append(channels_this_track) - patch_changes_by_track.append(patch_changes_this_track) - pitch_range_by_track.append((lowest_pitch,highest_pitch)) - pitch_range_sum += (highest_pitch-lowest_pitch) - i += 1 - - return {'bank_select':bank_select, - 'channels_by_track':channels_by_track, - 'channels_total':channels_total, - 'general_midi_mode':general_midi_mode, - 'ntracks':len(opus_or_score)-1, - 'nticks':nticks, - 'num_notes_by_channel':num_notes_by_channel, - 'patch_changes_by_track':patch_changes_by_track, - 'patch_changes_total':patch_changes_total, - 'percussion':percussion, - 'pitches':pitches, - 'pitch_range_by_track':pitch_range_by_track, - 'pitch_range_sum':pitch_range_sum, - 'ticks_per_quarter':ticks_per_quarter} - -#----------------------------- Event stuff -------------------------- - -_sysex2midimode = { - "\x7E\x7F\x09\x01\xF7": 1, - "\x7E\x7F\x09\x02\xF7": 0, - "\x7E\x7F\x09\x03\xF7": 2, -} - -# Some public-access tuples: -MIDI_events = tuple('''note_off note_on key_after_touch 
-control_change patch_change channel_after_touch -pitch_wheel_change'''.split()) - -Text_events = tuple('''text_event copyright_text_event -track_name instrument_name lyric marker cue_point text_event_08 -text_event_09 text_event_0a text_event_0b text_event_0c -text_event_0d text_event_0e text_event_0f'''.split()) - -Nontext_meta_events = tuple('''end_track set_tempo -smpte_offset time_signature key_signature sequencer_specific -raw_meta_event sysex_f0 sysex_f7 song_position song_select -tune_request'''.split()) -# unsupported: raw_data - -# Actually, 'tune_request' is is F-series event, not strictly a meta-event... -Meta_events = Text_events + Nontext_meta_events -All_events = MIDI_events + Meta_events - -# And three dictionaries: -Number2patch = { # General MIDI patch numbers: -0:'Acoustic Grand', -1:'Bright Acoustic', -2:'Electric Grand', -3:'Honky-Tonk', -4:'Electric Piano 1', -5:'Electric Piano 2', -6:'Harpsichord', -7:'Clav', -8:'Celesta', -9:'Glockenspiel', -10:'Music Box', -11:'Vibraphone', -12:'Marimba', -13:'Xylophone', -14:'Tubular Bells', -15:'Dulcimer', -16:'Drawbar Organ', -17:'Percussive Organ', -18:'Rock Organ', -19:'Church Organ', -20:'Reed Organ', -21:'Accordion', -22:'Harmonica', -23:'Tango Accordion', -24:'Acoustic Guitar(nylon)', -25:'Acoustic Guitar(steel)', -26:'Electric Guitar(jazz)', -27:'Electric Guitar(clean)', -28:'Electric Guitar(muted)', -29:'Overdriven Guitar', -30:'Distortion Guitar', -31:'Guitar Harmonics', -32:'Acoustic Bass', -33:'Electric Bass(finger)', -34:'Electric Bass(pick)', -35:'Fretless Bass', -36:'Slap Bass 1', -37:'Slap Bass 2', -38:'Synth Bass 1', -39:'Synth Bass 2', -40:'Violin', -41:'Viola', -42:'Cello', -43:'Contrabass', -44:'Tremolo Strings', -45:'Pizzicato Strings', -46:'Orchestral Harp', -47:'Timpani', -48:'String Ensemble 1', -49:'String Ensemble 2', -50:'SynthStrings 1', -51:'SynthStrings 2', -52:'Choir Aahs', -53:'Voice Oohs', -54:'Synth Voice', -55:'Orchestra Hit', -56:'Trumpet', -57:'Trombone', -58:'Tuba', -59:'Muted Trumpet', -60:'French Horn', -61:'Brass Section', -62:'SynthBrass 1', -63:'SynthBrass 2', -64:'Soprano Sax', -65:'Alto Sax', -66:'Tenor Sax', -67:'Baritone Sax', -68:'Oboe', -69:'English Horn', -70:'Bassoon', -71:'Clarinet', -72:'Piccolo', -73:'Flute', -74:'Recorder', -75:'Pan Flute', -76:'Blown Bottle', -77:'Skakuhachi', -78:'Whistle', -79:'Ocarina', -80:'Lead 1 (square)', -81:'Lead 2 (sawtooth)', -82:'Lead 3 (calliope)', -83:'Lead 4 (chiff)', -84:'Lead 5 (charang)', -85:'Lead 6 (voice)', -86:'Lead 7 (fifths)', -87:'Lead 8 (bass+lead)', -88:'Pad 1 (new age)', -89:'Pad 2 (warm)', -90:'Pad 3 (polysynth)', -91:'Pad 4 (choir)', -92:'Pad 5 (bowed)', -93:'Pad 6 (metallic)', -94:'Pad 7 (halo)', -95:'Pad 8 (sweep)', -96:'FX 1 (rain)', -97:'FX 2 (soundtrack)', -98:'FX 3 (crystal)', -99:'FX 4 (atmosphere)', -100:'FX 5 (brightness)', -101:'FX 6 (goblins)', -102:'FX 7 (echoes)', -103:'FX 8 (sci-fi)', -104:'Sitar', -105:'Banjo', -106:'Shamisen', -107:'Koto', -108:'Kalimba', -109:'Bagpipe', -110:'Fiddle', -111:'Shanai', -112:'Tinkle Bell', -113:'Agogo', -114:'Steel Drums', -115:'Woodblock', -116:'Taiko Drum', -117:'Melodic Tom', -118:'Synth Drum', -119:'Reverse Cymbal', -120:'Guitar Fret Noise', -121:'Breath Noise', -122:'Seashore', -123:'Bird Tweet', -124:'Telephone Ring', -125:'Helicopter', -126:'Applause', -127:'Gunshot', -} -Notenum2percussion = { # General MIDI Percussion (on Channel 9): -35:'Acoustic Bass Drum', -36:'Bass Drum 1', -37:'Side Stick', -38:'Acoustic Snare', -39:'Hand Clap', -40:'Electric Snare', -41:'Low Floor 
Tom', -42:'Closed Hi-Hat', -43:'High Floor Tom', -44:'Pedal Hi-Hat', -45:'Low Tom', -46:'Open Hi-Hat', -47:'Low-Mid Tom', -48:'Hi-Mid Tom', -49:'Crash Cymbal 1', -50:'High Tom', -51:'Ride Cymbal 1', -52:'Chinese Cymbal', -53:'Ride Bell', -54:'Tambourine', -55:'Splash Cymbal', -56:'Cowbell', -57:'Crash Cymbal 2', -58:'Vibraslap', -59:'Ride Cymbal 2', -60:'Hi Bongo', -61:'Low Bongo', -62:'Mute Hi Conga', -63:'Open Hi Conga', -64:'Low Conga', -65:'High Timbale', -66:'Low Timbale', -67:'High Agogo', -68:'Low Agogo', -69:'Cabasa', -70:'Maracas', -71:'Short Whistle', -72:'Long Whistle', -73:'Short Guiro', -74:'Long Guiro', -75:'Claves', -76:'Hi Wood Block', -77:'Low Wood Block', -78:'Mute Cuica', -79:'Open Cuica', -80:'Mute Triangle', -81:'Open Triangle', -} - -Event2channelindex = { 'note':3, 'note_off':2, 'note_on':2, - 'key_after_touch':2, 'control_change':2, 'patch_change':2, - 'channel_after_touch':2, 'pitch_wheel_change':2 -} - -################################################################ -# The code below this line is full of frightening things, all to -# do with the actual encoding and decoding of binary MIDI data. - -def _twobytes2int(byte_a): - r'''decode a 16 bit quantity from two bytes,''' - return (byte_a[1] | (byte_a[0] << 8)) - -def _int2twobytes(int_16bit): - r'''encode a 16 bit quantity into two bytes,''' - return bytes([(int_16bit>>8) & 0xFF, int_16bit & 0xFF]) - -def _read_14_bit(byte_a): - r'''decode a 14 bit quantity from two bytes,''' - return (byte_a[0] | (byte_a[1] << 7)) - -def _write_14_bit(int_14bit): - r'''encode a 14 bit quantity into two bytes,''' - return bytes([int_14bit & 0x7F, (int_14bit>>7) & 0x7F]) - -def _ber_compressed_int(integer): - r'''BER compressed integer (not an ASN.1 BER, see perlpacktut for -details). Its bytes represent an unsigned integer in base 128, -most significant digit first, with as few digits as possible. -Bit eight (the high bit) is set on each byte except the last. -''' - ber = bytearray(b'') - seven_bits = 0x7F & integer - ber.insert(0, seven_bits) # XXX surely should convert to a char ? - integer >>= 7 - while integer > 0: - seven_bits = 0x7F & integer - ber.insert(0, 0x80|seven_bits) # XXX surely should convert to a char ? - integer >>= 7 - return ber - -def _unshift_ber_int(ba): - r'''Given a bytearray, returns a tuple of (the ber-integer at the -start, and the remainder of the bytearray). -''' - if not len(ba): # 6.7 - _warn('_unshift_ber_int: no integer found') - return ((0, b"")) - byte = ba.pop(0) - integer = 0 - while True: - integer += (byte & 0x7F) - if not (byte & 0x80): - return ((integer, ba)) - if not len(ba): - _warn('_unshift_ber_int: no end-of-integer found') - return ((0, ba)) - byte = ba.pop(0) - integer <<= 7 - -def _clean_up_warnings(): # 5.4 - # Call this before returning from any publicly callable function - # whenever there's a possibility that a warning might have been printed - # by the function, or by any private functions it might have called. - global _previous_times - global _previous_warning - if _previous_times > 1: - # E:1176, 0: invalid syntax (<string>, line 1176) (syntax-error) ??? 
- # print(' previous message repeated '+str(_previous_times)+' times', file=sys.stderr) - # 6.7 - sys.stderr.write(' previous message repeated {0} times\n'.format(_previous_times)) - elif _previous_times > 0: - sys.stderr.write(' previous message repeated\n') - _previous_times = 0 - _previous_warning = '' - -def _warn(s=''): - global _previous_times - global _previous_warning - if s == _previous_warning: # 5.4 - _previous_times = _previous_times + 1 - else: - _clean_up_warnings() - sys.stderr.write(str(s)+"\n") - _previous_warning = s - -def _some_text_event(which_kind=0x01, text=b'some_text', text_encoding='ISO-8859-1'): - if str(type(text)).find("'str'") >= 0: # 6.4 test for back-compatibility - data = bytes(text, encoding=text_encoding) - else: - data = bytes(text) - return b'\xFF'+bytes((which_kind,))+_ber_compressed_int(len(data))+data - -def _consistentise_ticks(scores): # 3.6 - # used by mix_scores, merge_scores, concatenate_scores - if len(scores) == 1: - return copy.deepcopy(scores) - are_consistent = True - ticks = scores[0][0] - iscore = 1 - while iscore < len(scores): - if scores[iscore][0] != ticks: - are_consistent = False - break - iscore += 1 - if are_consistent: - return copy.deepcopy(scores) - new_scores = [] - iscore = 0 - while iscore < len(scores): - score = scores[iscore] - new_scores.append(opus2score(to_millisecs(score2opus(score)))) - iscore += 1 - return new_scores - - -########################################################################### - -def _decode(trackdata=b'', exclude=None, include=None, - event_callback=None, exclusive_event_callback=None, no_eot_magic=False): - r'''Decodes MIDI track data into an opus-style list of events. -The options: - 'exclude' is a list of event types which will be ignored SHOULD BE A SET - 'include' (and no exclude), makes exclude a list - of all possible events, /minus/ what include specifies - 'event_callback' is a coderef - 'exclusive_event_callback' is a coderef -''' - trackdata = bytearray(trackdata) - if exclude == None: - exclude = [] - if include == None: - include = [] - if include and not exclude: - exclude = All_events - include = set(include) - exclude = set(exclude) - - # Pointer = 0; not used here; we eat through the bytearray instead. - event_code = -1; # used for running status - event_count = 0; - events = [] - - while(len(trackdata)): - # loop while there's anything to analyze ... - eot = False # When True, the event registrar aborts this loop - event_count += 1 - - E = [] - # E for events - we'll feed it to the event registrar at the end. 
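- # The delta-time read next is a MIDI variable-length quantity: an unsigned
- # integer in base 128, most significant byte first, with the high bit set on
- # every byte except the last (see _unshift_ber_int).  For example, the two
- # bytes 0x81 0x48 decode to (0x01 << 7) | 0x48 = 200 ticks.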
- - # Slice off the delta time code, and analyze it - [time, remainder] = _unshift_ber_int(trackdata) - - # Now let's see what we can make of the command - first_byte = trackdata.pop(0) & 0xFF - - if (first_byte < 0xF0): # It's a MIDI event - if (first_byte & 0x80): - event_code = first_byte - else: - # It wants running status; use last event_code value - trackdata.insert(0, first_byte) - if (event_code == -1): - _warn("Running status not set; Aborting track.") - return [] - - command = event_code & 0xF0 - channel = event_code & 0x0F - - if (command == 0xF6): # 0-byte argument - pass - elif (command == 0xC0 or command == 0xD0): # 1-byte argument - parameter = trackdata.pop(0) # could be B - else: # 2-byte argument could be BB or 14-bit - parameter = (trackdata.pop(0), trackdata.pop(0)) - - ################################################################# - # MIDI events - - if (command == 0x80): - if 'note_off' in exclude: - continue - E = ['note_off', time, channel, parameter[0], parameter[1]] - elif (command == 0x90): - if 'note_on' in exclude: - continue - E = ['note_on', time, channel, parameter[0], parameter[1]] - elif (command == 0xA0): - if 'key_after_touch' in exclude: - continue - E = ['key_after_touch',time,channel,parameter[0],parameter[1]] - elif (command == 0xB0): - if 'control_change' in exclude: - continue - E = ['control_change',time,channel,parameter[0],parameter[1]] - elif (command == 0xC0): - if 'patch_change' in exclude: - continue - E = ['patch_change', time, channel, parameter] - elif (command == 0xD0): - if 'channel_after_touch' in exclude: - continue - E = ['channel_after_touch', time, channel, parameter] - elif (command == 0xE0): - if 'pitch_wheel_change' in exclude: - continue - E = ['pitch_wheel_change', time, channel, - _read_14_bit(parameter)-0x2000] - else: - _warn("Shouldn't get here; command="+hex(command)) - - elif (first_byte == 0xFF): # It's a Meta-Event! ################## - #[command, length, remainder] = - # unpack("xCwa*", substr(trackdata, $Pointer, 6)); - #Pointer += 6 - len(remainder); - # # Move past JUST the length-encoded. - command = trackdata.pop(0) & 0xFF - [length, trackdata] = _unshift_ber_int(trackdata) - if (command == 0x00): - if (length == 2): - E = ['set_sequence_number',time,_twobytes2int(trackdata)] - else: - _warn('set_sequence_number: length must be 2, not '+str(length)) - E = ['set_sequence_number', time, 0] - - elif command >= 0x01 and command <= 0x0f: # Text events - # 6.2 take it in bytes; let the user get the right encoding. - # text_str = trackdata[0:length].decode('ascii','ignore') - # text_str = trackdata[0:length].decode('ISO-8859-1') - # 6.4 take it in bytes; let the user get the right encoding. 
- text_data = bytes(trackdata[0:length]) # 6.4 - # Defined text events - if (command == 0x01): - E = ['text_event', time, text_data] - elif (command == 0x02): - E = ['copyright_text_event', time, text_data] - elif (command == 0x03): - E = ['track_name', time, text_data] - elif (command == 0x04): - E = ['instrument_name', time, text_data] - elif (command == 0x05): - E = ['lyric', time, text_data] - elif (command == 0x06): - E = ['marker', time, text_data] - elif (command == 0x07): - E = ['cue_point', time, text_data] - # Reserved but apparently unassigned text events - elif (command == 0x08): - E = ['text_event_08', time, text_data] - elif (command == 0x09): - E = ['text_event_09', time, text_data] - elif (command == 0x0a): - E = ['text_event_0a', time, text_data] - elif (command == 0x0b): - E = ['text_event_0b', time, text_data] - elif (command == 0x0c): - E = ['text_event_0c', time, text_data] - elif (command == 0x0d): - E = ['text_event_0d', time, text_data] - elif (command == 0x0e): - E = ['text_event_0e', time, text_data] - elif (command == 0x0f): - E = ['text_event_0f', time, text_data] - - # Now the sticky events ------------------------------------- - elif (command == 0x2F): - E = ['end_track', time] - # The code for handling this, oddly, comes LATER, - # in the event registrar. - elif (command == 0x51): # DTime, Microseconds/Crochet - if length != 3: - _warn('set_tempo event, but length='+str(length)) - E = ['set_tempo', time, - struct.unpack(">I", b'\x00'+trackdata[0:3])[0]] - elif (command == 0x54): - if length != 5: # DTime, HR, MN, SE, FR, FF - _warn('smpte_offset event, but length='+str(length)) - E = ['smpte_offset',time] + list(struct.unpack(">BBBBB",trackdata[0:5])) - elif (command == 0x58): - if length != 4: # DTime, NN, DD, CC, BB - _warn('time_signature event, but length='+str(length)) - E = ['time_signature', time]+list(trackdata[0:4]) - elif (command == 0x59): - if length != 2: # DTime, SF(signed), MI - _warn('key_signature event, but length='+str(length)) - E = ['key_signature',time] + list(struct.unpack(">bB",trackdata[0:2])) - elif (command == 0x7F): # 6.4 - E = ['sequencer_specific',time, bytes(trackdata[0:length])] - else: - E = ['raw_meta_event', time, command, - bytes(trackdata[0:length])] # 6.0 - #"[uninterpretable meta-event command of length length]" - # DTime, Command, Binary Data - # It's uninterpretable; record it as raw_data. - - # Pointer += length; # Now move Pointer - trackdata = trackdata[length:] - - ###################################################################### - elif (first_byte == 0xF0 or first_byte == 0xF7): - # Note that sysexes in MIDI /files/ are different than sysexes - # in MIDI transmissions!! The vast majority of system exclusive - # messages will just use the F0 format. For instance, the - # transmitted message F0 43 12 00 07 F7 would be stored in a - # MIDI file as F0 05 43 12 00 07 F7. As mentioned above, it is - # required to include the F7 at the end so that the reader of the - # MIDI file knows that it has read the entire message. (But the F7 - # is omitted if this is a non-final block in a multiblock sysex; - # but the F7 (if there) is counted in the message's declared - # length, so we don't have to think about it anyway.) 
- #command = trackdata.pop(0) - [length, trackdata] = _unshift_ber_int(trackdata) - if first_byte == 0xF0: - # 20091008 added ISO-8859-1 to get an 8-bit str - # 6.4 return bytes instead - E = ['sysex_f0', time, bytes(trackdata[0:length])] - else: - E = ['sysex_f7', time, bytes(trackdata[0:length])] - trackdata = trackdata[length:] - - ###################################################################### - # Now, the MIDI file spec says: - # <track data> = <MTrk event>+ - # <MTrk event> = <delta-time> <event> - # <event> = <MIDI event> | <sysex event> | <meta-event> - # I know that, on the wire, <MIDI event> can include note_on, - # note_off, and all the other 8x to Ex events, AND Fx events - # other than F0, F7, and FF -- namely, <song position msg>, - # <song select msg>, and <tune request>. - # - # Whether these can occur in MIDI files is not clear specified - # from the MIDI file spec. So, I'm going to assume that - # they CAN, in practice, occur. I don't know whether it's - # proper for you to actually emit these into a MIDI file. - - elif (first_byte == 0xF2): # DTime, Beats - # <song position msg> ::= F2 <data pair> - E = ['song_position', time, _read_14_bit(trackdata[:2])] - trackdata = trackdata[2:] - - elif (first_byte == 0xF3): # <song select msg> ::= F3 <data singlet> - # E = ['song_select', time, struct.unpack('>B',trackdata.pop(0))[0]] - E = ['song_select', time, trackdata[0]] - trackdata = trackdata[1:] - # DTime, Thing (what?! song number? whatever ...) - - elif (first_byte == 0xF6): # DTime - E = ['tune_request', time] - # What would a tune request be doing in a MIDI /file/? - - ######################################################### - # ADD MORE META-EVENTS HERE. TODO: - # f1 -- MTC Quarter Frame Message. One data byte follows - # the Status; it's the time code value, from 0 to 127. - # f8 -- MIDI clock. no data. - # fa -- MIDI start. no data. - # fb -- MIDI continue. no data. - # fc -- MIDI stop. no data. - # fe -- Active sense. no data. - # f4 f5 f9 fd -- unallocated - - r''' - elif (first_byte > 0xF0) { # Some unknown kinda F-series event #### - # Here we only produce a one-byte piece of raw data. - # But the encoder for 'raw_data' accepts any length of it. - E = [ 'raw_data', - time, substr(trackdata,Pointer,1) ] - # DTime and the Data (in this case, the one Event-byte) - ++Pointer; # itself - -''' - elif first_byte > 0xF0: # Some unknown F-series event - # Here we only produce a one-byte piece of raw data. - # E = ['raw_data', time, bytest(trackdata[0])] # 6.4 - E = ['raw_data', time, trackdata[0]] # 6.4 6.7 - trackdata = trackdata[1:] - else: # Fallthru. - _warn("Aborting track. Command-byte first_byte="+hex(first_byte)) - break - # End of the big if-group - - - ###################################################################### - # THE EVENT REGISTRAR... - if E and (E[0] == 'end_track'): - # This is the code for exceptional handling of the EOT event. - eot = True - if not no_eot_magic: - if E[1] > 0: # a null text-event to carry the delta-time - E = ['text_event', E[1], ''] - else: - E = [] # EOT with a delta-time of 0; ignore it. 
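- # At this point E is either a decoded event (a list) or empty.  Events whose
- # type appears in 'exclude' are dropped; everything else is appended to this
- # track's output, and an end_track event terminates the loop.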
- - if E and not (E[0] in exclude): - #if ( $exclusive_event_callback ): - # &{ $exclusive_event_callback }( @E ); - #else: - # &{ $event_callback }( @E ) if $event_callback; - events.append(E) - if eot: - break - - # End of the big "Event" while-block - - return events - - -########################################################################### -def _encode(events_lol, unknown_callback=None, never_add_eot=False, - no_eot_magic=False, no_running_status=False, text_encoding='ISO-8859-1'): - # encode an event structure, presumably for writing to a file - # Calling format: - # $data_r = MIDI::Event::encode( \@event_lol, { options } ); - # Takes a REFERENCE to an event structure (a LoL) - # Returns an (unblessed) REFERENCE to track data. - - # If you want to use this to encode a /single/ event, - # you still have to do it as a reference to an event structure (a LoL) - # that just happens to have just one event. I.e., - # encode( [ $event ] ) or encode( [ [ 'note_on', 100, 5, 42, 64] ] ) - # If you're doing this, consider the never_add_eot track option, as in - # print MIDI ${ encode( [ $event], { 'never_add_eot' => 1} ) }; - - data = [] # what I'll store the chunks of byte-data in - - # This is so my end_track magic won't corrupt the original - events = copy.deepcopy(events_lol) - - if not never_add_eot: - # One way or another, tack on an 'end_track' - if events: - last = events[-1] - if not (last[0] == 'end_track'): # no end_track already - if (last[0] == 'text_event' and len(last[2]) == 0): - # 0-length text event at track-end. - if no_eot_magic: - # Exceptional case: don't mess with track-final - # 0-length text_events; just peg on an end_track - events.append(['end_track', 0]) - else: - # NORMAL CASE: replace with an end_track, leaving DTime - last[0] = 'end_track' - else: - # last event was neither 0-length text_event nor end_track - events.append(['end_track', 0]) - else: # an eventless track! - events = [['end_track', 0],] - - # maybe_running_status = not no_running_status # unused? 4.7 - last_status = -1 - - for event_r in (events): - E = copy.deepcopy(event_r) - # otherwise the shifting'd corrupt the original - if not E: - continue - - event = E.pop(0) - if not len(event): - continue - - dtime = int(E.pop(0)) - # print('event='+str(event)+' dtime='+str(dtime)) - - event_data = '' - - if ( # MIDI events -- eligible for running status - event == 'note_on' - or event == 'note_off' - or event == 'control_change' - or event == 'key_after_touch' - or event == 'patch_change' - or event == 'channel_after_touch' - or event == 'pitch_wheel_change' ): - - # This block is where we spend most of the time. Gotta be tight. 
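- # For example, ['note_on', 0, 0, 60, 100] becomes delta-time 0x00, status byte
- # 0x90 (note_on on channel 0), then data bytes 0x3C 0x64.  When the next event
- # produces the same status byte and running status is allowed, the status byte
- # is omitted (see the last_status check below).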
- if (event == 'note_off'): - status = 0x80 | (int(E[0]) & 0x0F) - parameters = struct.pack('>BB', int(E[1])&0x7F, int(E[2])&0x7F) - elif (event == 'note_on'): - status = 0x90 | (int(E[0]) & 0x0F) - parameters = struct.pack('>BB', int(E[1])&0x7F, int(E[2])&0x7F) - elif (event == 'key_after_touch'): - status = 0xA0 | (int(E[0]) & 0x0F) - parameters = struct.pack('>BB', int(E[1])&0x7F, int(E[2])&0x7F) - elif (event == 'control_change'): - status = 0xB0 | (int(E[0]) & 0x0F) - parameters = struct.pack('>BB', int(E[1])&0xFF, int(E[2])&0xFF) - elif (event == 'patch_change'): - status = 0xC0 | (int(E[0]) & 0x0F) - parameters = struct.pack('>B', int(E[1]) & 0xFF) - elif (event == 'channel_after_touch'): - status = 0xD0 | (int(E[0]) & 0x0F) - parameters = struct.pack('>B', int(E[1]) & 0xFF) - elif (event == 'pitch_wheel_change'): - status = 0xE0 | (int(E[0]) & 0x0F) - parameters = _write_14_bit(int(E[1]) + 0x2000) - else: - _warn("BADASS FREAKOUT ERROR 31415!") - - # And now the encoding - # w = BER compressed integer (not ASN.1 BER, see perlpacktut for - # details). Its bytes represent an unsigned integer in base 128, - # most significant digit first, with as few digits as possible. - # Bit eight (the high bit) is set on each byte except the last. - - data.append(_ber_compressed_int(dtime)) - if (status != last_status) or no_running_status: - data.append(struct.pack('>B', status)) - data.append(parameters) - - last_status = status - continue - else: - # Not a MIDI event. - # All the code in this block could be more efficient, - # but this is not where the code needs to be tight. - # print "zaz $event\n"; - last_status = -1 - - if event == 'raw_meta_event': - event_data = _some_text_event(int(E[0]), E[1], text_encoding) - elif (event == 'set_sequence_number'): # 3.9 - event_data = b'\xFF\x00\x02'+_int2twobytes(E[0]) - - # Text meta-events... - # a case for a dict, I think (pjb) ... 
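- # Each of these encodes as FF, the meta-event type byte, a BER-compressed
- # length, then the text bytes (see _some_text_event).  For example, a
- # track_name of b'Piano' becomes FF 03 05 50 69 61 6E 6F.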
- elif (event == 'text_event'): - event_data = _some_text_event(0x01, E[0], text_encoding) - elif (event == 'copyright_text_event'): - event_data = _some_text_event(0x02, E[0], text_encoding) - elif (event == 'track_name'): - event_data = _some_text_event(0x03, E[0], text_encoding) - elif (event == 'instrument_name'): - event_data = _some_text_event(0x04, E[0], text_encoding) - elif (event == 'lyric'): - event_data = _some_text_event(0x05, E[0], text_encoding) - elif (event == 'marker'): - event_data = _some_text_event(0x06, E[0], text_encoding) - elif (event == 'cue_point'): - event_data = _some_text_event(0x07, E[0], text_encoding) - elif (event == 'text_event_08'): - event_data = _some_text_event(0x08, E[0], text_encoding) - elif (event == 'text_event_09'): - event_data = _some_text_event(0x09, E[0], text_encoding) - elif (event == 'text_event_0a'): - event_data = _some_text_event(0x0A, E[0], text_encoding) - elif (event == 'text_event_0b'): - event_data = _some_text_event(0x0B, E[0], text_encoding) - elif (event == 'text_event_0c'): - event_data = _some_text_event(0x0C, E[0], text_encoding) - elif (event == 'text_event_0d'): - event_data = _some_text_event(0x0D, E[0], text_encoding) - elif (event == 'text_event_0e'): - event_data = _some_text_event(0x0E, E[0], text_encoding) - elif (event == 'text_event_0f'): - event_data = _some_text_event(0x0F, E[0], text_encoding) - # End of text meta-events - - elif (event == 'end_track'): - event_data = b"\xFF\x2F\x00" - - elif (event == 'set_tempo'): - #event_data = struct.pack(">BBwa*", 0xFF, 0x51, 3, - # substr( struct.pack('>I', E[0]), 1, 3)) - event_data = b'\xFF\x51\x03'+struct.pack('>I',E[0])[1:] - elif (event == 'smpte_offset'): - # event_data = struct.pack(">BBwBBBBB", 0xFF, 0x54, 5, E[0:5] ) - event_data = struct.pack(">BBBbBBBB", 0xFF,0x54,0x05,E[0],E[1],E[2],E[3],E[4]) - elif (event == 'time_signature'): - # event_data = struct.pack(">BBwBBBB", 0xFF, 0x58, 4, E[0:4] ) - event_data = struct.pack(">BBBbBBB", 0xFF, 0x58, 0x04, E[0],E[1],E[2],E[3]) - elif (event == 'key_signature'): - event_data = struct.pack(">BBBbB", 0xFF, 0x59, 0x02, E[0],E[1]) - elif (event == 'sequencer_specific'): - # event_data = struct.pack(">BBwa*", 0xFF,0x7F, len(E[0]), E[0]) - event_data = _some_text_event(0x7F, E[0], text_encoding) - # End of Meta-events - - # Other Things... 
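- # A sysex_f0 event is written as F0, a BER-compressed length, then the data
- # bytes (which should already end in F7).  E.g. ['sysex_f0', 0, b'\x43\x12\x00\x07\xF7']
- # becomes F0 05 43 12 00 07 F7, matching the example discussed in _decode above.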
- elif (event == 'sysex_f0'): - #event_data = struct.pack(">Bwa*", 0xF0, len(E[0]), E[0]) - #B=bitstring w=BER-compressed-integer a=null-padded-ascii-str - event_data = bytearray(b'\xF0')+_ber_compressed_int(len(E[0]))+bytearray(E[0]) - elif (event == 'sysex_f7'): - #event_data = struct.pack(">Bwa*", 0xF7, len(E[0]), E[0]) - event_data = bytearray(b'\xF7')+_ber_compressed_int(len(E[0]))+bytearray(E[0]) - - elif (event == 'song_position'): - event_data = b"\xF2" + _write_14_bit( E[0] ) - elif (event == 'song_select'): - event_data = struct.pack('>BB', 0xF3, E[0] ) - elif (event == 'tune_request'): - event_data = b"\xF6" - elif (event == 'raw_data'): - _warn("_encode: raw_data event not supported") - # event_data = E[0] - continue - # End of Other Stuff - - else: - # The Big Fallthru - if unknown_callback: - # push(@data, &{ $unknown_callback }( @$event_r )) - pass - else: - _warn("Unknown event: "+str(event)) - # To surpress complaint here, just set - # 'unknown_callback' => sub { return () } - continue - - #print "Event $event encoded part 2\n" - if str(type(event_data)).find("'str'") >= 0: - event_data = bytearray(event_data.encode('Latin1', 'ignore')) - if len(event_data): # how could $event_data be empty - # data.append(struct.pack('>wa*', dtime, event_data)) - # print(' event_data='+str(event_data)) - data.append(_ber_compressed_int(dtime)+event_data) - - return b''.join(data) - -################################################################################### -################################################################################### -################################################################################### -# -# Tegridy MIDI X Module (TMIDI X / tee-midi eks) -# Version 1.0 -# -# Based upon and includes the amazing MIDI.py module v.6.7. by Peter Billam -# pjb.com.au -# -# Project Los Angeles -# Tegridy Code 2021 -# https://github.com/Tegridy-Code/Project-Los-Angeles -# -################################################################################### -################################################################################### -################################################################################### - -import os - -import datetime - -import copy - -from datetime import datetime - -import secrets - -import random - -import pickle - -import csv - -import tqdm - -from itertools import zip_longest -from itertools import groupby - -from operator import itemgetter - -import sys - -from abc import ABC, abstractmethod - -from difflib import SequenceMatcher as SM - -import statistics - -################################################################################### -# -# Original TMIDI Tegridy helper functions -# -################################################################################### - -def Tegridy_TXT_to_INT_Converter(input_TXT_string, line_by_line_INT_string=True, max_INT = 0): - - '''Tegridy TXT to Intergers Converter - - Input: Input TXT string in the TMIDI-TXT format - - Type of output TXT INT string: line-by-line or one long string - - Maximum absolute integer to process. Maximum is inclusive - Default = process all integers. 
This helps to remove outliers/unwanted ints - - Output: List of pure intergers - String of intergers in the specified format: line-by-line or one long string - Number of processed integers - Number of skipped integers - - Project Los Angeles - Tegridy Code 2021''' - - print('Tegridy TXT to Intergers Converter') - - output_INT_list = [] - - npi = 0 - nsi = 0 - - TXT_List = list(input_TXT_string) - for char in TXT_List: - if max_INT != 0: - if abs(ord(char)) <= max_INT: - output_INT_list.append(ord(char)) - npi += 1 - else: - nsi += 1 - else: - output_INT_list.append(ord(char)) - npi += 1 - - if line_by_line_INT_string: - output_INT_string = '\n'.join([str(elem) for elem in output_INT_list]) - else: - output_INT_string = ' '.join([str(elem) for elem in output_INT_list]) - - print('Converted TXT to INTs:', npi, ' / ', nsi) - - return output_INT_list, output_INT_string, npi, nsi - -################################################################################### - -def Tegridy_INT_to_TXT_Converter(input_INT_list): - - '''Tegridy Intergers to TXT Converter - - Input: List of intergers in TMIDI-TXT-INT format - Output: Decoded TXT string in TMIDI-TXT format - Project Los Angeles - Tegridy Code 2020''' - - output_TXT_string = '' - - for i in input_INT_list: - output_TXT_string += chr(int(i)) - - return output_TXT_string - -################################################################################### - -def Tegridy_INT_String_to_TXT_Converter(input_INT_String, line_by_line_input=True): - - '''Tegridy Intergers String to TXT Converter - - Input: List of intergers in TMIDI-TXT-INT-String format - Output: Decoded TXT string in TMIDI-TXT format - Project Los Angeles - Tegridy Code 2020''' - - print('Tegridy Intergers String to TXT Converter') - - if line_by_line_input: - input_string = input_INT_String.split('\n') - else: - input_string = input_INT_String.split(' ') - - output_TXT_string = '' - - for i in input_string: - try: - output_TXT_string += chr(abs(int(i))) - except: - print('Bad note:', i) - continue - - print('Done!') - - return output_TXT_string - -################################################################################### - -def Tegridy_SONG_to_MIDI_Converter(SONG, - output_signature = 'Tegridy TMIDI Module', - track_name = 'Composition Track', - number_of_ticks_per_quarter = 425, - list_of_MIDI_patches = [0, 24, 32, 40, 42, 46, 56, 71, 73, 0, 0, 0, 0, 0, 0, 0], - output_file_name = 'TMIDI-Composition', - text_encoding='ISO-8859-1'): - - '''Tegridy SONG to MIDI Converter - - Input: Input SONG in TMIDI SONG/MIDI.py Score format - Output MIDI Track 0 name / MIDI Signature - Output MIDI Track 1 name / Composition track name - Number of ticks per quarter for the output MIDI - List of 16 MIDI patch numbers for output MIDI. Def. is MuseNet compatible patches. - Output file name w/o .mid extension. - Optional text encoding if you are working with text_events/lyrics. This is especially useful for Karaoke. Please note that anything but ISO-8859-1 is a non-standard way of encoding text_events according to MIDI specs. - - Output: MIDI File - Detailed MIDI stats - - Project Los Angeles - Tegridy Code 2020''' - - print('Converting to MIDI. 
Please stand-by...') - - output_header = [number_of_ticks_per_quarter, - [['track_name', 0, bytes(output_signature, text_encoding)]]] - - patch_list = [['patch_change', 0, 0, list_of_MIDI_patches[0]], - ['patch_change', 0, 1, list_of_MIDI_patches[1]], - ['patch_change', 0, 2, list_of_MIDI_patches[2]], - ['patch_change', 0, 3, list_of_MIDI_patches[3]], - ['patch_change', 0, 4, list_of_MIDI_patches[4]], - ['patch_change', 0, 5, list_of_MIDI_patches[5]], - ['patch_change', 0, 6, list_of_MIDI_patches[6]], - ['patch_change', 0, 7, list_of_MIDI_patches[7]], - ['patch_change', 0, 8, list_of_MIDI_patches[8]], - ['patch_change', 0, 9, list_of_MIDI_patches[9]], - ['patch_change', 0, 10, list_of_MIDI_patches[10]], - ['patch_change', 0, 11, list_of_MIDI_patches[11]], - ['patch_change', 0, 12, list_of_MIDI_patches[12]], - ['patch_change', 0, 13, list_of_MIDI_patches[13]], - ['patch_change', 0, 14, list_of_MIDI_patches[14]], - ['patch_change', 0, 15, list_of_MIDI_patches[15]], - ['track_name', 0, bytes(track_name, text_encoding)]] - - output = output_header + [patch_list + SONG] - - midi_data = score2midi(output, text_encoding) - detailed_MIDI_stats = score2stats(output) - - with open(output_file_name + '.mid', 'wb') as midi_file: - midi_file.write(midi_data) - midi_file.close() - - print('Done! Enjoy! :)') - - return detailed_MIDI_stats - -################################################################################### - -def Tegridy_File_Time_Stamp(input_file_name='File_Created_on_', ext = ''): - - '''Tegridy File Time Stamp - - Input: Full path and file name without extention - File extension - - Output: File name string with time-stamp and extension (time-stamped file name) - - Project Los Angeles - Tegridy Code 2021''' - - print('Time-stamping output file...') - - now = '' - now_n = str(datetime.now()) - now_n = now_n.replace(' ', '_') - now_n = now_n.replace(':', '_') - now = now_n.replace('.', '_') - - fname = input_file_name + str(now) + ext - - return(fname) - -################################################################################### - -def Tegridy_Any_Pickle_File_Writer(Data, input_file_name='TMIDI_Pickle_File'): - - '''Tegridy Pickle File Writer - - Input: Data to write (I.e. a list) - Full path and file name without extention - - Output: Named Pickle file - - Project Los Angeles - Tegridy Code 2021''' - - print('Tegridy Pickle File Writer') - - full_path_to_output_dataset_to = input_file_name + '.pickle' - - if os.path.exists(full_path_to_output_dataset_to): - os.remove(full_path_to_output_dataset_to) - print('Removing old Dataset...') - else: - print("Creating new Dataset file...") - - with open(full_path_to_output_dataset_to, 'wb') as filehandle: - # store the data as binary data stream - pickle.dump(Data, filehandle, protocol=pickle.HIGHEST_PROTOCOL) - - print('Dataset was saved as:', full_path_to_output_dataset_to) - print('Task complete. Enjoy! :)') - -################################################################################### - -def Tegridy_Any_Pickle_File_Reader(input_file_name='TMIDI_Pickle_File', ext='.pickle'): - - '''Tegridy Pickle File Loader - - Input: Full path and file name without extention - File extension if different from default .pickle - - Output: Standard Python 3 unpickled data object - - Project Los Angeles - Tegridy Code 2021''' - - print('Tegridy Pickle File Loader') - print('Loading the pickle file. 
Please wait...') - - with open(input_file_name + ext, 'rb') as pickle_file: - content = pickle.load(pickle_file) - - return content - -################################################################################### - -# TMIDI X Code is below - -################################################################################### - -def Optimus_MIDI_TXT_Processor(MIDI_file, - line_by_line_output=True, - chordify_TXT=False, - dataset_MIDI_events_time_denominator=1, - output_velocity=True, - output_MIDI_channels = False, - MIDI_channel=0, - MIDI_patch=[0, 1], - char_offset = 30000, - transpose_by = 0, - flip=False, - melody_conditioned_encoding=False, - melody_pitch_baseline = 0, - number_of_notes_to_sample = -1, - sampling_offset_from_start = 0, - karaoke=False, - karaoke_language_encoding='utf-8', - song_name='Song', - perfect_timings=False, - musenet_encoding=False, - transform=0, - zero_token=False, - reset_timings=False): - - '''Project Los Angeles - Tegridy Code 2021''' - -########### - - debug = False - - ev = 0 - - chords_list_final = [] - chords_list = [] - events_matrix = [] - melody = [] - melody1 = [] - - itrack = 1 - - min_note = 0 - max_note = 0 - ev = 0 - patch = 0 - - score = [] - rec_event = [] - - txt = '' - txtc = '' - chords = [] - melody_chords = [] - - karaoke_events_matrix = [] - karaokez = [] - - sample = 0 - start_sample = 0 - - bass_melody = [] - - INTS = [] - bints = 0 - -########### - - def list_average(num): - sum_num = 0 - for t in num: - sum_num = sum_num + t - - avg = sum_num / len(num) - return avg - -########### - - #print('Loading MIDI file...') - midi_file = open(MIDI_file, 'rb') - if debug: print('Processing File:', file_address) - - try: - opus = midi2opus(midi_file.read()) - - except: - print('Problematic MIDI. Skipping...') - print('File name:', MIDI_file) - midi_file.close() - return txt, melody, chords - - midi_file.close() - - score1 = to_millisecs(opus) - score2 = opus2score(score1) - - # score2 = opus2score(opus) # TODO Improve score timings when it will be possible. - - if MIDI_channel == 16: # Process all MIDI channels - score = score2 - - if MIDI_channel >= 0 and MIDI_channel <= 15: # Process only a selected single MIDI channel - score = grep(score2, [MIDI_channel]) - - if MIDI_channel == -1: # Process all channels except drums (except channel 9) - score = grep(score2, [0, 1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 12, 13, 14, 15]) - - #print('Reading all MIDI events from the MIDI file...') - while itrack < len(score): - for event in score[itrack]: - - if perfect_timings: - if event[0] == 'note': - event[1] = round(event[1], -1) - event[2] = round(event[2], -1) - - if event[0] == 'text_event' or event[0] == 'lyric' or event[0] == 'note': - if perfect_timings: - event[1] = round(event[1], -1) - karaokez.append(event) - - if event[0] == 'text_event' or event[0] == 'lyric': - if perfect_timings: - event[1] = round(event[1], -1) - try: - event[2] = str(event[2].decode(karaoke_language_encoding, 'replace')).replace('/', '').replace(' ', '').replace('\\', '') - except: - event[2] = str(event[2]).replace('/', '').replace(' ', '').replace('\\', '') - continue - karaoke_events_matrix.append(event) - - if event[0] == 'patch_change': - patch = event[3] - - if event[0] == 'note' and patch in MIDI_patch: - if len(event) == 6: # Checking for bad notes... 
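- # A well-formed score note is ['note', start_time, duration, channel, pitch, velocity],
- # i.e. exactly 6 fields; shorter events are treated as bad notes and skipped.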
- eve = copy.deepcopy(event) - - eve[1] = int(event[1] / dataset_MIDI_events_time_denominator) - eve[2] = int(event[2] / dataset_MIDI_events_time_denominator) - - eve[4] = int(event[4] + transpose_by) - - if flip == True: - eve[4] = int(127 - (event[4] + transpose_by)) - - if number_of_notes_to_sample > -1: - if sample <= number_of_notes_to_sample: - if start_sample >= sampling_offset_from_start: - events_matrix.append(eve) - sample += 1 - ev += 1 - else: - start_sample += 1 - - else: - events_matrix.append(eve) - ev += 1 - start_sample += 1 - - itrack +=1 # Going to next track... - - #print('Doing some heavy pythonic sorting...Please stand by...') - - fn = os.path.basename(MIDI_file) - song_name = song_name.replace(' ', '_').replace('=', '_').replace('\'', '-') - if song_name == 'Song': - sng_name = fn.split('.')[0].replace(' ', '_').replace('=', '_').replace('\'', '-') - song_name = sng_name - - # Zero token - if zero_token: - txt += chr(char_offset) + chr(char_offset) - if output_MIDI_channels: - txt += chr(char_offset) - if output_velocity: - txt += chr(char_offset) + chr(char_offset) - else: - txt += chr(char_offset) - - txtc += chr(char_offset) + chr(char_offset) - if output_MIDI_channels: - txtc += chr(char_offset) - if output_velocity: - txtc += chr(char_offset) + chr(char_offset) - else: - txtc += chr(char_offset) - - txt += '=' + song_name + '_with_' + str(len(events_matrix)-1) + '_notes' - txtc += '=' + song_name + '_with_' + str(len(events_matrix)-1) + '_notes' - - else: - # Song stamp - txt += 'SONG=' + song_name + '_with_' + str(len(events_matrix)-1) + '_notes' - txtc += 'SONG=' + song_name + '_with_' + str(len(events_matrix)-1) + '_notes' - - if line_by_line_output: - txt += chr(10) - txtc += chr(10) - else: - txt += chr(32) - txtc += chr(32) - - #print('Sorting input by start time...') - events_matrix.sort(key=lambda x: x[1]) # Sorting input by start time - - #print('Timings converter') - if reset_timings: - ev_matrix = Tegridy_Timings_Converter(events_matrix)[0] - else: - ev_matrix = events_matrix - - chords.extend(ev_matrix) - #print(chords) - - #print('Extracting melody...') - melody_list = [] - - #print('Grouping by start time. This will take a while...') - values = set(map(lambda x:x[1], ev_matrix)) # Non-multithreaded function version just in case - - groups = [[y for y in ev_matrix if y[1]==x and len(y) == 6] for x in values] # Grouping notes into chords while discarting bad notes... 
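- # 'groups' now holds one sub-list per distinct start time, i.e. one chord per
- # onset; only events with the expected 6 fields survive the grouping.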
- - #print('Sorting events...') - for items in groups: - - items.sort(reverse=True, key=lambda x: x[4]) # Sorting events by pitch - - if melody_conditioned_encoding: items[0][3] = 0 # Melody should always bear MIDI Channel 0 for code to work - - melody_list.append(items[0]) # Creating final melody list - melody_chords.append(items) # Creating final chords list - bass_melody.append(items[-1]) # Creating final bass melody list - - # [WIP] Melody-conditioned chords list - if melody_conditioned_encoding == True: - if not karaoke: - - previous_event = copy.deepcopy(melody_chords[0][0]) - - for ev in melody_chords: - hp = True - ev.sort(reverse=False, key=lambda x: x[4]) # Sorting chord events by pitch - for event in ev: - - # Computing events details - start_time = int(abs(event[1] - previous_event[1])) - - duration = int(previous_event[2]) - - if hp == True: - if int(previous_event[4]) >= melody_pitch_baseline: - channel = int(0) - hp = False - else: - channel = int(previous_event[3]+1) - hp = False - else: - channel = int(previous_event[3]+1) - hp = False - - pitch = int(previous_event[4]) - - velocity = int(previous_event[5]) - - # Writing INTergerS... - try: - INTS.append([(start_time)+char_offset, (duration)+char_offset, channel+char_offset, pitch+char_offset, velocity+char_offset]) - except: - bints += 1 - - # Converting to TXT if possible... - try: - txtc += str(chr(start_time + char_offset)) - txtc += str(chr(duration + char_offset)) - txtc += str(chr(pitch + char_offset)) - if output_velocity: - txtc += str(chr(velocity + char_offset)) - if output_MIDI_channels: - txtc += str(chr(channel + char_offset)) - - if line_by_line_output: - - - txtc += chr(10) - else: - - txtc += chr(32) - - previous_event = copy.deepcopy(event) - - except: - # print('Problematic MIDI event! Skipping...') - continue - - if not line_by_line_output: - txtc += chr(10) - - txt = txtc - chords = melody_chords - - # Default stuff (not melody-conditioned/not-karaoke) - else: - if not karaoke: - melody_chords.sort(reverse=False, key=lambda x: x[0][1]) - mel_chords = [] - for mc in melody_chords: - mel_chords.extend(mc) - - if transform != 0: - chords = Tegridy_Transform(mel_chords, transform) - else: - chords = mel_chords - - # TXT Stuff - previous_event = copy.deepcopy(chords[0]) - for event in chords: - - # Computing events details - start_time = int(abs(event[1] - previous_event[1])) - - duration = int(previous_event[2]) - - channel = int(previous_event[3]) - - pitch = int(previous_event[4] + transpose_by) - if flip == True: - pitch = 127 - int(previous_event[4] + transpose_by) - - velocity = int(previous_event[5]) - - # Writing INTergerS... - try: - INTS.append([(start_time)+char_offset, (duration)+char_offset, channel+char_offset, pitch+char_offset, velocity+char_offset]) - except: - bints += 1 - - # Converting to TXT if possible... - try: - txt += str(chr(start_time + char_offset)) - txt += str(chr(duration + char_offset)) - txt += str(chr(pitch + char_offset)) - if output_velocity: - txt += str(chr(velocity + char_offset)) - if output_MIDI_channels: - txt += str(chr(channel + char_offset)) - - - if chordify_TXT == True and int(event[1] - previous_event[1]) == 0: - txt += '' - else: - if line_by_line_output: - txt += chr(10) - else: - txt += chr(32) - - previous_event = copy.deepcopy(event) - - except: - # print('Problematic MIDI event. 
Skipping...') - continue - - if not line_by_line_output: - txt += chr(10) - - # Karaoke stuff - if karaoke: - - melody_chords.sort(reverse=False, key=lambda x: x[0][1]) - mel_chords = [] - for mc in melody_chords: - mel_chords.extend(mc) - - if transform != 0: - chords = Tegridy_Transform(mel_chords, transform) - else: - chords = mel_chords - - previous_event = copy.deepcopy(chords[0]) - for event in chords: - - # Computing events details - start_time = int(abs(event[1] - previous_event[1])) - - duration = int(previous_event[2]) - - channel = int(previous_event[3]) - - pitch = int(previous_event[4] + transpose_by) - - velocity = int(previous_event[5]) - - # Converting to TXT - txt += str(chr(start_time + char_offset)) - txt += str(chr(duration + char_offset)) - txt += str(chr(pitch + char_offset)) - - txt += str(chr(velocity + char_offset)) - txt += str(chr(channel + char_offset)) - - if start_time > 0: - for k in karaoke_events_matrix: - if event[1] == k[1]: - txt += str('=') - txt += str(k[2]) - break - - if line_by_line_output: - txt += chr(10) - else: - txt += chr(32) - - previous_event = copy.deepcopy(event) - - if not line_by_line_output: - txt += chr(10) - - # Final processing code... - # ======================================================================= - - # Helper aux/backup function for Karaoke - karaokez.sort(reverse=False, key=lambda x: x[1]) - - # MuseNet sorting - if musenet_encoding and not melody_conditioned_encoding and not karaoke: - chords.sort(key=lambda x: (x[1], x[3])) - - # Final melody sort - melody_list.sort() - - # auxs for future use - aux1 = [None] - aux2 = [None] - - return txt, melody_list, chords, bass_melody, karaokez, INTS, aux1, aux2 # aux1 and aux2 are not used atm - -################################################################################### - -def Optimus_TXT_to_Notes_Converter(Optimus_TXT_String, - line_by_line_dataset = True, - has_velocities = True, - has_MIDI_channels = True, - dataset_MIDI_events_time_denominator = 1, - char_encoding_offset = 30000, - save_only_first_composition = True, - simulate_velocity=True, - karaoke=False, - zero_token=False): - - '''Project Los Angeles - Tegridy Code 2020''' - - print('Tegridy Optimus TXT to Notes Converter') - print('Converting TXT to Notes list...Please wait...') - - song_name = '' - - if line_by_line_dataset: - input_string = Optimus_TXT_String.split('\n') - else: - input_string = Optimus_TXT_String.split(' ') - - if line_by_line_dataset: - name_string = Optimus_TXT_String.split('\n')[0].split('=') - else: - name_string = Optimus_TXT_String.split(' ')[0].split('=') - - # Zero token - zt = '' - - zt += chr(char_encoding_offset) + chr(char_encoding_offset) - - if has_MIDI_channels: - zt += chr(char_encoding_offset) - - if has_velocities: - zt += chr(char_encoding_offset) + chr(char_encoding_offset) - - else: - zt += chr(char_encoding_offset) - - if zero_token: - if name_string[0] == zt: - song_name = name_string[1] - - else: - if name_string[0] == 'SONG': - song_name = name_string[1] - - output_list = [] - st = 0 - - for i in range(2, len(input_string)-1): - - if save_only_first_composition: - if zero_token: - if input_string[i].split('=')[0] == zt: - - song_name = name_string[1] - break - - else: - if input_string[i].split('=')[0] == 'SONG': - - song_name = name_string[1] - break - try: - istring = input_string[i] - - if has_MIDI_channels == False: - step = 4 - - if has_MIDI_channels == True: - step = 5 - - if has_velocities == False: - step -= 1 - - st += int(ord(istring[0]) - 
char_encoding_offset) * dataset_MIDI_events_time_denominator - - if not karaoke: - for s in range(0, len(istring), step): - if has_MIDI_channels==True: - if step > 3 and len(istring) > 2: - out = [] - out.append('note') - - out.append(st) # Start time - - out.append(int(ord(istring[s+1]) - char_encoding_offset) * dataset_MIDI_events_time_denominator) # Duration - - if has_velocities: - out.append(int(ord(istring[s+4]) - char_encoding_offset)) # Channel - else: - out.append(int(ord(istring[s+3]) - char_encoding_offset)) # Channel - - out.append(int(ord(istring[s+2]) - char_encoding_offset)) # Pitch - - if simulate_velocity: - if s == 0: - sim_vel = int(ord(istring[s+2]) - char_encoding_offset) - out.append(sim_vel) # Simulated Velocity (= highest note's pitch) - else: - out.append(int(ord(istring[s+3]) - char_encoding_offset)) # Velocity - - if has_MIDI_channels==False: - if step > 3 and len(istring) > 2: - out = [] - out.append('note') - - out.append(st) # Start time - out.append(int(ord(istring[s+1]) - char_encoding_offset) * dataset_MIDI_events_time_denominator) # Duration - out.append(0) # Channel - out.append(int(ord(istring[s+2]) - char_encoding_offset)) # Pitch - - if simulate_velocity: - if s == 0: - sim_vel = int(ord(istring[s+2]) - char_encoding_offset) - out.append(sim_vel) # Simulated Velocity (= highest note's pitch) - else: - out.append(int(ord(istring[s+3]) - char_encoding_offset)) # Velocity - - if step == 3 and len(istring) > 2: - out = [] - out.append('note') - - out.append(st) # Start time - out.append(int(ord(istring[s+1]) - char_encoding_offset) * dataset_MIDI_events_time_denominator) # Duration - out.append(0) # Channel - out.append(int(ord(istring[s+2]) - char_encoding_offset)) # Pitch - - out.append(int(ord(istring[s+2]) - char_encoding_offset)) # Velocity = Pitch - - output_list.append(out) - - if karaoke: - try: - out = [] - out.append('note') - - out.append(st) # Start time - out.append(int(ord(istring[1]) - char_encoding_offset) * dataset_MIDI_events_time_denominator) # Duration - out.append(int(ord(istring[4]) - char_encoding_offset)) # Channel - out.append(int(ord(istring[2]) - char_encoding_offset)) # Pitch - - if simulate_velocity: - if s == 0: - sim_vel = int(ord(istring[2]) - char_encoding_offset) - out.append(sim_vel) # Simulated Velocity (= highest note's pitch) - else: - out.append(int(ord(istring[3]) - char_encoding_offset)) # Velocity - output_list.append(out) - out = [] - if istring.split('=')[1] != '': - out.append('lyric') - out.append(st) - out.append(istring.split('=')[1]) - output_list.append(out) - except: - continue - - - except: - print('Bad note string:', istring) - continue - - # Simple error control just in case - S = [] - for x in output_list: - if len(x) == 6 or len(x) == 3: - S.append(x) - - output_list.clear() - output_list = copy.deepcopy(S) - - - print('Task complete! Enjoy! 
:)') - - return output_list, song_name - -################################################################################### - -def Optimus_Data2TXT_Converter(data, - dataset_time_denominator=1, - transpose_by = 0, - char_offset = 33, - line_by_line_output = True, - output_velocity = False, - output_MIDI_channels = False): - - - '''Input: data as a flat chords list of flat chords lists - - Output: TXT string - INTs - - Project Los Angeles - Tegridy Code 2021''' - - txt = '' - TXT = '' - - quit = False - counter = 0 - - INTs = [] - INTs_f = [] - - for d in tqdm.tqdm(sorted(data)): - - if quit == True: - break - - txt = 'SONG=' + str(counter) - counter += 1 - - if line_by_line_output: - txt += chr(10) - else: - txt += chr(32) - - INTs = [] - - # TXT Stuff - previous_event = copy.deepcopy(d[0]) - for event in sorted(d): - - # Computing events details - start_time = int(abs(event[1] - previous_event[1]) / dataset_time_denominator) - - duration = int(previous_event[2] / dataset_time_denominator) - - channel = int(previous_event[3]) - - pitch = int(previous_event[4] + transpose_by) - - velocity = int(previous_event[5]) - - INTs.append([start_time, duration, pitch]) - - # Converting to TXT if possible... - try: - txt += str(chr(start_time + char_offset)) - txt += str(chr(duration + char_offset)) - txt += str(chr(pitch + char_offset)) - if output_velocity: - txt += str(chr(velocity + char_offset)) - if output_MIDI_channels: - txt += str(chr(channel + char_offset)) - - if line_by_line_output: - txt += chr(10) - else: - txt += chr(32) - - previous_event = copy.deepcopy(event) - except KeyboardInterrupt: - quit = True - break - except: - print('Problematic MIDI data. Skipping...') - continue - - if not line_by_line_output: - txt += chr(10) - - TXT += txt - INTs_f.extend(INTs) - - return TXT, INTs_f - -################################################################################### - -def Optimus_Squash(chords_list, simulate_velocity=True, mono_compression=False): - - '''Input: Flat chords list - Simulate velocity or not - Mono-compression enabled or disabled - - Default is almost lossless 25% compression, otherwise, lossy 50% compression (mono-compression) - - Output: Squashed chords list - Resulting compression level - - Please note that if drums are passed through as is - - Project Los Angeles - Tegridy Code 2021''' - - output = [] - ptime = 0 - vel = 0 - boost = 15 - stptc = [] - ocount = 0 - rcount = 0 - - for c in chords_list: - - cc = copy.deepcopy(c) - ocount += 1 - - if [cc[1], cc[3], (cc[4] % 12) + 60] not in stptc: - stptc.append([cc[1], cc[3], (cc[4] % 12) + 60]) - - if cc[3] != 9: - cc[4] = (c[4] % 12) + 60 - - if simulate_velocity and c[1] != ptime: - vel = c[4] + boost - - if cc[3] != 9: - cc[5] = vel - - if mono_compression: - if c[1] != ptime: - output.append(cc) - rcount += 1 - else: - output.append(cc) - rcount += 1 - - ptime = c[1] - - output.sort(key=lambda x: (x[1], x[4])) - - comp_level = 100 - int((rcount * 100) / ocount) - - return output, comp_level - -################################################################################### - -def Optimus_Signature(chords_list, calculate_full_signature=False): - - '''Optimus Signature - - ---In the name of the search for a perfect score slice signature--- - - Input: Flat chords list to evaluate - - Output: Full Optimus Signature as a list - Best/recommended Optimus Signature as a list - - Project Los Angeles - Tegridy Code 2021''' - - # Pitches - - ## StDev - if calculate_full_signature: - psd = statistics.stdev([int(y[4]) for 
y in chords_list]) - else: - psd = 0 - - ## Median - pmh = statistics.median_high([int(y[4]) for y in chords_list]) - pm = statistics.median([int(y[4]) for y in chords_list]) - pml = statistics.median_low([int(y[4]) for y in chords_list]) - - ## Mean - if calculate_full_signature: - phm = statistics.harmonic_mean([int(y[4]) for y in chords_list]) - else: - phm = 0 - - # Durations - dur = statistics.median([int(y[2]) for y in chords_list]) - - # Velocities - - vel = statistics.median([int(y[5]) for y in chords_list]) - - # Beats - mtds = statistics.median([int(abs(chords_list[i-1][1]-chords_list[i][1])) for i in range(1, len(chords_list))]) - if calculate_full_signature: - hmtds = statistics.harmonic_mean([int(abs(chords_list[i-1][1]-chords_list[i][1])) for i in range(1, len(chords_list))]) - else: - hmtds = 0 - - # Final Optimus signatures - full_Optimus_signature = [round(psd), round(pmh), round(pm), round(pml), round(phm), round(dur), round(vel), round(mtds), round(hmtds)] - ######################## PStDev PMedianH PMedian PMedianL PHarmoMe Duration Velocity Beat HarmoBeat - - best_Optimus_signature = [round(pmh), round(pm), round(pml), round(dur, -1), round(vel, -1), round(mtds, -1)] - ######################## PMedianH PMedian PMedianL Duration Velocity Beat - - # Return... - return full_Optimus_signature, best_Optimus_signature - - -################################################################################### -# -# TMIDI 2.0 Helper functions -# -################################################################################### - -def Tegridy_FastSearch(needle, haystack, randomize = False): - - ''' - - Input: Needle iterable - Haystack iterable - Randomize search range (this prevents determinism) - - Output: Start index of the needle iterable in a haystack iterable - If nothing found, -1 is returned - - Project Los Angeles - Tegridy Code 2021''' - - need = copy.deepcopy(needle) - - try: - if randomize: - idx = haystack.index(need, secrets.randbelow(len(haystack)-len(need))) - else: - idx = haystack.index(need) - - except KeyboardInterrupt: - return -1 - - except: - return -1 - - return idx - -################################################################################### - -def Tegridy_Chord_Match(chord1, chord2, match_type=2): - - '''Tegridy Chord Match - - Input: Two chords to evaluate - Match type: 2 = duration, channel, pitch, velocity - 3 = channel, pitch, velocity - 4 = pitch, velocity - 5 = velocity - - Output: Match rating (0-100) - NOTE: Match rating == -1 means identical source chords - NOTE: Match rating == 100 means mutual shortest chord - - Project Los Angeles - Tegridy Code 2021''' - - match_rating = 0 - - if chord1 == []: - return 0 - if chord2 == []: - return 0 - - if chord1 == chord2: - return -1 - - else: - zipped_pairs = list(zip(chord1, chord2)) - zipped_diff = abs(len(chord1) - len(chord2)) - - short_match = [False] - for pair in zipped_pairs: - cho1 = ' '.join([str(y) for y in pair[0][match_type:]]) - cho2 = ' '.join([str(y) for y in pair[1][match_type:]]) - if cho1 == cho2: - short_match.append(True) - else: - short_match.append(False) - - if True in short_match: - return 100 - - pairs_ratings = [] - - for pair in zipped_pairs: - cho1 = ' '.join([str(y) for y in pair[0][match_type:]]) - cho2 = ' '.join([str(y) for y in pair[1][match_type:]]) - pairs_ratings.append(SM(None, cho1, cho2).ratio()) - - match_rating = sum(pairs_ratings) / len(pairs_ratings) * 100 - - return match_rating - 
-################################################################################### - -def Tegridy_Last_Chord_Finder(chords_list): - - '''Tegridy Last Chord Finder - - Input: Flat chords list - - Output: Last detected chord of the chords list - Last chord start index in the original chords list - First chord end index in the original chords list - - Project Los Angeles - Tegridy Code 2021''' - - chords = [] - cho = [] - - ptime = 0 - - i = 0 - - pc_idx = 0 - fc_idx = 0 - - chords_list.sort(reverse=False, key=lambda x: x[1]) - - for cc in chords_list: - - if cc[1] == ptime: - - cho.append(cc) - - ptime = cc[1] - - else: - if pc_idx == 0: - fc_idx = chords_list.index(cc) - pc_idx = chords_list.index(cc) - - chords.append(cho) - - cho = [] - - cho.append(cc) - - ptime = cc[1] - - i += 1 - - if cho != []: - chords.append(cho) - i += 1 - - return chords_list[pc_idx:], pc_idx, fc_idx - -################################################################################### - -def Tegridy_Chords_Generator(chords_list, shuffle_pairs = True, remove_single_notes=False): - - '''Tegridy Score Chords Pairs Generator - - Input: Flat chords list - Shuffle pairs (recommended) - - Output: List of chords - - Average time(ms) per chord - Average time(ms) per pitch - Average chords delta time - - Average duration - Average channel - Average pitch - Average velocity - - Project Los Angeles - Tegridy Code 2021''' - - chords = [] - cho = [] - - i = 0 - - # Sort by start time - chords_list.sort(reverse=False, key=lambda x: x[1]) - - # Main loop - pcho = chords_list[0] - for cc in chords_list: - if cc[1] == pcho[1]: - - cho.append(cc) - pcho = copy.deepcopy(cc) - - else: - if not remove_single_notes: - chords.append(cho) - cho = [] - cho.append(cc) - pcho = copy.deepcopy(cc) - - i += 1 - else: - if len(cho) > 1: - chords.append(cho) - cho = [] - cho.append(cc) - pcho = copy.deepcopy(cc) - - i += 1 - - # Averages - t0 = chords[0][0][1] - t1 = chords[-1][-1][1] - tdel = abs(t1 - t0) - avg_ms_per_chord = int(tdel / i) - avg_ms_per_pitch = int(tdel / len(chords_list)) - - # Delta time - tds = [int(abs(chords_list[i-1][1]-chords_list[i][1]) / 1) for i in range(1, len(chords_list))] - if len(tds) != 0: avg_delta_time = int(sum(tds) / len(tds)) - - # Chords list attributes - p = int(sum([int(y[4]) for y in chords_list]) / len(chords_list)) - d = int(sum([int(y[2]) for y in chords_list]) / len(chords_list)) - c = int(sum([int(y[3]) for y in chords_list]) / len(chords_list)) - v = int(sum([int(y[5]) for y in chords_list]) / len(chords_list)) - - # Final shuffle - if shuffle_pairs: - random.shuffle(chords) - - return chords, [avg_ms_per_chord, avg_ms_per_pitch, avg_delta_time], [d, c, p, v] - -################################################################################### - -def Tegridy_Chords_List_Music_Features(chords_list, st_dur_div = 1, pitch_div = 1, vel_div = 1): - - '''Tegridy Chords List Music Features - - Input: Flat chords list - - Output: A list of the extracted chords list's music features - - Project Los Angeles - Tegridy Code 2021''' - - chords_list1 = [x for x in chords_list if x] - chords_list1.sort(reverse=False, key=lambda x: x[1]) - - # Features extraction code - - melody_list = [] - bass_melody = [] - melody_chords = [] - mel_avg_tds = [] - mel_chrd_avg_tds = [] - bass_melody_avg_tds = [] - - #print('Grouping by start time. 
This will take a while...') - values = set(map(lambda x:x[1], chords_list1)) # Non-multithreaded function version just in case - - groups = [[y for y in chords_list1 if y[1]==x and len(y) == 6] for x in values] # Grouping notes into chords while discarding bad notes... - - #print('Sorting events...') - for items in groups: - items.sort(reverse=True, key=lambda x: x[4]) # Sorting events by pitch - melody_list.append(items[0]) # Creating final melody list - melody_chords.append(items) # Creating final chords list - bass_melody.append(items[-1]) # Creating final bass melody list - - #print('Final sorting by start time...') - melody_list.sort(reverse=False, key=lambda x: x[1]) # Sorting events by start time - melody_chords.sort(reverse=False, key=lambda x: x[0][1]) # Sorting events by start time - bass_melody.sort(reverse=False, key=lambda x: x[1]) # Sorting events by start time - - # Extracting music features from the chords list - - # Melody features - mel_avg_pitch = int(sum([y[4] for y in melody_list]) / len(melody_list) / pitch_div) - mel_avg_dur = int(sum([int(y[2] / st_dur_div) for y in melody_list]) / len(melody_list)) - mel_avg_vel = int(sum([int(y[5] / vel_div) for y in melody_list]) / len(melody_list)) - mel_avg_chan = int(sum([int(y[3]) for y in melody_list]) / len(melody_list)) - - mel_tds = [int(abs(melody_list[i-1][1]-melody_list[i][1])) for i in range(1, len(melody_list))] - if len(mel_tds) != 0: mel_avg_tds = int(sum(mel_tds) / len(mel_tds) / st_dur_div) - - melody_features = [mel_avg_tds, mel_avg_dur, mel_avg_chan, mel_avg_pitch, mel_avg_vel] - - # Chords list features - mel_chrd_avg_pitch = int(sum([y[4] for y in chords_list1]) / len(chords_list1) / pitch_div) - mel_chrd_avg_dur = int(sum([int(y[2] / st_dur_div) for y in chords_list1]) / len(chords_list1)) - mel_chrd_avg_vel = int(sum([int(y[5] / vel_div) for y in chords_list1]) / len(chords_list1)) - mel_chrd_avg_chan = int(sum([int(y[3]) for y in chords_list1]) / len(chords_list1)) - - mel_chrd_tds = [int(abs(chords_list1[i-1][1]-chords_list1[i][1])) for i in range(1, len(chords_list1))] - if len(mel_chrd_tds) != 0: mel_chrd_avg_tds = int(sum(mel_chrd_tds) / len(mel_chrd_tds) / st_dur_div) - - chords_list_features = [mel_chrd_avg_tds, mel_chrd_avg_dur, mel_chrd_avg_chan, mel_chrd_avg_pitch, mel_chrd_avg_vel] - - # Bass melody features - bass_melody_avg_pitch = int(sum([y[4] for y in bass_melody]) / len(bass_melody) / pitch_div) - bass_melody_avg_dur = int(sum([int(y[2] / st_dur_div) for y in bass_melody]) / len(bass_melody)) - bass_melody_avg_vel = int(sum([int(y[5] / vel_div) for y in bass_melody]) / len(bass_melody)) - bass_melody_avg_chan = int(sum([int(y[3]) for y in bass_melody]) / len(bass_melody)) - - bass_melody_tds = [int(abs(bass_melody[i-1][1]-bass_melody[i][1])) for i in range(1, len(bass_melody))] - if len(bass_melody_tds) != 0: bass_melody_avg_tds = int(sum(bass_melody_tds) / len(bass_melody_tds) / st_dur_div) - - bass_melody_features = [bass_melody_avg_tds, bass_melody_avg_dur, bass_melody_avg_chan, bass_melody_avg_pitch, bass_melody_avg_vel] - - # A list to return all features - music_features = [] - - music_features.extend([len(chords_list1)]) # Count of the original chords list notes - - music_features.extend(melody_features) # Extracted melody features - music_features.extend(chords_list_features) # Extracted chords list features - music_features.extend(bass_melody_features) # Extracted bass melody features - music_features.extend([sum([y[4] for y in chords_list1])]) # Sum of all pitches in the original 
chords list - - return music_features - -################################################################################### - -def Tegridy_Transform(chords_list, to_pitch=60, to_velocity=-1): - - '''Tegridy Transform - - Input: Flat chords list - Desired average pitch (-1 == no change) - Desired average velocity (-1 == no change) - - Output: Transformed flat chords list - - Project Los Angeles - Tegridy Code 2021''' - - transformed_chords_list = [] - - chords_list.sort(reverse=False, key=lambda x: x[1]) - - chords_list_features = Optimus_Signature(chords_list)[1] - - pitch_diff = int((chords_list_features[0] + chords_list_features[1] + chords_list_features[2]) / 3) - to_pitch - velocity_diff = chords_list_features[4] - to_velocity - - for c in chords_list: - cc = copy.deepcopy(c) - if c[3] != 9: # Except the drums - if to_pitch != -1: - cc[4] = c[4] - pitch_diff - - if to_velocity != -1: - cc[5] = c[5] - velocity_diff - - transformed_chords_list.append(cc) - - return transformed_chords_list - -################################################################################### - -def Tegridy_MIDI_Zip_Notes_Summarizer(chords_list, match_type = 4): - - '''Tegridy MIDI Zip Notes Summarizer - - Input: Flat chords list / SONG - Match type according to 'note' event of MIDI.py - - Output: Summarized chords list - Number of summarized notes - Number of dicarted notes - - Project Los Angeles - Tegridy Code 2021''' - - i = 0 - j = 0 - out1 = [] - pout = [] - - - for o in chords_list: - - # MIDI Zip - - if o[match_type:] not in pout: - pout.append(o[match_type:]) - - out1.append(o) - j += 1 - - else: - i += 1 - - return out1, i - -################################################################################### - -def Tegridy_Score_Chords_Pairs_Generator(chords_list, shuffle_pairs = True, remove_single_notes=False): - - '''Tegridy Score Chords Pairs Generator - - Input: Flat chords list - Shuffle pairs (recommended) - - Output: Score chords pairs list - Number of created pairs - Number of detected chords - - Project Los Angeles - Tegridy Code 2021''' - - chords = [] - cho = [] - - i = 0 - j = 0 - - chords_list.sort(reverse=False, key=lambda x: x[1]) - pcho = chords_list[0] - for cc in chords_list: - if cc[1] == pcho[1]: - - cho.append(cc) - pcho = copy.deepcopy(cc) - - else: - if not remove_single_notes: - chords.append(cho) - cho = [] - cho.append(cc) - pcho = copy.deepcopy(cc) - - i += 1 - else: - if len(cho) > 1: - chords.append(cho) - cho = [] - cho.append(cc) - pcho = copy.deepcopy(cc) - - i += 1 - - chords_pairs = [] - for i in range(len(chords)-1): - chords_pairs.append([chords[i], chords[i+1]]) - j += 1 - if shuffle_pairs: random.shuffle(chords_pairs) - - return chords_pairs, j, i - -################################################################################### - -def Tegridy_Sliced_Score_Pairs_Generator(chords_list, number_of_miliseconds_per_slice=2000, shuffle_pairs = False): - - '''Tegridy Sliced Score Pairs Generator - - Input: Flat chords list - Number of miliseconds per slice - - Output: Sliced score pairs list - Number of created slices - - Project Los Angeles - Tegridy Code 2021''' - - chords = [] - cho = [] - - time = number_of_miliseconds_per_slice - - i = 0 - - chords_list1 = [x for x in chords_list if x] - chords_list1.sort(reverse=False, key=lambda x: x[1]) - pcho = chords_list1[0] - for cc in chords_list1[1:]: - - if cc[1] <= time: - - cho.append(cc) - - else: - if cho != [] and pcho != []: chords.append([pcho, cho]) - pcho = copy.deepcopy(cho) - cho = [] - cho.append(cc) 
- time += number_of_miliseconds_per_slice - i += 1 - - if cho != [] and pcho != []: - chords.append([pcho, cho]) - pcho = copy.deepcopy(cho) - i += 1 - - if shuffle_pairs: random.shuffle(chords) - - return chords, i - -################################################################################### - -def Tegridy_Timings_Converter(chords_list, - max_delta_time = 1000, - fixed_start_time = 250, - start_time = 0, - start_time_multiplier = 1, - durations_multiplier = 1): - - '''Tegridy Timings Converter - - Input: Flat chords list - Max delta time allowed between notes - Fixed start note time for excessive gaps - - Output: Converted flat chords list - - Project Los Angeles - Tegridy Code 2021''' - - song = chords_list - - song1 = [] - - p = song[0] - - p[1] = start_time - - time = start_time - - delta = [0] - - for i in range(len(song)): - if song[i][0] == 'note': - ss = copy.deepcopy(song[i]) - if song[i][1] != p[1]: - - if abs(song[i][1] - p[1]) > max_delta_time: - time += fixed_start_time - else: - time += abs(song[i][1] - p[1]) - delta.append(abs(song[i][1] - p[1])) - - ss[1] = int(round(time * start_time_multiplier, -1)) - ss[2] = int(round(song[i][2] * durations_multiplier, -1)) - song1.append(ss) - - p = copy.deepcopy(song[i]) - else: - - ss[1] = int(round(time * start_time_multiplier, -1)) - ss[2] = int(round(song[i][2] * durations_multiplier, -1)) - song1.append(ss) - - p = copy.deepcopy(song[i]) - - else: - ss = copy.deepcopy(song[i]) - ss[1] = time - song1.append(ss) - - average_delta_st = int(sum(delta) / len(delta)) - average_duration = int(sum([y[2] for y in song1 if y[0] == 'note']) / len([y[2] for y in song1 if y[0] == 'note'])) - - song1.sort(reverse=False, key=lambda x: x[1]) - - return song1, time, average_delta_st, average_duration - -################################################################################### - -def Tegridy_Score_Slicer(chords_list, number_of_miliseconds_per_slice=2000, overlap_notes = 0, overlap_chords=False): - - '''Tegridy Score Slicer - - Input: Flat chords list - Number of miliseconds per slice - - Output: Sliced chords list - Number of created slices - - Project Los Angeles - Tegridy Code 2021''' - - chords = [] - cho = [] - - time = number_of_miliseconds_per_slice - ptime = 0 - - i = 0 - - pc_idx = 0 - - chords_list.sort(reverse=False, key=lambda x: x[1]) - - for cc in chords_list: - - if cc[1] <= time: - - cho.append(cc) - - if ptime != cc[1]: - pc_idx = cho.index(cc) - - ptime = cc[1] - - - else: - - if overlap_chords: - chords.append(cho) - cho.extend(chords[-1][pc_idx:]) - - else: - chords.append(cho[:pc_idx]) - - cho = [] - - cho.append(cc) - - time += number_of_miliseconds_per_slice - ptime = cc[1] - - i += 1 - - if cho != []: - chords.append(cho) - i += 1 - - return [x for x in chords if x], i - -################################################################################### - -def Tegridy_TXT_Tokenizer(input_TXT_string, line_by_line_TXT_string=True): - - '''Tegridy TXT Tokenizer - - Input: TXT String - - Output: Tokenized TXT string + forward and reverse dics - - Project Los Angeles - Tegridy Code 2021''' - - print('Tegridy TXT Tokenizer') - - if line_by_line_TXT_string: - T = input_TXT_string.split() - else: - T = input_TXT_string.split(' ') - - DIC = dict(zip(T, range(len(T)))) - RDIC = dict(zip(range(len(T)), T)) - - TXTT = '' - - for t in T: - try: - TXTT += chr(DIC[t]) - except: - print('Error. 
Could not finish.') - return TXTT, DIC, RDIC - - print('Done!') - - return TXTT, DIC, RDIC - -################################################################################### - -def Tegridy_TXT_DeTokenizer(input_Tokenized_TXT_string, RDIC): - - '''Tegridy TXT Tokenizer - - Input: Tokenized TXT String - - - Output: DeTokenized TXT string - - Project Los Angeles - Tegridy Code 2021''' - - print('Tegridy TXT DeTokenizer') - - Q = list(input_Tokenized_TXT_string) - c = 0 - RTXT = '' - for q in Q: - try: - RTXT += RDIC[ord(q)] + chr(10) - except: - c+=1 - - print('Number of errors:', c) - - print('Done!') - - return RTXT - -################################################################################### - -def Tegridy_List_Slicer(input_list, slices_length_in_notes=20): - - '''Input: List to slice - Desired slices length in notes - - Output: Sliced list of lists - - Project Los Angeles - Tegridy Code 2021''' - - for i in range(0, len(input_list), slices_length_in_notes): - yield input_list[i:i + slices_length_in_notes] - -################################################################################### - -def Tegridy_Split_List(list_to_split, split_value=0): - - # src courtesy of www.geeksforgeeks.org - - # using list comprehension + zip() + slicing + enumerate() - # Split list into lists by particular value - size = len(list_to_split) - idx_list = [idx + 1 for idx, val in - enumerate(list_to_split) if val == split_value] - - - res = [list_to_split[i: j] for i, j in - zip([0] + idx_list, idx_list + - ([size] if idx_list[-1] != size else []))] - - # print result - # print("The list after splitting by a value : " + str(res)) - - return res - -################################################################################### - -# This is the end of the TMIDI X Python module - -################################################################################### diff --git a/spaces/atimughal662/InfoFusion/src/enums.py b/spaces/atimughal662/InfoFusion/src/enums.py deleted file mode 100644 index c6013be6c9e773834cfb23e8c6cd856839856782..0000000000000000000000000000000000000000 --- a/spaces/atimughal662/InfoFusion/src/enums.py +++ /dev/null @@ -1,225 +0,0 @@ -from enum import Enum - - -class PromptType(Enum): - custom = -1 - plain = 0 - instruct = 1 - quality = 2 - human_bot = 3 - dai_faq = 4 - summarize = 5 - simple_instruct = 6 - instruct_vicuna = 7 - instruct_with_end = 8 - human_bot_orig = 9 - prompt_answer = 10 - open_assistant = 11 - wizard_lm = 12 - wizard_mega = 13 - instruct_vicuna2 = 14 - instruct_vicuna3 = 15 - wizard2 = 16 - wizard3 = 17 - instruct_simple = 18 - wizard_vicuna = 19 - openai = 20 - openai_chat = 21 - gptj = 22 - prompt_answer_openllama = 23 - vicuna11 = 24 - mptinstruct = 25 - mptchat = 26 - falcon = 27 - guanaco = 28 - llama2 = 29 - beluga = 30 - wizard3nospace = 31 - one_shot = 32 - falcon_chat = 33 - - -class DocumentSubset(Enum): - Relevant = 0 - RelSources = 1 - TopKSources = 2 - - -non_query_commands = [ - DocumentSubset.RelSources.name, - DocumentSubset.TopKSources.name -] - - -class DocumentChoice(Enum): - ALL = 'All' - - -class LangChainMode(Enum): - """LangChain mode""" - - DISABLED = "Disabled" - LLM = "LLM" - WIKI = "wiki" - WIKI_FULL = "wiki_full" - USER_DATA = "UserData" - MY_DATA = "MyData" - GITHUB_H2OGPT = "github h2oGPT" - H2O_DAI_DOCS = "DriverlessAI docs" - - -class LangChainTypes(Enum): - SHARED = 'shared' - PERSONAL = 'personal' - EITHER = 'either' # used when user did not pass which one, so need to try both - - -# modes should not be removed from 
visible list or added by name -langchain_modes_intrinsic = [LangChainMode.DISABLED.value, - LangChainMode.LLM.value, - LangChainMode.MY_DATA.value] - -langchain_modes_non_db = [LangChainMode.DISABLED.value, - LangChainMode.LLM.value] - - -class LangChainAction(Enum): - """LangChain action""" - - QUERY = "Query" - # WIP: - # SUMMARIZE_MAP = "Summarize_map_reduce" - SUMMARIZE_MAP = "Summarize" - SUMMARIZE_ALL = "Summarize_all" - SUMMARIZE_REFINE = "Summarize_refine" - - -class LangChainAgent(Enum): - """LangChain agents""" - - SEARCH = "Search" - COLLECTION = "Collection" - PYTHON = "Python" - CSV = "CSV" - PANDAS = "Pandas" - JSON = 'JSON' - - -no_server_str = no_lora_str = no_model_str = '[None/Remove]' - -# from site-packages/langchain/llms/openai.py -# but needed since ChatOpenAI doesn't have this information -model_token_mapping = { - "gpt-4": 8192, - "gpt-4-0314": 8192, - "gpt-4-32k": 32768, - "gpt-4-32k-0314": 32768, - "gpt-3.5-turbo": 4096, - "gpt-3.5-turbo-16k": 16 * 1024, - "gpt-3.5-turbo-0301": 4096, - "text-ada-001": 2049, - "ada": 2049, - "text-babbage-001": 2040, - "babbage": 2049, - "text-curie-001": 2049, - "curie": 2049, - "davinci": 2049, - "text-davinci-003": 4097, - "text-davinci-002": 4097, - "code-davinci-002": 8001, - "code-davinci-001": 8001, - "code-cushman-002": 2048, - "code-cushman-001": 2048, -} - -font_size = 2 -head_acc = 40 # 40 for 6-way -source_prefix = "Sources [Score | Link]:" -source_postfix = "End Sources<p>" - -super_source_prefix = f"""<details><summary><font size="{font_size}">Sources</font></summary><font size="{font_size}"><font size="{font_size}">Sources [Score | Link]:""" -super_source_postfix = f"""End Sources<p></font></font></details>""" - - -def t5_type(model_name): - return 't5' == model_name.lower() or \ - 't5-' in model_name.lower() or \ - 'flan-' in model_name.lower() or \ - 'fastchat-t5' in model_name.lower() - - -def get_langchain_prompts(pre_prompt_query, prompt_query, pre_prompt_summary, prompt_summary, - model_name, inference_server, model_path_llama): - if model_name and ('falcon' in model_name or - 'Llama-2'.lower() in model_name.lower() or - model_path_llama and 'llama-2' in model_path_llama.lower()) or \ - model_name in [None, '']: - # use when no model, like no --base_model - pre_prompt_query1 = "Pay attention and remember the information below, which will help to answer the question or imperative after the context ends.\n" - prompt_query1 = "According to only the information in the document sources provided within the context above, " - elif inference_server and inference_server.startswith('openai'): - pre_prompt_query1 = "Pay attention and remember the information below, which will help to answer the question or imperative after the context ends. 
If the answer cannot be primarily obtained from information within the context, then respond that the answer does not appear in the context of the documents.\n" - prompt_query1 = "According to (primarily) the information in the document sources provided within context above, " - else: - pre_prompt_query1 = "" - prompt_query1 = "" - - pre_prompt_summary1 = """In order to write a concise single-paragraph or bulleted list summary, pay attention to the following text\n""" - prompt_summary1 = "Using only the information in the document sources above, write a condensed and concise summary of key results (preferably as bullet points):\n" - - if pre_prompt_query is None: - pre_prompt_query = pre_prompt_query1 - if prompt_query is None: - prompt_query = prompt_query1 - if pre_prompt_summary is None: - pre_prompt_summary = pre_prompt_summary1 - if prompt_summary is None: - prompt_summary = prompt_summary1 - - return pre_prompt_query, prompt_query, pre_prompt_summary, prompt_summary - - -def gr_to_lg(image_loaders, - pdf_loaders, - url_loaders, - **kwargs, - ): - if image_loaders is None: - image_loaders = kwargs['image_loaders_options0'] - if pdf_loaders is None: - pdf_loaders = kwargs['pdf_loaders_options0'] - if url_loaders is None: - url_loaders = kwargs['url_loaders_options0'] - # translate: - # 'auto' wouldn't be used here - ret = dict( - # urls - use_unstructured='Unstructured' in url_loaders, - use_playwright='PlayWright' in url_loaders, - use_selenium='Selenium' in url_loaders, - - # pdfs - use_pymupdf='on' if 'PyMuPDF' in pdf_loaders else 'off', - use_unstructured_pdf='on' if 'Unstructured' in pdf_loaders else 'off', - use_pypdf='on' if 'PyPDF' in pdf_loaders else 'off', - enable_pdf_ocr='on' if 'OCR' in pdf_loaders else 'off', - enable_pdf_doctr='on' if 'DocTR' in pdf_loaders else 'off', - try_pdf_as_html='on' if 'TryHTML' in pdf_loaders else 'off', - - # images - enable_ocr='OCR' in image_loaders, - enable_doctr='DocTR' in image_loaders, - enable_pix2struct='Pix2Struct' in image_loaders, - enable_captions='Caption' in image_loaders or 'CaptionBlip2' in image_loaders, - ) - if 'CaptionBlip2' in image_loaders: - # just override, don't actually do both even if user chose both - captions_model = "Salesforce/blip2-flan-t5-xl" - else: - captions_model = kwargs['captions_model'] - return ret, captions_model - - -invalid_key_msg = 'Invalid Access Key, request access key from sales@h2o.ai or jon.mckinney@h2o.ai' - -docs_ordering_types = ['best_first', 'best_near_prompt', 'reverse_ucurve_sort'] diff --git a/spaces/aus10powell/TwitterAccounts/scripts/__init__.py b/spaces/aus10powell/TwitterAccounts/scripts/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/awacke1/ASRGenerateStory/README.md b/spaces/awacke1/ASRGenerateStory/README.md deleted file mode 100644 index 56b5a8a11809f7e19031b3c96a95a86daf20d6ec..0000000000000000000000000000000000000000 --- a/spaces/awacke1/ASRGenerateStory/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: 🗣️ASR Speech 2 Sentiment 2 Save 2 Story 2 Image 2 Video🎥 -emoji: 🗣️ASR❤️🖼️🎥 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.0.24 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/Games-In-Python/README.md b/spaces/awacke1/Games-In-Python/README.md deleted file mode 100644 index 
1076340e83db50cdfcc002fe43297968fed3f852..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Games-In-Python/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Games In Python -emoji: 😻 -colorFrom: yellow -colorTo: pink -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/acw-dr-llama-7b-chat/backupapp.py b/spaces/awacke1/acw-dr-llama-7b-chat/backupapp.py deleted file mode 100644 index 413ce04e1cbd0f1a5980b82775afe2791422f4e8..0000000000000000000000000000000000000000 --- a/spaces/awacke1/acw-dr-llama-7b-chat/backupapp.py +++ /dev/null @@ -1,728 +0,0 @@ -# Imports -import base64 -import glob -import json -import math -import openai -import os -import pytz -import re -import requests -import streamlit as st -import textract -import time -import zipfile -import huggingface_hub -import dotenv -from audio_recorder_streamlit import audio_recorder -from bs4 import BeautifulSoup -from collections import deque -from datetime import datetime -from dotenv import load_dotenv -from huggingface_hub import InferenceClient -from io import BytesIO -from langchain.chat_models import ChatOpenAI -from langchain.chains import ConversationalRetrievalChain -from langchain.embeddings import OpenAIEmbeddings -from langchain.memory import ConversationBufferMemory -from langchain.text_splitter import CharacterTextSplitter -from langchain.vectorstores import FAISS -from openai import ChatCompletion -from PyPDF2 import PdfReader -from templates import bot_template, css, user_template -from xml.etree import ElementTree as ET -import streamlit.components.v1 as components # Import Streamlit Components for HTML5 - - -st.set_page_config(page_title="🐪Llama Whisperer🦙 Voice Chat🌟", layout="wide") - - -def add_Med_Licensing_Exam_Dataset(): - import streamlit as st - from datasets import load_dataset - dataset = load_dataset("augtoma/usmle_step_1")['test'] # Using 'test' split - st.title("USMLE Step 1 Dataset Viewer") - if len(dataset) == 0: - st.write("😢 The dataset is empty.") - else: - st.write(""" - 🔍 Use the search box to filter questions or use the grid to scroll through the dataset. - """) - - # 👩‍🔬 Search Box - search_term = st.text_input("Search for a specific question:", "") - - # 🎛 Pagination - records_per_page = 100 - num_records = len(dataset) - num_pages = max(int(num_records / records_per_page), 1) - - # Skip generating the slider if num_pages is 1 (i.e., all records fit in one page) - if num_pages > 1: - page_number = st.select_slider("Select page:", options=list(range(1, num_pages + 1))) - else: - page_number = 1 # Only one page - - # 📊 Display Data - start_idx = (page_number - 1) * records_per_page - end_idx = start_idx + records_per_page - - # 🧪 Apply the Search Filter - filtered_data = [] - for record in dataset[start_idx:end_idx]: - if isinstance(record, dict) and 'text' in record and 'id' in record: - if search_term: - if search_term.lower() in record['text'].lower(): - filtered_data.append(record) - else: - filtered_data.append(record) - - # 🌐 Render the Grid - for record in filtered_data: - st.write(f"## Question ID: {record['id']}") - st.write(f"### Question:") - st.write(f"{record['text']}") - st.write(f"### Answer:") - st.write(f"{record['answer']}") - st.write("---") - - st.write(f"😊 Total Records: {num_records} | 📄 Displaying {start_idx+1} to {min(end_idx, num_records)}") - -# 1. 
Constants and Top Level UI Variables - -# My Inference API Copy -# API_URL = 'https://qe55p8afio98s0u3.us-east-1.aws.endpoints.huggingface.cloud' # Dr Llama -# Original: -API_URL = "https://api-inference.huggingface.co/models/meta-llama/Llama-2-7b-chat-hf" -API_KEY = os.getenv('API_KEY') -MODEL1="meta-llama/Llama-2-7b-chat-hf" -MODEL1URL="https://huggingface.co/meta-llama/Llama-2-7b-chat-hf" -HF_KEY = os.getenv('HF_KEY') -headers = { - "Authorization": f"Bearer {HF_KEY}", - "Content-Type": "application/json" -} -key = os.getenv('OPENAI_API_KEY') -prompt = f"Write instructions to teach anyone to write a discharge plan. List the entities, features and relationships to CCDA and FHIR objects in boldface." -should_save = st.sidebar.checkbox("💾 Save", value=True, help="Save your session data.") - -# 2. Prompt label button demo for LLM -def add_witty_humor_buttons(): - with st.expander("Wit and Humor 🤣", expanded=True): - # Tip about the Dromedary family - st.markdown("🔬 **Fun Fact**: Dromedaries, part of the camel family, have a single hump and are adapted to arid environments. Their 'superpowers' include the ability to survive without water for up to 7 days, thanks to their specialized blood cells and water storage in their hump.") - - # Define button descriptions - descriptions = { - "Generate Limericks 😂": "Write ten random adult limericks based on quotes that are tweet length and make you laugh 🎭", - "Wise Quotes 🧙": "Generate ten wise quotes that are tweet length 🦉", - "Funny Rhymes 🎤": "Create ten funny rhymes that are tweet length 🎶", - "Medical Jokes 💉": "Create ten medical jokes that are tweet length 🏥", - "Minnesota Humor ❄️": "Create ten jokes about Minnesota that are tweet length 🌨️", - "Top Funny Stories 📖": "Create ten funny stories that are tweet length 📚", - "More Funny Rhymes 🎙️": "Create ten more funny rhymes that are tweet length 🎵" - } - - # Create columns - col1, col2, col3 = st.columns([1, 1, 1], gap="small") - - # Add buttons to columns - if col1.button("Generate Limericks 😂"): - StreamLLMChatResponse(descriptions["Generate Limericks 😂"]) - - if col2.button("Wise Quotes 🧙"): - StreamLLMChatResponse(descriptions["Wise Quotes 🧙"]) - - if col3.button("Funny Rhymes 🎤"): - StreamLLMChatResponse(descriptions["Funny Rhymes 🎤"]) - - col4, col5, col6 = st.columns([1, 1, 1], gap="small") - - if col4.button("Medical Jokes 💉"): - StreamLLMChatResponse(descriptions["Medical Jokes 💉"]) - - if col5.button("Minnesota Humor ❄️"): - StreamLLMChatResponse(descriptions["Minnesota Humor ❄️"]) - - if col6.button("Top Funny Stories 📖"): - StreamLLMChatResponse(descriptions["Top Funny Stories 📖"]) - - col7 = st.columns(1, gap="small") - - if col7[0].button("More Funny Rhymes 🎙️"): - StreamLLMChatResponse(descriptions["More Funny Rhymes 🎙️"]) - -def SpeechSynthesis(result): - documentHTML5=''' - <!DOCTYPE html> - <html> - <head> - <title>Read It Aloud - - - -

-        <h1>🔊 Read It Aloud</h1>
      - - - - ''' - - components.html(documentHTML5, width=1280, height=1024) - #return result - - -# 3. Stream Llama Response -# @st.cache_resource -def StreamLLMChatResponse(prompt): - try: - endpoint_url = API_URL - hf_token = API_KEY - client = InferenceClient(endpoint_url, token=hf_token) - gen_kwargs = dict( - max_new_tokens=512, - top_k=30, - top_p=0.9, - temperature=0.2, - repetition_penalty=1.02, - stop_sequences=["\nUser:", "<|endoftext|>", ""], - ) - stream = client.text_generation(prompt, stream=True, details=True, **gen_kwargs) - report=[] - res_box = st.empty() - collected_chunks=[] - collected_messages=[] - allresults='' - for r in stream: - if r.token.special: - continue - if r.token.text in gen_kwargs["stop_sequences"]: - break - collected_chunks.append(r.token.text) - chunk_message = r.token.text - collected_messages.append(chunk_message) - try: - report.append(r.token.text) - if len(r.token.text) > 0: - result="".join(report).strip() - res_box.markdown(f'*{result}*') - - except: - st.write('Stream llm issue') - SpeechSynthesis(result) - return result - except: - st.write('Llama model is asleep. Starting up now on A10 - please give 5 minutes then retry as KEDA scales up from zero to activate running container(s).') - -# 4. Run query with payload -def query(payload): - response = requests.post(API_URL, headers=headers, json=payload) - st.markdown(response.json()) - return response.json() -def get_output(prompt): - return query({"inputs": prompt}) - -# 5. Auto name generated output files from time and content -def generate_filename(prompt, file_type): - central = pytz.timezone('US/Central') - safe_date_time = datetime.now(central).strftime("%m%d_%H%M") - replaced_prompt = prompt.replace(" ", "_").replace("\n", "_") - safe_prompt = "".join(x for x in replaced_prompt if x.isalnum() or x == "_")[:45] - return f"{safe_date_time}_{safe_prompt}.{file_type}" - -# 6. Speech transcription via OpenAI service -def transcribe_audio(openai_key, file_path, model): - openai.api_key = openai_key - OPENAI_API_URL = "https://api.openai.com/v1/audio/transcriptions" - headers = { - "Authorization": f"Bearer {openai_key}", - } - with open(file_path, 'rb') as f: - data = {'file': f} - response = requests.post(OPENAI_API_URL, headers=headers, files=data, data={'model': model}) - if response.status_code == 200: - st.write(response.json()) - chatResponse = chat_with_model(response.json().get('text'), '') # ************************************* - transcript = response.json().get('text') - filename = generate_filename(transcript, 'txt') - response = chatResponse - user_prompt = transcript - create_file(filename, user_prompt, response, should_save) - return transcript - else: - st.write(response.json()) - st.error("Error in API call.") - return None - -# 7. Auto stop on silence audio control for recording WAV files -def save_and_play_audio(audio_recorder): - audio_bytes = audio_recorder(key='audio_recorder') - if audio_bytes: - filename = generate_filename("Recording", "wav") - with open(filename, 'wb') as f: - f.write(audio_bytes) - st.audio(audio_bytes, format="audio/wav") - return filename - return None - -# 8. 
File creator that interprets type and creates output file for text, markdown and code -def create_file(filename, prompt, response, should_save=True): - if not should_save: - return - base_filename, ext = os.path.splitext(filename) - if ext in ['.txt', '.htm', '.md']: - with open(f"{base_filename}.md", 'w') as file: - try: - content = prompt.strip() + '\r\n' + response - file.write(content) - except: - st.write('.') - - #has_python_code = re.search(r"```python([\s\S]*?)```", prompt.strip() + '\r\n' + response) - #has_python_code = bool(re.search(r"```python([\s\S]*?)```", prompt.strip() + '\r\n' + response)) - #if has_python_code: - # python_code = re.findall(r"```python([\s\S]*?)```", response)[0].strip() - # with open(f"{base_filename}-Code.py", 'w') as file: - # file.write(python_code) - # with open(f"{base_filename}.md", 'w') as file: - # content = prompt.strip() + '\r\n' + response - # file.write(content) - -def truncate_document(document, length): - return document[:length] -def divide_document(document, max_length): - return [document[i:i+max_length] for i in range(0, len(document), max_length)] - -# 9. Sidebar with UI controls to review and re-run prompts and continue responses -@st.cache_resource -def get_table_download_link(file_path): - with open(file_path, 'r') as file: - data = file.read() - - b64 = base64.b64encode(data.encode()).decode() - file_name = os.path.basename(file_path) - ext = os.path.splitext(file_name)[1] # get the file extension - if ext == '.txt': - mime_type = 'text/plain' - elif ext == '.py': - mime_type = 'text/plain' - elif ext == '.xlsx': - mime_type = 'text/plain' - elif ext == '.csv': - mime_type = 'text/plain' - elif ext == '.htm': - mime_type = 'text/html' - elif ext == '.md': - mime_type = 'text/markdown' - else: - mime_type = 'application/octet-stream' # general binary data type - href = f'{file_name}' - return href - - -def CompressXML(xml_text): - root = ET.fromstring(xml_text) - for elem in list(root.iter()): - if isinstance(elem.tag, str) and 'Comment' in elem.tag: - elem.parent.remove(elem) - return ET.tostring(root, encoding='unicode', method="xml") - -# 10. Read in and provide UI for past files -@st.cache_resource -def read_file_content(file,max_length): - if file.type == "application/json": - content = json.load(file) - return str(content) - elif file.type == "text/html" or file.type == "text/htm": - content = BeautifulSoup(file, "html.parser") - return content.text - elif file.type == "application/xml" or file.type == "text/xml": - tree = ET.parse(file) - root = tree.getroot() - xml = CompressXML(ET.tostring(root, encoding='unicode')) - return xml - elif file.type == "text/markdown" or file.type == "text/md": - md = mistune.create_markdown() - content = md(file.read().decode()) - return content - elif file.type == "text/plain": - return file.getvalue().decode() - else: - return "" - -# 11. 
Chat with GPT - Caution on quota - now favoring fastest AI pipeline STT Whisper->LLM Llama->TTS -@st.cache_resource -def chat_with_model(prompt, document_section, model_choice='gpt-3.5-turbo'): - model = model_choice - conversation = [{'role': 'system', 'content': 'You are a helpful assistant.'}] - conversation.append({'role': 'user', 'content': prompt}) - if len(document_section)>0: - conversation.append({'role': 'assistant', 'content': document_section}) - start_time = time.time() - report = [] - res_box = st.empty() - collected_chunks = [] - collected_messages = [] - for chunk in openai.ChatCompletion.create(model='gpt-3.5-turbo', messages=conversation, temperature=0.5, stream=True): - collected_chunks.append(chunk) - chunk_message = chunk['choices'][0]['delta'] - collected_messages.append(chunk_message) - content=chunk["choices"][0].get("delta",{}).get("content") - try: - report.append(content) - if len(content) > 0: - result = "".join(report).strip() - res_box.markdown(f'*{result}*') - except: - st.write(' ') - full_reply_content = ''.join([m.get('content', '') for m in collected_messages]) - st.write("Elapsed time:") - st.write(time.time() - start_time) - return full_reply_content - -# 12. Embedding VectorDB for LLM query of documents to text to compress inputs and prompt together as Chat memory using Langchain -@st.cache_resource -def chat_with_file_contents(prompt, file_content, model_choice='gpt-3.5-turbo'): - conversation = [{'role': 'system', 'content': 'You are a helpful assistant.'}] - conversation.append({'role': 'user', 'content': prompt}) - if len(file_content)>0: - conversation.append({'role': 'assistant', 'content': file_content}) - response = openai.ChatCompletion.create(model=model_choice, messages=conversation) - return response['choices'][0]['message']['content'] - -def extract_mime_type(file): - if isinstance(file, str): - pattern = r"type='(.*?)'" - match = re.search(pattern, file) - if match: - return match.group(1) - else: - raise ValueError(f"Unable to extract MIME type from {file}") - elif isinstance(file, streamlit.UploadedFile): - return file.type - else: - raise TypeError("Input should be a string or a streamlit.UploadedFile object") - -def extract_file_extension(file): - # get the file name directly from the UploadedFile object - file_name = file.name - pattern = r".*?\.(.*?)$" - match = re.search(pattern, file_name) - if match: - return match.group(1) - else: - raise ValueError(f"Unable to extract file extension from {file_name}") - -# Normalize input as text from PDF and other formats -@st.cache_resource -def pdf2txt(docs): - text = "" - for file in docs: - file_extension = extract_file_extension(file) - st.write(f"File type extension: {file_extension}") - if file_extension.lower() in ['py', 'txt', 'html', 'htm', 'xml', 'json']: - text += file.getvalue().decode('utf-8') - elif file_extension.lower() == 'pdf': - from PyPDF2 import PdfReader - pdf = PdfReader(BytesIO(file.getvalue())) - for page in range(len(pdf.pages)): - text += pdf.pages[page].extract_text() # new PyPDF2 syntax - return text - -def txt2chunks(text): - text_splitter = CharacterTextSplitter(separator="\n", chunk_size=1000, chunk_overlap=200, length_function=len) - return text_splitter.split_text(text) - -# Vector Store using FAISS -@st.cache_resource -def vector_store(text_chunks): - embeddings = OpenAIEmbeddings(openai_api_key=key) - return FAISS.from_texts(texts=text_chunks, embedding=embeddings) - -# Memory and Retrieval chains -@st.cache_resource -def get_chain(vectorstore): - llm = 
ChatOpenAI() - memory = ConversationBufferMemory(memory_key='chat_history', return_messages=True) - return ConversationalRetrievalChain.from_llm(llm=llm, retriever=vectorstore.as_retriever(), memory=memory) - -def process_user_input(user_question): - response = st.session_state.conversation({'question': user_question}) - st.session_state.chat_history = response['chat_history'] - for i, message in enumerate(st.session_state.chat_history): - template = user_template if i % 2 == 0 else bot_template - st.write(template.replace("{{MSG}}", message.content), unsafe_allow_html=True) - filename = generate_filename(user_question, 'txt') - response = message.content - user_prompt = user_question - create_file(filename, user_prompt, response, should_save) - -def divide_prompt(prompt, max_length): - words = prompt.split() - chunks = [] - current_chunk = [] - current_length = 0 - for word in words: - if len(word) + current_length <= max_length: - current_length += len(word) + 1 - current_chunk.append(word) - else: - chunks.append(' '.join(current_chunk)) - current_chunk = [word] - current_length = len(word) - chunks.append(' '.join(current_chunk)) - return chunks - - -# 13. Provide way of saving all and deleting all to give way of reviewing output and saving locally before clearing it - -@st.cache_resource -def create_zip_of_files(files): - zip_name = "all_files.zip" - with zipfile.ZipFile(zip_name, 'w') as zipf: - for file in files: - zipf.write(file) - return zip_name - -@st.cache_resource -def get_zip_download_link(zip_file): - with open(zip_file, 'rb') as f: - data = f.read() - b64 = base64.b64encode(data).decode() - href = f'Download All' - return href - -# 14. Inference Endpoints for Whisper (best fastest STT) on NVIDIA T4 and Llama (best fastest AGI LLM) on NVIDIA A10 -# My Inference Endpoint -API_URL_IE = f'https://tonpixzfvq3791u9.us-east-1.aws.endpoints.huggingface.cloud' -# Original -API_URL_IE = "https://api-inference.huggingface.co/models/openai/whisper-small.en" -MODEL2 = "openai/whisper-small.en" -MODEL2_URL = "https://huggingface.co/openai/whisper-small.en" -#headers = { -# "Authorization": "Bearer XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX", -# "Content-Type": "audio/wav" -#} -HF_KEY = os.getenv('HF_KEY') -headers = { - "Authorization": f"Bearer {HF_KEY}", - "Content-Type": "audio/wav" -} - -#@st.cache_resource -def query(filename): - with open(filename, "rb") as f: - data = f.read() - response = requests.post(API_URL_IE, headers=headers, data=data) - return response.json() - -def generate_filename(prompt, file_type): - central = pytz.timezone('US/Central') - safe_date_time = datetime.now(central).strftime("%m%d_%H%M") - replaced_prompt = prompt.replace(" ", "_").replace("\n", "_") - safe_prompt = "".join(x for x in replaced_prompt if x.isalnum() or x == "_")[:90] - return f"{safe_date_time}_{safe_prompt}.{file_type}" - -# 15. Audio recorder to Wav file -def save_and_play_audio(audio_recorder): - audio_bytes = audio_recorder() - if audio_bytes: - filename = generate_filename("Recording", "wav") - with open(filename, 'wb') as f: - f.write(audio_bytes) - st.audio(audio_bytes, format="audio/wav") - return filename - -# 16. 
Speech transcription to file output -def transcribe_audio(filename): - output = query(filename) - return output - -def whisper_main(): - st.title("Speech to Text") - st.write("Record your speech and get the text.") - - # Audio, transcribe, GPT: - filename = save_and_play_audio(audio_recorder) - if filename is not None: - transcription = transcribe_audio(filename) - try: - transcription = transcription['text'] - except: - st.write('Whisper model is asleep. Starting up now on T4 GPU - please give 5 minutes then retry as it scales up from zero to activate running container(s).') - - st.write(transcription) - response = StreamLLMChatResponse(transcription) - # st.write(response) - redundant with streaming result? - filename = generate_filename(transcription, ".txt") - create_file(filename, transcription, response, should_save) - #st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - - -# 17. Main -def main(): - - st.title("AI Drome Llama") - prompt = f"Write ten funny jokes that are tweet length stories that make you laugh. Show as markdown outline with emojis for each." - - # Add Wit and Humor buttons - add_witty_humor_buttons() - - example_input = st.text_input("Enter your example text:", value=prompt, help="Enter text to get a response from DromeLlama.") - if st.button("Run Prompt With DromeLlama", help="Click to run the prompt."): - try: - StreamLLMChatResponse(example_input) - except: - st.write('DromeLlama is asleep. Starting up now on A10 - please give 5 minutes then retry as KEDA scales up from zero to activate running container(s).') - - openai.api_key = os.getenv('OPENAI_KEY') - menu = ["txt", "htm", "xlsx", "csv", "md", "py"] - choice = st.sidebar.selectbox("Output File Type:", menu) - model_choice = st.sidebar.radio("Select Model:", ('gpt-3.5-turbo', 'gpt-3.5-turbo-0301')) - user_prompt = st.text_area("Enter prompts, instructions & questions:", '', height=100) - collength, colupload = st.columns([2,3]) # adjust the ratio as needed - with collength: - max_length = st.slider("File section length for large files", min_value=1000, max_value=128000, value=12000, step=1000) - with colupload: - uploaded_file = st.file_uploader("Add a file for context:", type=["pdf", "xml", "json", "xlsx", "csv", "html", "htm", "md", "txt"]) - document_sections = deque() - document_responses = {} - if uploaded_file is not None: - file_content = read_file_content(uploaded_file, max_length) - document_sections.extend(divide_document(file_content, max_length)) - if len(document_sections) > 0: - if st.button("👁️ View Upload"): - st.markdown("**Sections of the uploaded file:**") - for i, section in enumerate(list(document_sections)): - st.markdown(f"**Section {i+1}**\n{section}") - st.markdown("**Chat with the model:**") - for i, section in enumerate(list(document_sections)): - if i in document_responses: - st.markdown(f"**Section {i+1}**\n{document_responses[i]}") - else: - if st.button(f"Chat about Section {i+1}"): - st.write('Reasoning with your inputs...') - response = chat_with_model(user_prompt, section, model_choice) - st.write('Response:') - st.write(response) - document_responses[i] = response - filename = generate_filename(f"{user_prompt}_section_{i+1}", choice) - create_file(filename, user_prompt, response, should_save) - st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - if st.button('💬 Chat'): - st.write('Reasoning with your inputs...') - user_prompt_sections = divide_prompt(user_prompt, max_length) - full_response = '' - for prompt_section 
in user_prompt_sections: - response = chat_with_model(prompt_section, ''.join(list(document_sections)), model_choice) - full_response += response + '\n' # Combine the responses - response = full_response - st.write('Response:') - st.write(response) - filename = generate_filename(user_prompt, choice) - create_file(filename, user_prompt, response, should_save) - st.sidebar.markdown(get_table_download_link(filename), unsafe_allow_html=True) - - # Compose a file sidebar of past encounters - all_files = glob.glob("*.*") - all_files = [file for file in all_files if len(os.path.splitext(file)[0]) >= 20] # exclude files with short names - all_files.sort(key=lambda x: (os.path.splitext(x)[1], x), reverse=True) # sort by file type and file name in descending order - if st.sidebar.button("🗑 Delete All"): - for file in all_files: - os.remove(file) - st.experimental_rerun() - if st.sidebar.button("⬇️ Download All"): - zip_file = create_zip_of_files(all_files) - st.sidebar.markdown(get_zip_download_link(zip_file), unsafe_allow_html=True) - file_contents='' - next_action='' - for file in all_files: - col1, col2, col3, col4, col5 = st.sidebar.columns([1,6,1,1,1]) # adjust the ratio as needed - with col1: - if st.button("🌐", key="md_"+file): # md emoji button - with open(file, 'r') as f: - file_contents = f.read() - next_action='md' - with col2: - st.markdown(get_table_download_link(file), unsafe_allow_html=True) - with col3: - if st.button("📂", key="open_"+file): # open emoji button - with open(file, 'r') as f: - file_contents = f.read() - next_action='open' - with col4: - if st.button("🔍", key="read_"+file): # search emoji button - with open(file, 'r') as f: - file_contents = f.read() - next_action='search' - with col5: - if st.button("🗑", key="delete_"+file): - os.remove(file) - st.experimental_rerun() - - - if len(file_contents) > 0: - if next_action=='open': - file_content_area = st.text_area("File Contents:", file_contents, height=500) - if next_action=='md': - st.markdown(file_contents) - if next_action=='search': - file_content_area = st.text_area("File Contents:", file_contents, height=500) - st.write('Reasoning with your inputs...') - - # new - llama - response = StreamLLMChatResponse(file_contents) - filename = generate_filename(user_prompt, ".md") - create_file(filename, file_contents, response, should_save) - SpeechSynthesis(response) - - # old - gpt - #response = chat_with_model(user_prompt, file_contents, model_choice) - #filename = generate_filename(file_contents, choice) - #create_file(filename, user_prompt, response, should_save) - - st.experimental_rerun() - - # Feedback - # Step: Give User a Way to Upvote or Downvote - feedback = st.radio("Step 8: Give your feedback", ("👍 Upvote", "👎 Downvote")) - if feedback == "👍 Upvote": - st.write("You upvoted 👍. Thank you for your feedback!") - else: - st.write("You downvoted 👎. 
Thank you for your feedback!") - - load_dotenv() - st.write(css, unsafe_allow_html=True) - st.header("Chat with documents :books:") - user_question = st.text_input("Ask a question about your documents:") - if user_question: - process_user_input(user_question) - with st.sidebar: - st.subheader("Your documents") - docs = st.file_uploader("import documents", accept_multiple_files=True) - with st.spinner("Processing"): - raw = pdf2txt(docs) - if len(raw) > 0: - length = str(len(raw)) - text_chunks = txt2chunks(raw) - vectorstore = vector_store(text_chunks) - st.session_state.conversation = get_chain(vectorstore) - st.markdown('# AI Search Index of Length:' + length + ' Created.') # add timing - filename = generate_filename(raw, 'txt') - create_file(filename, raw, '', should_save) - -# 18. Run AI Pipeline -if __name__ == "__main__": - whisper_main() - main() \ No newline at end of file diff --git a/spaces/azusarang/so-vits-svc-models-ba_P/cluster/train_cluster.py b/spaces/azusarang/so-vits-svc-models-ba_P/cluster/train_cluster.py deleted file mode 100644 index 8644566388a4107c4442da14c0de090bcd4a91b8..0000000000000000000000000000000000000000 --- a/spaces/azusarang/so-vits-svc-models-ba_P/cluster/train_cluster.py +++ /dev/null @@ -1,84 +0,0 @@ -import time,pdb -import tqdm -from time import time as ttime -import os -from pathlib import Path -import logging -import argparse -from kmeans import KMeansGPU -import torch -import numpy as np -from sklearn.cluster import KMeans,MiniBatchKMeans - -logging.basicConfig(level=logging.INFO) -logger = logging.getLogger(__name__) -from time import time as ttime -import pynvml,torch - -def train_cluster(in_dir, n_clusters, use_minibatch=True, verbose=False,use_gpu=False):#gpu_minibatch真拉,虽然库支持但是也不考虑 - logger.info(f"Loading features from {in_dir}") - features = [] - nums = 0 - for path in tqdm.tqdm(in_dir.glob("*.soft.pt")): - # for name in os.listdir(in_dir): - # path="%s/%s"%(in_dir,name) - features.append(torch.load(path,map_location="cpu").squeeze(0).numpy().T) - # print(features[-1].shape) - features = np.concatenate(features, axis=0) - print(nums, features.nbytes/ 1024**2, "MB , shape:",features.shape, features.dtype) - features = features.astype(np.float32) - logger.info(f"Clustering features of shape: {features.shape}") - t = time.time() - if(use_gpu==False): - if use_minibatch: - kmeans = MiniBatchKMeans(n_clusters=n_clusters,verbose=verbose, batch_size=4096, max_iter=80).fit(features) - else: - kmeans = KMeans(n_clusters=n_clusters,verbose=verbose).fit(features) - else: - kmeans = KMeansGPU(n_clusters=n_clusters, mode='euclidean', verbose=2 if verbose else 0,max_iter=500,tol=1e-2)# - features=torch.from_numpy(features)#.to(device) - labels = kmeans.fit_predict(features)# - - print(time.time()-t, "s") - - x = { - "n_features_in_": kmeans.n_features_in_ if use_gpu==False else features.shape[1], - "_n_threads": kmeans._n_threads if use_gpu==False else 4, - "cluster_centers_": kmeans.cluster_centers_ if use_gpu==False else kmeans.centroids.cpu().numpy(), - } - print("end") - - return x - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument('--dataset', type=Path, default="./dataset/44k", - help='path of training data directory') - parser.add_argument('--output', type=Path, default="logs/44k", - help='path of model output directory') - parser.add_argument('--gpu',action='store_true', default=False , - help='to use GPU') - - - args = parser.parse_args() - - checkpoint_dir = args.output - dataset = args.dataset - use_gpu = 
args.gpu - n_clusters = 10000 - - ckpt = {} - for spk in os.listdir(dataset): - if os.path.isdir(dataset/spk): - print(f"train kmeans for {spk}...") - in_dir = dataset/spk - x = train_cluster(in_dir, n_clusters,use_minibatch=False,verbose=False,use_gpu=use_gpu) - ckpt[spk] = x - - checkpoint_path = checkpoint_dir / f"kmeans_{n_clusters}.pt" - checkpoint_path.parent.mkdir(exist_ok=True, parents=True) - torch.save( - ckpt, - checkpoint_path, - ) - diff --git a/spaces/balacoon/revoice/__init__.py b/spaces/balacoon/revoice/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/PLYLoader.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/PLYLoader.js deleted file mode 100644 index b0efd85b31270302a5da255caca1f32b2d872750..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/PLYLoader.js +++ /dev/null @@ -1,505 +0,0 @@ -/** - * @author Wei Meng / http://about.me/menway - * - * Description: A THREE loader for PLY ASCII files (known as the Polygon - * File Format or the Stanford Triangle Format). - * - * Limitations: ASCII decoding assumes file is UTF-8. - * - * Usage: - * var loader = new THREE.PLYLoader(); - * loader.load('./models/ply/ascii/dolphins.ply', function (geometry) { - * - * scene.add( new THREE.Mesh( geometry ) ); - * - * } ); - * - * If the PLY file uses non standard property names, they can be mapped while - * loading. For example, the following maps the properties - * “diffuse_(red|green|blue)” in the file to standard color names. - * - * loader.setPropertyNameMapping( { - * diffuse_red: 'red', - * diffuse_green: 'green', - * diffuse_blue: 'blue' - * } ); - * - */ - - -THREE.PLYLoader = function ( manager ) { - - this.manager = ( manager !== undefined ) ? 
manager : THREE.DefaultLoadingManager; - - this.propertyNameMapping = {}; - -}; - -THREE.PLYLoader.prototype = { - - constructor: THREE.PLYLoader, - - load: function ( url, onLoad, onProgress, onError ) { - - var scope = this; - - var loader = new THREE.FileLoader( this.manager ); - loader.setPath( this.path ); - loader.setResponseType( 'arraybuffer' ); - loader.load( url, function ( text ) { - - onLoad( scope.parse( text ) ); - - }, onProgress, onError ); - - }, - - setPath: function ( value ) { - - this.path = value; - return this; - - }, - - setPropertyNameMapping: function ( mapping ) { - - this.propertyNameMapping = mapping; - - }, - - parse: function ( data ) { - - function parseHeader( data ) { - - var patternHeader = /ply([\s\S]*)end_header\r?\n/; - var headerText = ''; - var headerLength = 0; - var result = patternHeader.exec( data ); - - if ( result !== null ) { - - headerText = result[ 1 ]; - headerLength = result[ 0 ].length; - - } - - var header = { - comments: [], - elements: [], - headerLength: headerLength - }; - - var lines = headerText.split( '\n' ); - var currentElement; - var lineType, lineValues; - - function make_ply_element_property( propertValues, propertyNameMapping ) { - - var property = { type: propertValues[ 0 ] }; - - if ( property.type === 'list' ) { - - property.name = propertValues[ 3 ]; - property.countType = propertValues[ 1 ]; - property.itemType = propertValues[ 2 ]; - - } else { - - property.name = propertValues[ 1 ]; - - } - - if ( property.name in propertyNameMapping ) { - - property.name = propertyNameMapping[ property.name ]; - - } - - return property; - - } - - for ( var i = 0; i < lines.length; i ++ ) { - - var line = lines[ i ]; - line = line.trim(); - - if ( line === '' ) continue; - - lineValues = line.split( /\s+/ ); - lineType = lineValues.shift(); - line = lineValues.join( ' ' ); - - switch ( lineType ) { - - case 'format': - - header.format = lineValues[ 0 ]; - header.version = lineValues[ 1 ]; - - break; - - case 'comment': - - header.comments.push( line ); - - break; - - case 'element': - - if ( currentElement !== undefined ) { - - header.elements.push( currentElement ); - - } - - currentElement = {}; - currentElement.name = lineValues[ 0 ]; - currentElement.count = parseInt( lineValues[ 1 ] ); - currentElement.properties = []; - - break; - - case 'property': - - currentElement.properties.push( make_ply_element_property( lineValues, scope.propertyNameMapping ) ); - - break; - - - default: - - console.log( 'unhandled', lineType, lineValues ); - - } - - } - - if ( currentElement !== undefined ) { - - header.elements.push( currentElement ); - - } - - return header; - - } - - function parseASCIINumber( n, type ) { - - switch ( type ) { - - case 'char': case 'uchar': case 'short': case 'ushort': case 'int': case 'uint': - case 'int8': case 'uint8': case 'int16': case 'uint16': case 'int32': case 'uint32': - - return parseInt( n ); - - case 'float': case 'double': case 'float32': case 'float64': - - return parseFloat( n ); - - } - - } - - function parseASCIIElement( properties, line ) { - - var values = line.split( /\s+/ ); - - var element = {}; - - for ( var i = 0; i < properties.length; i ++ ) { - - if ( properties[ i ].type === 'list' ) { - - var list = []; - var n = parseASCIINumber( values.shift(), properties[ i ].countType ); - - for ( var j = 0; j < n; j ++ ) { - - list.push( parseASCIINumber( values.shift(), properties[ i ].itemType ) ); - - } - - element[ properties[ i ].name ] = list; - - } else { - - element[ properties[ i ].name ] = 
parseASCIINumber( values.shift(), properties[ i ].type ); - - } - - } - - return element; - - } - - function parseASCII( data, header ) { - - // PLY ascii format specification, as per http://en.wikipedia.org/wiki/PLY_(file_format) - - var buffer = { - indices: [], - vertices: [], - normals: [], - uvs: [], - faceVertexUvs: [], - colors: [] - }; - - var result; - - var patternBody = /end_header\s([\s\S]*)$/; - var body = ''; - if ( ( result = patternBody.exec( data ) ) !== null ) { - - body = result[ 1 ]; - - } - - var lines = body.split( '\n' ); - var currentElement = 0; - var currentElementCount = 0; - - for ( var i = 0; i < lines.length; i ++ ) { - - var line = lines[ i ]; - line = line.trim(); - if ( line === '' ) { - - continue; - - } - - if ( currentElementCount >= header.elements[ currentElement ].count ) { - - currentElement ++; - currentElementCount = 0; - - } - - var element = parseASCIIElement( header.elements[ currentElement ].properties, line ); - - handleElement( buffer, header.elements[ currentElement ].name, element ); - - currentElementCount ++; - - } - - return postProcess( buffer ); - - } - - function postProcess( buffer ) { - - var geometry = new THREE.BufferGeometry(); - - // mandatory buffer data - - if ( buffer.indices.length > 0 ) { - - geometry.setIndex( buffer.indices ); - - } - - geometry.addAttribute( 'position', new THREE.Float32BufferAttribute( buffer.vertices, 3 ) ); - - // optional buffer data - - if ( buffer.normals.length > 0 ) { - - geometry.addAttribute( 'normal', new THREE.Float32BufferAttribute( buffer.normals, 3 ) ); - - } - - if ( buffer.uvs.length > 0 ) { - - geometry.addAttribute( 'uv', new THREE.Float32BufferAttribute( buffer.uvs, 2 ) ); - - } - - if ( buffer.colors.length > 0 ) { - - geometry.addAttribute( 'color', new THREE.Float32BufferAttribute( buffer.colors, 3 ) ); - - } - - if ( buffer.faceVertexUvs.length > 0 ) { - - geometry = geometry.toNonIndexed(); - geometry.addAttribute( 'uv', new THREE.Float32BufferAttribute( buffer.faceVertexUvs, 2 ) ); - - } - - geometry.computeBoundingSphere(); - - return geometry; - - } - - function handleElement( buffer, elementName, element ) { - - if ( elementName === 'vertex' ) { - - buffer.vertices.push( element.x, element.y, element.z ); - - if ( 'nx' in element && 'ny' in element && 'nz' in element ) { - - buffer.normals.push( element.nx, element.ny, element.nz ); - - } - - if ( 's' in element && 't' in element ) { - - buffer.uvs.push( element.s, element.t ); - - } - - if ( 'red' in element && 'green' in element && 'blue' in element ) { - - buffer.colors.push( element.red / 255.0, element.green / 255.0, element.blue / 255.0 ); - - } - - } else if ( elementName === 'face' ) { - - var vertex_indices = element.vertex_indices || element.vertex_index; // issue #9338 - var texcoord = element.texcoord; - - if ( vertex_indices.length === 3 ) { - - buffer.indices.push( vertex_indices[ 0 ], vertex_indices[ 1 ], vertex_indices[ 2 ] ); - - if ( texcoord && texcoord.length === 6 ) { - - buffer.faceVertexUvs.push( texcoord[ 0 ], texcoord[ 1 ] ); - buffer.faceVertexUvs.push( texcoord[ 2 ], texcoord[ 3 ] ); - buffer.faceVertexUvs.push( texcoord[ 4 ], texcoord[ 5 ] ); - - } - - } else if ( vertex_indices.length === 4 ) { - - buffer.indices.push( vertex_indices[ 0 ], vertex_indices[ 1 ], vertex_indices[ 3 ] ); - buffer.indices.push( vertex_indices[ 1 ], vertex_indices[ 2 ], vertex_indices[ 3 ] ); - - } - - } - - } - - function binaryRead( dataview, at, type, little_endian ) { - - switch ( type ) { - - // corespondences for 
non-specific length types here match rply: - case 'int8': case 'char': return [ dataview.getInt8( at ), 1 ]; - case 'uint8': case 'uchar': return [ dataview.getUint8( at ), 1 ]; - case 'int16': case 'short': return [ dataview.getInt16( at, little_endian ), 2 ]; - case 'uint16': case 'ushort': return [ dataview.getUint16( at, little_endian ), 2 ]; - case 'int32': case 'int': return [ dataview.getInt32( at, little_endian ), 4 ]; - case 'uint32': case 'uint': return [ dataview.getUint32( at, little_endian ), 4 ]; - case 'float32': case 'float': return [ dataview.getFloat32( at, little_endian ), 4 ]; - case 'float64': case 'double': return [ dataview.getFloat64( at, little_endian ), 8 ]; - - } - - } - - function binaryReadElement( dataview, at, properties, little_endian ) { - - var element = {}; - var result, read = 0; - - for ( var i = 0; i < properties.length; i ++ ) { - - if ( properties[ i ].type === 'list' ) { - - var list = []; - - result = binaryRead( dataview, at + read, properties[ i ].countType, little_endian ); - var n = result[ 0 ]; - read += result[ 1 ]; - - for ( var j = 0; j < n; j ++ ) { - - result = binaryRead( dataview, at + read, properties[ i ].itemType, little_endian ); - list.push( result[ 0 ] ); - read += result[ 1 ]; - - } - - element[ properties[ i ].name ] = list; - - } else { - - result = binaryRead( dataview, at + read, properties[ i ].type, little_endian ); - element[ properties[ i ].name ] = result[ 0 ]; - read += result[ 1 ]; - - } - - } - - return [ element, read ]; - - } - - function parseBinary( data, header ) { - - var buffer = { - indices: [], - vertices: [], - normals: [], - uvs: [], - faceVertexUvs: [], - colors: [] - }; - - var little_endian = ( header.format === 'binary_little_endian' ); - var body = new DataView( data, header.headerLength ); - var result, loc = 0; - - for ( var currentElement = 0; currentElement < header.elements.length; currentElement ++ ) { - - for ( var currentElementCount = 0; currentElementCount < header.elements[ currentElement ].count; currentElementCount ++ ) { - - result = binaryReadElement( body, loc, header.elements[ currentElement ].properties, little_endian ); - loc += result[ 1 ]; - var element = result[ 0 ]; - - handleElement( buffer, header.elements[ currentElement ].name, element ); - - } - - } - - return postProcess( buffer ); - - } - - // - - var geometry; - var scope = this; - - if ( data instanceof ArrayBuffer ) { - - var text = THREE.LoaderUtils.decodeText( new Uint8Array( data ) ); - var header = parseHeader( text ); - - geometry = header.format === 'ascii' ? 
parseASCII( text, header ) : parseBinary( data, header ); - - } else { - - geometry = parseASCII( data, parseHeader( data ) ); - - } - - return geometry; - - } - -}; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/audio/Audio.js b/spaces/banana-projects/web3d/node_modules/three/src/audio/Audio.js deleted file mode 100644 index 35b72753543eb41cbf63f69f3f2e85ddb085f41f..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/audio/Audio.js +++ /dev/null @@ -1,345 +0,0 @@ -/** - * @author mrdoob / http://mrdoob.com/ - * @author Reece Aaron Lecrivain / http://reecenotes.com/ - */ - -import { Object3D } from '../core/Object3D.js'; - -function Audio( listener ) { - - Object3D.call( this ); - - this.type = 'Audio'; - - this.listener = listener; - this.context = listener.context; - - this.gain = this.context.createGain(); - this.gain.connect( listener.getInput() ); - - this.autoplay = false; - - this.buffer = null; - this.detune = 0; - this.loop = false; - this.startTime = 0; - this.offset = 0; - this.playbackRate = 1; - this.isPlaying = false; - this.hasPlaybackControl = true; - this.sourceType = 'empty'; - - this.filters = []; - -} - -Audio.prototype = Object.assign( Object.create( Object3D.prototype ), { - - constructor: Audio, - - getOutput: function () { - - return this.gain; - - }, - - setNodeSource: function ( audioNode ) { - - this.hasPlaybackControl = false; - this.sourceType = 'audioNode'; - this.source = audioNode; - this.connect(); - - return this; - - }, - - setMediaElementSource: function ( mediaElement ) { - - this.hasPlaybackControl = false; - this.sourceType = 'mediaNode'; - this.source = this.context.createMediaElementSource( mediaElement ); - this.connect(); - - return this; - - }, - - setBuffer: function ( audioBuffer ) { - - this.buffer = audioBuffer; - this.sourceType = 'buffer'; - - if ( this.autoplay ) this.play(); - - return this; - - }, - - play: function () { - - if ( this.isPlaying === true ) { - - console.warn( 'THREE.Audio: Audio is already playing.' ); - return; - - } - - if ( this.hasPlaybackControl === false ) { - - console.warn( 'THREE.Audio: this Audio has no playback control.' ); - return; - - } - - var source = this.context.createBufferSource(); - - source.buffer = this.buffer; - source.loop = this.loop; - source.onended = this.onEnded.bind( this ); - this.startTime = this.context.currentTime; - source.start( this.startTime, this.offset ); - - this.isPlaying = true; - - this.source = source; - - this.setDetune( this.detune ); - this.setPlaybackRate( this.playbackRate ); - - return this.connect(); - - }, - - pause: function () { - - if ( this.hasPlaybackControl === false ) { - - console.warn( 'THREE.Audio: this Audio has no playback control.' ); - return; - - } - - if ( this.isPlaying === true ) { - - this.source.stop(); - this.source.onended = null; - this.offset += ( this.context.currentTime - this.startTime ) * this.playbackRate; - this.isPlaying = false; - - } - - return this; - - }, - - stop: function () { - - if ( this.hasPlaybackControl === false ) { - - console.warn( 'THREE.Audio: this Audio has no playback control.' 
); - return; - - } - - this.source.stop(); - this.source.onended = null; - this.offset = 0; - this.isPlaying = false; - - return this; - - }, - - connect: function () { - - if ( this.filters.length > 0 ) { - - this.source.connect( this.filters[ 0 ] ); - - for ( var i = 1, l = this.filters.length; i < l; i ++ ) { - - this.filters[ i - 1 ].connect( this.filters[ i ] ); - - } - - this.filters[ this.filters.length - 1 ].connect( this.getOutput() ); - - } else { - - this.source.connect( this.getOutput() ); - - } - - return this; - - }, - - disconnect: function () { - - if ( this.filters.length > 0 ) { - - this.source.disconnect( this.filters[ 0 ] ); - - for ( var i = 1, l = this.filters.length; i < l; i ++ ) { - - this.filters[ i - 1 ].disconnect( this.filters[ i ] ); - - } - - this.filters[ this.filters.length - 1 ].disconnect( this.getOutput() ); - - } else { - - this.source.disconnect( this.getOutput() ); - - } - - return this; - - }, - - getFilters: function () { - - return this.filters; - - }, - - setFilters: function ( value ) { - - if ( ! value ) value = []; - - if ( this.isPlaying === true ) { - - this.disconnect(); - this.filters = value; - this.connect(); - - } else { - - this.filters = value; - - } - - return this; - - }, - - setDetune: function ( value ) { - - this.detune = value; - - if ( this.source.detune === undefined ) return; // only set detune when available - - if ( this.isPlaying === true ) { - - this.source.detune.setTargetAtTime( this.detune, this.context.currentTime, 0.01 ); - - } - - return this; - - }, - - getDetune: function () { - - return this.detune; - - }, - - getFilter: function () { - - return this.getFilters()[ 0 ]; - - }, - - setFilter: function ( filter ) { - - return this.setFilters( filter ? [ filter ] : [] ); - - }, - - setPlaybackRate: function ( value ) { - - if ( this.hasPlaybackControl === false ) { - - console.warn( 'THREE.Audio: this Audio has no playback control.' ); - return; - - } - - this.playbackRate = value; - - if ( this.isPlaying === true ) { - - this.source.playbackRate.setTargetAtTime( this.playbackRate, this.context.currentTime, 0.01 ); - - } - - return this; - - }, - - getPlaybackRate: function () { - - return this.playbackRate; - - }, - - onEnded: function () { - - this.isPlaying = false; - - }, - - getLoop: function () { - - if ( this.hasPlaybackControl === false ) { - - console.warn( 'THREE.Audio: this Audio has no playback control.' ); - return false; - - } - - return this.loop; - - }, - - setLoop: function ( value ) { - - if ( this.hasPlaybackControl === false ) { - - console.warn( 'THREE.Audio: this Audio has no playback control.' 
); - return; - - } - - this.loop = value; - - if ( this.isPlaying === true ) { - - this.source.loop = this.loop; - - } - - return this; - - }, - - getVolume: function () { - - return this.gain.gain.value; - - }, - - setVolume: function ( value ) { - - this.gain.gain.setTargetAtTime( value, this.context.currentTime, 0.01 ); - - return this; - - } - -} ); - -export { Audio }; diff --git a/spaces/bastiendechamps/geoguessr-bot/geoguessr_bot/classifier/__init__.py b/spaces/bastiendechamps/geoguessr-bot/geoguessr_bot/classifier/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327001641.py b/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327001641.py deleted file mode 100644 index 4a385df5c29fbaa0336f616d58162be93228e501..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327001641.py +++ /dev/null @@ -1,65 +0,0 @@ -import os -#os.system("pip install gfpgan") - -#os.system("pip freeze") -#os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .") -import random -import gradio as gr -from PIL import Image -import torch -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg') - - - - -import cv2 -import glob -import numpy as np -from basicsr.utils import imwrite -from gfpgan import GFPGANer - -import warnings -warnings.warn('The unoptimized RealESRGAN is very slow on CPU. We do not use it. ' - 'If you really want to use it, please modify the corresponding codes.') -bg_upsampler = None - - - -# set up GFPGAN restorer -restorer = GFPGANer( - model_path='experiments/pretrained_models/GFPGANv1.3.pth', - upscale=2, - arch='clean', - channel_multiplier=2, - bg_upsampler=bg_upsampler) - - -def inference(img): - input_img = cv2.imread(img, cv2.IMREAD_COLOR) - cropped_faces, restored_faces, restored_img = restorer.enhance( - input_img, has_aligned=False, only_center_face=False, paste_back=True) - - return Image.fromarray(restored_img) - -title = "GFP-GAN" -description = "Gradio demo for GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. Please click submit only once" -article = "

      Towards Real-World Blind Face Restoration with Generative Facial Prior | Github Repo

      visitor badge
      " -gr.Interface( - inference, - [gr.inputs.Image(type="filepath", label="Input")], - gr.outputs.Image(type="pil", label="Output"), - title=title, - description=description, - article=article, - examples=[ - ['lincoln.jpg'], - ['einstein.png'], - ['edison.jpg'], - ['Henry.jpg'], - ['Frida.jpg'] - ] - ).launch(enable_queue=True,cache_examples=True) \ No newline at end of file diff --git a/spaces/bigjoker/stable-diffusion-webui/modules/gfpgan_model.py b/spaces/bigjoker/stable-diffusion-webui/modules/gfpgan_model.py deleted file mode 100644 index bc0c5f738e086225505af9738862fde4eecfa4a9..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/modules/gfpgan_model.py +++ /dev/null @@ -1,116 +0,0 @@ -import os -import sys -import traceback - -import facexlib -import gfpgan - -import modules.face_restoration -from modules import paths, shared, devices, modelloader - -model_dir = "GFPGAN" -user_path = None -model_path = os.path.join(paths.models_path, model_dir) -model_url = "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth" -have_gfpgan = False -loaded_gfpgan_model = None - - -def gfpgann(): - global loaded_gfpgan_model - global model_path - if loaded_gfpgan_model is not None: - loaded_gfpgan_model.gfpgan.to(devices.device_gfpgan) - return loaded_gfpgan_model - - if gfpgan_constructor is None: - return None - - models = modelloader.load_models(model_path, model_url, user_path, ext_filter="GFPGAN") - if len(models) == 1 and "http" in models[0]: - model_file = models[0] - elif len(models) != 0: - latest_file = max(models, key=os.path.getctime) - model_file = latest_file - else: - print("Unable to load gfpgan model!") - return None - if hasattr(facexlib.detection.retinaface, 'device'): - facexlib.detection.retinaface.device = devices.device_gfpgan - model = gfpgan_constructor(model_path=model_file, upscale=1, arch='clean', channel_multiplier=2, bg_upsampler=None, device=devices.device_gfpgan) - loaded_gfpgan_model = model - - return model - - -def send_model_to(model, device): - model.gfpgan.to(device) - model.face_helper.face_det.to(device) - model.face_helper.face_parse.to(device) - - -def gfpgan_fix_faces(np_image): - model = gfpgann() - if model is None: - return np_image - - send_model_to(model, devices.device_gfpgan) - - np_image_bgr = np_image[:, :, ::-1] - cropped_faces, restored_faces, gfpgan_output_bgr = model.enhance(np_image_bgr, has_aligned=False, only_center_face=False, paste_back=True) - np_image = gfpgan_output_bgr[:, :, ::-1] - - model.face_helper.clean_all() - - if shared.opts.face_restoration_unload: - send_model_to(model, devices.cpu) - - return np_image - - -gfpgan_constructor = None - - -def setup_model(dirname): - global model_path - if not os.path.exists(model_path): - os.makedirs(model_path) - - try: - from gfpgan import GFPGANer - from facexlib import detection, parsing - global user_path - global have_gfpgan - global gfpgan_constructor - - load_file_from_url_orig = gfpgan.utils.load_file_from_url - facex_load_file_from_url_orig = facexlib.detection.load_file_from_url - facex_load_file_from_url_orig2 = facexlib.parsing.load_file_from_url - - def my_load_file_from_url(**kwargs): - return load_file_from_url_orig(**dict(kwargs, model_dir=model_path)) - - def facex_load_file_from_url(**kwargs): - return facex_load_file_from_url_orig(**dict(kwargs, save_dir=model_path, model_dir=None)) - - def facex_load_file_from_url2(**kwargs): - return facex_load_file_from_url_orig2(**dict(kwargs, save_dir=model_path, 
model_dir=None)) - - gfpgan.utils.load_file_from_url = my_load_file_from_url - facexlib.detection.load_file_from_url = facex_load_file_from_url - facexlib.parsing.load_file_from_url = facex_load_file_from_url2 - user_path = dirname - have_gfpgan = True - gfpgan_constructor = GFPGANer - - class FaceRestorerGFPGAN(modules.face_restoration.FaceRestoration): - def name(self): - return "GFPGAN" - - def restore(self, np_image): - return gfpgan_fix_faces(np_image) - - shared.face_restorers.append(FaceRestorerGFPGAN()) - except Exception: - print("Error setting up GFPGAN:", file=sys.stderr) - print(traceback.format_exc(), file=sys.stderr) diff --git a/spaces/bigscience-data/roots-search/app.py b/spaces/bigscience-data/roots-search/app.py deleted file mode 100644 index 366abdf53555b5dbd98a15429ad519c8cde99cc5..0000000000000000000000000000000000000000 --- a/spaces/bigscience-data/roots-search/app.py +++ /dev/null @@ -1,597 +0,0 @@ -import json -import os -import re -import string -import traceback -from typing import List, Tuple - -import gradio as gr -import requests -from huggingface_hub import HfApi - -hf_api = HfApi() -roots_datasets = { - dset.id.split("/")[-1]: dset - for dset in hf_api.list_datasets( - author="bigscience-data", use_auth_token=os.environ.get("bigscience_data_token") - ) -} - - -def get_docid_html(docid): - data_org, dataset, docid = docid.split("/") - metadata = roots_datasets[dataset] - locked_color = "LightGray" - open_color = "#7978FF" - if metadata.private: - docid_html = """ - - 🔒{dataset} - - /{docid}""".format( - dataset=dataset, - docid=docid, - locked_color=locked_color, - open_color=open_color, - ) - else: - docid_html = """ - - {dataset} - - /{docid}""".format( - metadata=metadata.tags[0].split(":")[-1], - dataset=dataset, - docid=docid, - open_color=open_color, - ) - return docid_html - - -PII_TAGS = {"KEY", "EMAIL", "USER", "IP_ADDRESS", "ID", "IPv4", "IPv6"} -PII_PREFIX = "PI:" - - -def process_pii(text): - for tag in PII_TAGS: - text = text.replace( - PII_PREFIX + tag, - """REDACTED {}""".format( - tag - ), - ) - return text - - -def extract_lang_from_docid(docid): - return docid.split("_")[1] - - -def normalize(document): - def remove_articles(text): - return re.sub(r"\b(a|an|the)\b", " ", text) - - def white_space_fix(text): - return " ".join(text.split()) - - def remove_punc(text): - exclude = set(string.punctuation) - return "".join(ch for ch in text if ch not in exclude) - - def lower(text): - return text.lower() - - return white_space_fix(remove_articles(remove_punc(lower(document)))) - - -def format_result(result, highlight_terms, exact_search, datasets_filter=None): - text, url, docid = result - if datasets_filter is not None: - datasets_filter = set(datasets_filter) - dataset = docid.split("/")[1] - if not dataset in datasets_filter: - return "" - - tokens_html = "" - if exact_search: - query_variants = [highlight_terms] - - # lower - query_variant = highlight_terms.lower() - if query_variant not in query_variants: - query_variants.append(query_variant) - - # upper - query_variant = highlight_terms.upper() - if query_variant not in query_variants: - query_variants.append(query_variant) - - # first capital - query_variant = highlight_terms.lower() - query_variant = query_variant[0].upper() + query_variant[1:].lower() - if query_variant not in query_variants: - query_variants.append(query_variant) - - # camel case - query_tokens = highlight_terms.split() - query_variant = " ".join( - [token[0].upper() + token[1:].lower() for token in query_tokens] - ) - 
if query_variant not in query_variants: - query_variants.append(query_variant) - - for query_variant in query_variants: - query_start = text.find(query_variant) - if query_start >= 0: - query_end = query_start + len(query_variant) - tokens_html = text[0:query_start] - tokens_html += "{}".format(text[query_start:query_end]) - tokens_html += text[query_end:] - break - else: - tokens = text.split() - tokens_html = [] - for token in tokens: - if token in highlight_terms: - tokens_html.append("{}".format(token)) - else: - tokens_html.append(token) - tokens_html = " ".join(tokens_html) - tokens_html = process_pii(tokens_html) - - url_html = ( - """ - - - {url} - -
      - """.format( - url=url - ) - if url is not None - else "" - ) - docid_html = get_docid_html(docid) - language = extract_lang_from_docid(docid) - result_html = """{} - Language: {} | - Document ID: {} | - - -
      - {}
      -
      - """.format( - url_html, language, docid_html, tokens_html - ) - return "

      " + result_html + "

      " - - -def format_result_page( - language, results, highlight_terms, num_results, exact_search, datasets_filter=None -) -> gr.HTML: - filtered_num_results = 0 - header_html = "" - - if language == "detect_language" and not exact_search: - header_html += """
      - Detected language: {}
      """.format( - list(results.keys())[0] - ) - - result_page_html = "" - for lang, results_for_lang in results.items(): - print("Processing language", lang) - if len(results_for_lang) == 0: - if exact_search: - result_page_html += """
      - No results found.
      """ - else: - result_page_html += """
      - No results for language: {}
      """.format( - lang - ) - continue - results_for_lang_html = "" - for result in results_for_lang: - result_html = format_result( - result, highlight_terms, exact_search, datasets_filter - ) - if result_html != "": - filtered_num_results += 1 - results_for_lang_html += result_html - if language == "all" and not exact_search: - results_for_lang_html = f""" -
      - - Results for language: {lang} - - {results_for_lang_html} -
      """ - result_page_html += results_for_lang_html - - if num_results is not None: - header_html += """
      - Total number of matches: {}
      """.format( - num_results - ) - return header_html + result_page_html - - -def extract_results_from_payload(query, language, payload, exact_search): - results = payload["results"] - processed_results = dict() - datasets = set() - highlight_terms = None - num_results = None - - if exact_search: - highlight_terms = query - num_results = payload["num_results"] - results = {"dummy": results} - else: - highlight_terms = payload["highlight_terms"] - - for lang, results_for_lang in results.items(): - processed_results[lang] = list() - for result in results_for_lang: - text = result["text"] - url = ( - result["meta"]["url"] - if "meta" in result - and result["meta"] is not None - and "url" in result["meta"] - else None - ) - docid = result["docid"] - _, dataset, _ = docid.split("/") - datasets.add(dataset) - processed_results[lang].append((text, url, docid)) - - return processed_results, highlight_terms, num_results, list(datasets) - - -def no_query_error_message(): - return f""" -

      - Please provide a non-empty query. -




      """ - - -def process_error(error_type, payload): - if error_type == "unsupported_lang": - detected_lang = payload["err"]["meta"]["detected_lang"] - return f""" -

      - Detected language {detected_lang} is not supported.
      - Please choose a language from the dropdown or type another query. -




      """ - - -def extract_error_from_payload(payload): - if "err" in payload: - return payload["err"]["type"] - return None - - -def request_payload(query, language, exact_search, num_results=10, received_results=0): - post_data = {"query": query, "k": num_results, "received_results": received_results} - if language != "detect_language": - post_data["lang"] = language - address = ( - os.environ.get("address_exact_search") - if exact_search - else os.environ.get("address") - ) - output = requests.post( - address, - headers={"Content-type": "application/json"}, - data=json.dumps(post_data), - timeout=120, - ) - payload = json.loads(output.text) - return payload - - -title = ( - """

      🌸 🔎 ROOTS search tool 🔍 🌸

      """ -) -description = """ - -The ROOTS corpus was developed during the [BigScience workshop](https://bigscience.huggingface.co/) for the purpose -of training the Multilingual Large Language Model [BLOOM](https://huggingface.co/bigscience/bloom). The ROOTS Search -Tool allows you to search through the ROOTS corpus. We serve a BM25 index for each language or group of languages -included in ROOTS. We also offer exact search which is enabled if you enclose your query in double quotes. More details -about the implementation and use cases is available in our [paper](https://arxiv.org/abs/2302.14035) - please cite it -if you use ROOTS Search Tool in your work. For more information and instructions on how to access the full corpus -consult [this form](https://forms.gle/qyYswbEL5kA23Wu99).""" - - -if __name__ == "__main__": - demo = gr.Blocks(css=".underline-on-hover:hover { text-decoration: underline; }") - - with demo: - processed_results_state = gr.State([]) - highlight_terms_state = gr.State([]) - num_results_state = gr.State(0) - exact_search_state = gr.State(False) - received_results_state = gr.State(0) - - with gr.Row(): - gr.Markdown(value=title) - with gr.Row(): - gr.Markdown(value=description) - with gr.Row(): - query = gr.Textbox( - lines=1, - max_lines=1, - placeholder="Put your query in double quotes for exact search.", - label="Query", - ) - with gr.Row(): - lang = gr.Dropdown( - choices=[ - "ar", - "ca", - "code", - "en", - "es", - "eu", - "fr", - "id", - "indic", - "nigercongo", - "pt", - "vi", - "zh", - "detect_language", - "all", - ], - value="en", - label="Language", - ) - k = gr.Slider( - 1, - 100, - value=10, - step=1, - label="Max Results in fuzzy search or Max Results per page in exact search", - ) - with gr.Row(): - submit_btn = gr.Button("Submit") - with gr.Row(visible=False) as datasets_filter: - available_datasets = gr.Dropdown( - type="value", - choices=[], - value=[], - label="Datasets Filter", - multiselect=True, - ) - with gr.Row(): - result_page_html = gr.HTML(label="Results") - - with gr.Row(visible=False) as pagination: - next_page_btn = gr.Button("Next Page") - - def run_query(query, lang, k, dropdown_input, received_results): - query = query.strip() - exact_search = False - if query.startswith('"') and query.endswith('"') and len(query) >= 2: - exact_search = True - query = query[1:-1] - else: - query = " ".join(query.split()) - if query == "" or query is None: - return ( - [], - [], - 0, - False, - no_query_error_message(), - [], - ) - - payload = request_payload(query, lang, exact_search, k, received_results) - err = extract_error_from_payload(payload) - if err is not None: - return ( - [], - [], - 0, - False, - process_error(err, payload), - [], - ) - - ( - processed_results, - highlight_terms, - num_results, - ds, - ) = extract_results_from_payload( - query, - lang, - payload, - exact_search, - ) - result_page = format_result_page( - lang, processed_results, highlight_terms, num_results, exact_search - ) - return ( - processed_results, - highlight_terms, - num_results, - exact_search, - result_page, - ds, - ) - - def submit(query, lang, k, dropdown_input): - print("submitting", query, lang, k) - ( - processed_results, - highlight_terms, - num_results, - exact_search, - result_page, - datasets, - ) = run_query(query, lang, k, dropdown_input, 0) - has_more_results = exact_search and (num_results > k) - current_results = ( - len(next(iter(processed_results.values()))) - if len(processed_results) > 0 - else 0 - ) - return [ - processed_results, - 
highlight_terms, - num_results, - exact_search, - gr.update(visible=True) - if current_results > 0 - else gr.update(visible=False), - gr.Dropdown.update(choices=datasets, value=datasets), - gr.update(visible=has_more_results), - current_results, - result_page, - ] - - def next_page( - query, - lang, - k, - dropdown_input, - received_results, - processed_results, - ): - ( - processed_results, - highlight_terms, - num_results, - exact_search, - result_page, - datasets, - ) = run_query(query, lang, k, dropdown_input, received_results) - current_results = sum( - len(results) for results in processed_results.values() - ) - has_more_results = exact_search and ( - received_results + current_results < num_results - ) - print("received_results", received_results) - print("current_results", current_results) - print("has_more_results", has_more_results) - return [ - processed_results, - highlight_terms, - num_results, - exact_search, - gr.update(visible=True) - if current_results > 0 - else gr.update(visible=False), - gr.Dropdown.update(choices=datasets, value=datasets), - gr.update(visible=current_results >= k and has_more_results), - received_results + current_results, - result_page, - ] - - def filter_datasets( - lang, - processed_results, - highlight_terms, - num_results, - exact_search, - datasets_filter, - ): - result_page_html = format_result_page( - lang, - processed_results, - highlight_terms, - num_results, - exact_search, - datasets_filter, - ) - return result_page_html - - query.submit( - fn=submit, - inputs=[query, lang, k, available_datasets], - outputs=[ - processed_results_state, - highlight_terms_state, - num_results_state, - exact_search_state, - datasets_filter, - available_datasets, - pagination, - received_results_state, - result_page_html, - ], - ) - submit_btn.click( - submit, - inputs=[query, lang, k, available_datasets], - outputs=[ - processed_results_state, - highlight_terms_state, - num_results_state, - exact_search_state, - datasets_filter, - available_datasets, - pagination, - received_results_state, - result_page_html, - ], - ) - - next_page_btn.click( - next_page, - inputs=[ - query, - lang, - k, - available_datasets, - received_results_state, - processed_results_state, - ], - outputs=[ - processed_results_state, - highlight_terms_state, - num_results_state, - exact_search_state, - datasets_filter, - available_datasets, - pagination, - received_results_state, - result_page_html, - ], - ) - - available_datasets.change( - filter_datasets, - inputs=[ - lang, - processed_results_state, - highlight_terms_state, - num_results_state, - exact_search_state, - available_datasets, - ], - outputs=result_page_html, - ) - demo.launch(enable_queue=False, debug=True) diff --git a/spaces/bioriAsaeru/text-to-voice/CRACK FBX 2011 Win64 How to Use Autodesk Inventor 2011 with FBX Plug-in.md b/spaces/bioriAsaeru/text-to-voice/CRACK FBX 2011 Win64 How to Use Autodesk Inventor 2011 with FBX Plug-in.md deleted file mode 100644 index 5d981ce5c5c4f053a27db09bd965a5a09a098edf..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/CRACK FBX 2011 Win64 How to Use Autodesk Inventor 2011 with FBX Plug-in.md +++ /dev/null @@ -1,11 +0,0 @@ -
      -

      Portable PotPlayer 1.7.21873 PotPlayer is a fast and lightweight multimedia player that supports most popular audio and video formats without any codec packs and can be customized with skins. Using a clean design, PotPlayer removes all the clutter found in most modern media players...

      -

3D Video Player 4.5.4 Crack Fixed





      -

      JPEGView 1.1.43 Lightweight portable image viewer designed to help you manage and view your image files, optimized to be small and fast, with strong and rich functionality. On-the-fly image processing is provided - allowing to adjust typical parameters as sharpness, color... ImPlay 1.2.1 ImPlay is an innovative multimedia player that utilizes the MPV engine to provide an unparalleled user experience. It offers a sleek and modern design with a highly customizable interface, allowing you to personalize your music and video experience... XMPlay 3.8.5.43 XMPlay is a full-featured audio player, supporting OGG, MP3, MP2, WMA, WAV, AIFF, MO3, IT, XM, S3M, MTM, MOD, UMX audio formats, and PLS, M3U, ASX, WAX playlists. A load more formats are also supported via plugins. You might have heard that...MusikCube 0.99.5 Fully functional terminal-based music player, library, and streaming audio server that runs natively on Windows, macOS, and Linux. It also runs well on a Raspberry Pi with a custom DAC (e.g. IQaudIO DAC+, HiFiBerry DAC+ and others), and can output 24bit... Universal Media Server 13.2.0 Universal Media Server (UMS) is a free DLNA, UPnP and HTTP/S Media Server. Stream your media to your devices, whether they are TVs, smartphones, gaming consoles, computers, audio receivers, and more! UMS streams or transcodes many media formats...

      -

      QMPlay2 23.02.05 Feature-rich, open source multimedia player backed by the amazing FFmpeg tool. It stands out for its impressive list of supported formats, flexibility and simplicity. The long list of features include video snapshots, media library, video controls, visualizations... VideoInspector Lite (Adfree)2.15.10.154 Program that gives you more information about your video files. With VideoInspector you'll know why your video files has no sound or refuses to play correctly. It will help you install the required Codecs (coder/decoder software) to play a specific file. It can... sView 23.02.4 sView is a stereoscopic media player which supports image viewing and movie playback. The list of supported formats is very large thanks to the FFmpeg framework including MKV, WebM, OGM, AVI, FLAC, WMV, MOV, BIK (BINK video), FLV, VOB, MPEG, WMA... ImageGrab 7.0.1 User-friendly software that extract images from videos in .bmp or .jpeg with adjustable quality. The program includes also an intervalometer feature, that helps extract automatically a lot of pictures at given intervals. Text may be inlayed in the snapshot... mpv 0.35.210 mpv is a fork of mplayer2 and MPlayer with support for a wide variety of video file formats, audio and video codecs, and subtitle types. It offers hardware acceleration via FFmpeg, with support for VDPAU and VAAPI plus DXVA2 on Windows, and VDA and...

      -

      Music Collection 3.5.7.0 Music Collection allows you to easily archive your music library. Using it you can enter in a collection any kind of music media that you own or you intend to. You can add or edit any kind of information concerning the albums in the collection, no matter if it's... AIMP 5.11 Build 2421 AIMP is a powerful audio player and internet radio player that allows you to listen to CDA, AAC, AC3, APE, DTS, FLAC, MP3, OGG, OPUS, WAV, WMA, MIDI,... etc. and record your music with an outstanding sound quality. AIMP includes a 18-band equalizer with... Shutter Encoder 16.8 Shutter Encoder is a surprisingly powerful, if unconventional, video, audio and image converter based on FFmpeg and other great tools. It has been designed by video editors in order to be as accessible and efficient as possible. The program comes with a host...KataLib 3.16.0 KataLib is a surprisingly powerful music player with some useful extra features. It's packed with a audio player, metadata editor, audio converter and more. The audio player module can play any audio file or streaming videos, DJ mode with auto or manual... VidCoder 8.24 VidCoder is an open-source DVD/Blu-ray ripping and video transcoding application. It uses HandBrake as its encoding engine. Calling directly into the HandBrake library gives it a more rich UI than the official HandBrake Windows GUI. Easily batch convert... Video Compressor 2023.4.1.26 Video Compressor is reliable piece of software that can impressively reduce size of video files in all popular video formats without reducing quality. Change the video resolution, set any video and audio bitrates, use the most efficient video codecs: H.265 and... Subtitle Edit 3.6.11 Subtitle Edit is a free editor for video subtitles - a subtitle editor. With Subtitle Edit you can easily adjust a subtitle if it is out of sync with the video and much more. The program can read, write, and convert between 100+ subtitle formats, like: SubRip (*.srt)...

      -

      HEIC Converter 2.0.5 Convert HEIC to JPG, HEIC to PNG, HEVC (H.265) to AVC/MP4 (H.264). Drag and drop files or folders to HEIC Converter and click Convert, that's it. You can also keep Exif data in the process of conversion. For now, it's simply the only free solution available...KMPlayer Plus Adfree 2023.1.26.12 The KMPlayer Professional Media Player is a versatile media player which can cover various types of container formats such as AVI, MKV, HEVC (H265 codec) among others, without the need of codec packs. It handles a wide range of subtitles and allows you to... Ashampoo Burning Studio 2023 1.24.0 Ashampoo Burning Studio 2023 is a powerful disc burning software for CD, DVD and Blu-ray discs. The software quickly burns files, audio and video to all recordable disc types but also specialized media such as BDXL or M-Disc. The built-in disc ripping detects... Chasys Draw IES 5.23.01 Chasys Draw IES is a excellent set of photo editing and image converting tools. The program includes a layer-based image editor with animation, icon editing support and super-resolution via image stacking (Chasys Draw IES Artist), a multi-threaded image file.. ShareX 15.0 Take screenshots, save them to Windows clipboard, hard disk or upload them to over 80+ different remote locations including Imgur, Flicker, Twitpic, Dropbox, Pastebin and others, using simple hotkey combinations. ShareX can enhance the capture with... HandBrake 1.6.1 Open-source multithreaded video transcoder for Windows. HandBrake grabs video from a variety of sources, including DVDs, audio from sources as well, including MPEG audio tracks. You'll then be able to output a digital file in a variety of formats, including MKV... Display Driver Uninstaller (DDU) 18.0.6.0 Display Driver Uninstaller is a display adapter driver uninstaller that can help you clean uninstall graphics driver such as NVIDIA, Intel, AMD (ATI) and Realtek and Creative SoundBlaster audio drivers, without leaving leftovers behind (such as registry keys... Caesium Image Compressor 2.3.0 Caesium is a free, advanced images compression tool for photos and images (JPG, GIF, PNG, JPG, WebP), supporting batch, preview and many more. Caesium reduces the size of your pictures up to 90%, preserving the original quality. Caesium allows you to...Now, let's get back to work so we have time to make popcorn before our favourite show!

      -

      -

DVDFab is intended to create a video with 3D effects from a standard 2D video file. The application can automatically create an anaglyph video from a file in any popular format, including MP4, DVD, AVI, WMV, MPG, OGG, MKV, and so on. Likewise, you can choose any format for the resulting video file and a profile for practically any media player.

      -
      -
      \ No newline at end of file diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/data/__init__.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/data/__init__.py deleted file mode 100644 index 2906ff12bc85a894837579f3137f6f71a0438329..0000000000000000000000000000000000000000 --- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/data/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -"""Audio loading and writing support. Datasets for raw audio -or also including some metadata.""" - -# flake8: noqa -from . import audio, audio_dataset, info_audio_dataset, music_dataset, sound_dataset diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/config/defaults.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/config/defaults.py deleted file mode 100644 index bd2a5f6b2de4af2caa1f65c64ab93a5e3ac21780..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/config/defaults.py +++ /dev/null @@ -1,650 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from .config import CfgNode as CN - -# NOTE: given the new config system -# (https://detectron2.readthedocs.io/en/latest/tutorials/lazyconfigs.html), -# we will stop adding new functionalities to default CfgNode. - -# ----------------------------------------------------------------------------- -# Convention about Training / Test specific parameters -# ----------------------------------------------------------------------------- -# Whenever an argument can be either used for training or for testing, the -# corresponding name will be post-fixed by a _TRAIN for a training parameter, -# or _TEST for a test-specific parameter. -# For example, the number of images during training will be -# IMAGES_PER_BATCH_TRAIN, while the number of images for testing will be -# IMAGES_PER_BATCH_TEST - -# ----------------------------------------------------------------------------- -# Config definition -# ----------------------------------------------------------------------------- - -_C = CN() - -# The version number, to upgrade from old configs to new ones if any -# changes happen. It's recommended to keep a VERSION in your config file. -_C.VERSION = 2 - -_C.MODEL = CN() -_C.MODEL.LOAD_PROPOSALS = False -_C.MODEL.MASK_ON = False -_C.MODEL.KEYPOINT_ON = False -_C.MODEL.DEVICE = "cuda" -_C.MODEL.META_ARCHITECTURE = "GeneralizedRCNN" - -# Path (a file path, or URL like detectron2://.., https://..) to a checkpoint file -# to be loaded to the model. You can find available models in the model zoo. -_C.MODEL.WEIGHTS = "" - -# Values to be used for image normalization (BGR order, since INPUT.FORMAT defaults to BGR). -# To train on images of different number of channels, just set different mean & std. -# Default values are the mean pixel value from ImageNet: [103.53, 116.28, 123.675] -_C.MODEL.PIXEL_MEAN = [103.530, 116.280, 123.675] -# When using pre-trained models in Detectron1 or any MSRA models, -# std has been absorbed into its conv1 weights, so the std needs to be set 1. 
-# Otherwise, you can use [57.375, 57.120, 58.395] (ImageNet std) -_C.MODEL.PIXEL_STD = [1.0, 1.0, 1.0] - - -# ----------------------------------------------------------------------------- -# INPUT -# ----------------------------------------------------------------------------- -_C.INPUT = CN() -# By default, {MIN,MAX}_SIZE options are used in transforms.ResizeShortestEdge. -# Please refer to ResizeShortestEdge for detailed definition. -# Size of the smallest side of the image during training -_C.INPUT.MIN_SIZE_TRAIN = (800,) -# Sample size of smallest side by choice or random selection from range give by -# INPUT.MIN_SIZE_TRAIN -_C.INPUT.MIN_SIZE_TRAIN_SAMPLING = "choice" -# Maximum size of the side of the image during training -_C.INPUT.MAX_SIZE_TRAIN = 1333 -# Size of the smallest side of the image during testing. Set to zero to disable resize in testing. -_C.INPUT.MIN_SIZE_TEST = 800 -# Maximum size of the side of the image during testing -_C.INPUT.MAX_SIZE_TEST = 1333 -# Mode for flipping images used in data augmentation during training -# choose one of ["horizontal, "vertical", "none"] -_C.INPUT.RANDOM_FLIP = "horizontal" - -# `True` if cropping is used for data augmentation during training -_C.INPUT.CROP = CN({"ENABLED": False}) -# Cropping type. See documentation of `detectron2.data.transforms.RandomCrop` for explanation. -_C.INPUT.CROP.TYPE = "relative_range" -# Size of crop in range (0, 1] if CROP.TYPE is "relative" or "relative_range" and in number of -# pixels if CROP.TYPE is "absolute" -_C.INPUT.CROP.SIZE = [0.9, 0.9] - - -# Whether the model needs RGB, YUV, HSV etc. -# Should be one of the modes defined here, as we use PIL to read the image: -# https://pillow.readthedocs.io/en/stable/handbook/concepts.html#concept-modes -# with BGR being the one exception. One can set image format to BGR, we will -# internally use RGB for conversion and flip the channels over -_C.INPUT.FORMAT = "BGR" -# The ground truth mask format that the model will use. -# Mask R-CNN supports either "polygon" or "bitmask" as ground truth. -_C.INPUT.MASK_FORMAT = "polygon" # alternative: "bitmask" - - -# ----------------------------------------------------------------------------- -# Dataset -# ----------------------------------------------------------------------------- -_C.DATASETS = CN() -# List of the dataset names for training. Must be registered in DatasetCatalog -# Samples from these datasets will be merged and used as one dataset. -_C.DATASETS.TRAIN = () -# List of the pre-computed proposal files for training, which must be consistent -# with datasets listed in DATASETS.TRAIN. -_C.DATASETS.PROPOSAL_FILES_TRAIN = () -# Number of top scoring precomputed proposals to keep for training -_C.DATASETS.PRECOMPUTED_PROPOSAL_TOPK_TRAIN = 2000 -# List of the dataset names for testing. Must be registered in DatasetCatalog -_C.DATASETS.TEST = () -# List of the pre-computed proposal files for test, which must be consistent -# with datasets listed in DATASETS.TEST. -_C.DATASETS.PROPOSAL_FILES_TEST = () -# Number of top scoring precomputed proposals to keep for test -_C.DATASETS.PRECOMPUTED_PROPOSAL_TOPK_TEST = 1000 - -# ----------------------------------------------------------------------------- -# DataLoader -# ----------------------------------------------------------------------------- -_C.DATALOADER = CN() -# Number of data loading threads -_C.DATALOADER.NUM_WORKERS = 4 -# If True, each batch should contain only images for which the aspect ratio -# is compatible. 
This groups portrait images together, and landscape images -# are not batched with portrait images. -_C.DATALOADER.ASPECT_RATIO_GROUPING = True -# Options: TrainingSampler, RepeatFactorTrainingSampler -_C.DATALOADER.SAMPLER_TRAIN = "TrainingSampler" -# Repeat threshold for RepeatFactorTrainingSampler -_C.DATALOADER.REPEAT_THRESHOLD = 0.0 -# Tf True, when working on datasets that have instance annotations, the -# training dataloader will filter out images without associated annotations -_C.DATALOADER.FILTER_EMPTY_ANNOTATIONS = True - -# ---------------------------------------------------------------------------- # -# Backbone options -# ---------------------------------------------------------------------------- # -_C.MODEL.BACKBONE = CN() - -_C.MODEL.BACKBONE.NAME = "build_resnet_backbone" -# Freeze the first several stages so they are not trained. -# There are 5 stages in ResNet. The first is a convolution, and the following -# stages are each group of residual blocks. -_C.MODEL.BACKBONE.FREEZE_AT = 2 - - -# ---------------------------------------------------------------------------- # -# FPN options -# ---------------------------------------------------------------------------- # -_C.MODEL.FPN = CN() -# Names of the input feature maps to be used by FPN -# They must have contiguous power of 2 strides -# e.g., ["res2", "res3", "res4", "res5"] -_C.MODEL.FPN.IN_FEATURES = [] -_C.MODEL.FPN.OUT_CHANNELS = 256 - -# Options: "" (no norm), "GN" -_C.MODEL.FPN.NORM = "" - -# Types for fusing the FPN top-down and lateral features. Can be either "sum" or "avg" -_C.MODEL.FPN.FUSE_TYPE = "sum" - - -# ---------------------------------------------------------------------------- # -# Proposal generator options -# ---------------------------------------------------------------------------- # -_C.MODEL.PROPOSAL_GENERATOR = CN() -# Current proposal generators include "RPN", "RRPN" and "PrecomputedProposals" -_C.MODEL.PROPOSAL_GENERATOR.NAME = "RPN" -# Proposal height and width both need to be greater than MIN_SIZE -# (a the scale used during training or inference) -_C.MODEL.PROPOSAL_GENERATOR.MIN_SIZE = 0 - - -# ---------------------------------------------------------------------------- # -# Anchor generator options -# ---------------------------------------------------------------------------- # -_C.MODEL.ANCHOR_GENERATOR = CN() -# The generator can be any name in the ANCHOR_GENERATOR registry -_C.MODEL.ANCHOR_GENERATOR.NAME = "DefaultAnchorGenerator" -# Anchor sizes (i.e. sqrt of area) in absolute pixels w.r.t. the network input. -# Format: list[list[float]]. SIZES[i] specifies the list of sizes to use for -# IN_FEATURES[i]; len(SIZES) must be equal to len(IN_FEATURES) or 1. -# When len(SIZES) == 1, SIZES[0] is used for all IN_FEATURES. -_C.MODEL.ANCHOR_GENERATOR.SIZES = [[32, 64, 128, 256, 512]] -# Anchor aspect ratios. For each area given in `SIZES`, anchors with different aspect -# ratios are generated by an anchor generator. -# Format: list[list[float]]. ASPECT_RATIOS[i] specifies the list of aspect ratios (H/W) -# to use for IN_FEATURES[i]; len(ASPECT_RATIOS) == len(IN_FEATURES) must be true, -# or len(ASPECT_RATIOS) == 1 is true and aspect ratio list ASPECT_RATIOS[0] is used -# for all IN_FEATURES. -_C.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS = [[0.5, 1.0, 2.0]] -# Anchor angles. -# list[list[float]], the angle in degrees, for each input feature map. -# ANGLES[i] specifies the list of angles for IN_FEATURES[i]. 
-_C.MODEL.ANCHOR_GENERATOR.ANGLES = [[-90, 0, 90]] -# Relative offset between the center of the first anchor and the top-left corner of the image -# Value has to be in [0, 1). Recommend to use 0.5, which means half stride. -# The value is not expected to affect model accuracy. -_C.MODEL.ANCHOR_GENERATOR.OFFSET = 0.0 - -# ---------------------------------------------------------------------------- # -# RPN options -# ---------------------------------------------------------------------------- # -_C.MODEL.RPN = CN() -_C.MODEL.RPN.HEAD_NAME = "StandardRPNHead" # used by RPN_HEAD_REGISTRY - -# Names of the input feature maps to be used by RPN -# e.g., ["p2", "p3", "p4", "p5", "p6"] for FPN -_C.MODEL.RPN.IN_FEATURES = ["res4"] -# Remove RPN anchors that go outside the image by BOUNDARY_THRESH pixels -# Set to -1 or a large value, e.g. 100000, to disable pruning anchors -_C.MODEL.RPN.BOUNDARY_THRESH = -1 -# IOU overlap ratios [BG_IOU_THRESHOLD, FG_IOU_THRESHOLD] -# Minimum overlap required between an anchor and ground-truth box for the -# (anchor, gt box) pair to be a positive example (IoU >= FG_IOU_THRESHOLD -# ==> positive RPN example: 1) -# Maximum overlap allowed between an anchor and ground-truth box for the -# (anchor, gt box) pair to be a negative examples (IoU < BG_IOU_THRESHOLD -# ==> negative RPN example: 0) -# Anchors with overlap in between (BG_IOU_THRESHOLD <= IoU < FG_IOU_THRESHOLD) -# are ignored (-1) -_C.MODEL.RPN.IOU_THRESHOLDS = [0.3, 0.7] -_C.MODEL.RPN.IOU_LABELS = [0, -1, 1] -# Number of regions per image used to train RPN -_C.MODEL.RPN.BATCH_SIZE_PER_IMAGE = 256 -# Target fraction of foreground (positive) examples per RPN minibatch -_C.MODEL.RPN.POSITIVE_FRACTION = 0.5 -# Options are: "smooth_l1", "giou", "diou", "ciou" -_C.MODEL.RPN.BBOX_REG_LOSS_TYPE = "smooth_l1" -_C.MODEL.RPN.BBOX_REG_LOSS_WEIGHT = 1.0 -# Weights on (dx, dy, dw, dh) for normalizing RPN anchor regression targets -_C.MODEL.RPN.BBOX_REG_WEIGHTS = (1.0, 1.0, 1.0, 1.0) -# The transition point from L1 to L2 loss. Set to 0.0 to make the loss simply L1. -_C.MODEL.RPN.SMOOTH_L1_BETA = 0.0 -_C.MODEL.RPN.LOSS_WEIGHT = 1.0 -# Number of top scoring RPN proposals to keep before applying NMS -# When FPN is used, this is *per FPN level* (not total) -_C.MODEL.RPN.PRE_NMS_TOPK_TRAIN = 12000 -_C.MODEL.RPN.PRE_NMS_TOPK_TEST = 6000 -# Number of top scoring RPN proposals to keep after applying NMS -# When FPN is used, this limit is applied per level and then again to the union -# of proposals from all levels -# NOTE: When FPN is used, the meaning of this config is different from Detectron1. -# It means per-batch topk in Detectron1, but per-image topk here. -# See the "find_top_rpn_proposals" function for details. -_C.MODEL.RPN.POST_NMS_TOPK_TRAIN = 2000 -_C.MODEL.RPN.POST_NMS_TOPK_TEST = 1000 -# NMS threshold used on RPN proposals -_C.MODEL.RPN.NMS_THRESH = 0.7 -# Set this to -1 to use the same number of output channels as input channels. -_C.MODEL.RPN.CONV_DIMS = [-1] - -# ---------------------------------------------------------------------------- # -# ROI HEADS options -# ---------------------------------------------------------------------------- # -_C.MODEL.ROI_HEADS = CN() -_C.MODEL.ROI_HEADS.NAME = "Res5ROIHeads" -# Number of foreground classes -_C.MODEL.ROI_HEADS.NUM_CLASSES = 80 -# Names of the input feature maps to be used by ROI heads -# Currently all heads (box, mask, ...) 
use the same input feature map list -# e.g., ["p2", "p3", "p4", "p5"] is commonly used for FPN -_C.MODEL.ROI_HEADS.IN_FEATURES = ["res4"] -# IOU overlap ratios [IOU_THRESHOLD] -# Overlap threshold for an RoI to be considered background (if < IOU_THRESHOLD) -# Overlap threshold for an RoI to be considered foreground (if >= IOU_THRESHOLD) -_C.MODEL.ROI_HEADS.IOU_THRESHOLDS = [0.5] -_C.MODEL.ROI_HEADS.IOU_LABELS = [0, 1] -# RoI minibatch size *per image* (number of regions of interest [ROIs]) during training -# Total number of RoIs per training minibatch = -# ROI_HEADS.BATCH_SIZE_PER_IMAGE * SOLVER.IMS_PER_BATCH -# E.g., a common configuration is: 512 * 16 = 8192 -_C.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 512 -# Target fraction of RoI minibatch that is labeled foreground (i.e. class > 0) -_C.MODEL.ROI_HEADS.POSITIVE_FRACTION = 0.25 - -# Only used on test mode - -# Minimum score threshold (assuming scores in a [0, 1] range); a value chosen to -# balance obtaining high recall with not having too many low precision -# detections that will slow down inference post processing steps (like NMS) -# A default threshold of 0.0 increases AP by ~0.2-0.3 but significantly slows down -# inference. -_C.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.05 -# Overlap threshold used for non-maximum suppression (suppress boxes with -# IoU >= this threshold) -_C.MODEL.ROI_HEADS.NMS_THRESH_TEST = 0.5 -# If True, augment proposals with ground-truth boxes before sampling proposals to -# train ROI heads. -_C.MODEL.ROI_HEADS.PROPOSAL_APPEND_GT = True - -# ---------------------------------------------------------------------------- # -# Box Head -# ---------------------------------------------------------------------------- # -_C.MODEL.ROI_BOX_HEAD = CN() -# C4 don't use head name option -# Options for non-C4 models: FastRCNNConvFCHead, -_C.MODEL.ROI_BOX_HEAD.NAME = "" -# Options are: "smooth_l1", "giou", "diou", "ciou" -_C.MODEL.ROI_BOX_HEAD.BBOX_REG_LOSS_TYPE = "smooth_l1" -# The final scaling coefficient on the box regression loss, used to balance the magnitude of its -# gradients with other losses in the model. See also `MODEL.ROI_KEYPOINT_HEAD.LOSS_WEIGHT`. -_C.MODEL.ROI_BOX_HEAD.BBOX_REG_LOSS_WEIGHT = 1.0 -# Default weights on (dx, dy, dw, dh) for normalizing bbox regression targets -# These are empirically chosen to approximately lead to unit variance targets -_C.MODEL.ROI_BOX_HEAD.BBOX_REG_WEIGHTS = (10.0, 10.0, 5.0, 5.0) -# The transition point from L1 to L2 loss. Set to 0.0 to make the loss simply L1. -_C.MODEL.ROI_BOX_HEAD.SMOOTH_L1_BETA = 0.0 -_C.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION = 14 -_C.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO = 0 -# Type of pooling operation applied to the incoming feature map for each RoI -_C.MODEL.ROI_BOX_HEAD.POOLER_TYPE = "ROIAlignV2" - -_C.MODEL.ROI_BOX_HEAD.NUM_FC = 0 -# Hidden layer dimension for FC layers in the RoI box head -_C.MODEL.ROI_BOX_HEAD.FC_DIM = 1024 -_C.MODEL.ROI_BOX_HEAD.NUM_CONV = 0 -# Channel dimension for Conv layers in the RoI box head -_C.MODEL.ROI_BOX_HEAD.CONV_DIM = 256 -# Normalization method for the convolution layers. -# Options: "" (no norm), "GN", "SyncBN". -_C.MODEL.ROI_BOX_HEAD.NORM = "" -# Whether to use class agnostic for bbox regression -_C.MODEL.ROI_BOX_HEAD.CLS_AGNOSTIC_BBOX_REG = False -# If true, RoI heads use bounding boxes predicted by the box head rather than proposal boxes. 
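The ROI_HEADS IOU_THRESHOLDS / IOU_LABELS pair above reads as a simple thresholding rule; a plain-PyTorch illustration with made-up IoU values, not from the file (the real logic lives in detectron2's Matcher, which also covers the RPN's three-way labels):

import torch

iou = torch.tensor([0.12, 0.49, 0.50, 0.83])  # IoU of each proposal with its best ground-truth box
labels = (iou >= 0.5).long()                  # IOU_THRESHOLDS = [0.5], IOU_LABELS = [0, 1]
# labels == tensor([0, 0, 1, 1]): below the threshold -> background (0), at or above -> foreground (1)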
-_C.MODEL.ROI_BOX_HEAD.TRAIN_ON_PRED_BOXES = False - -# Federated loss can be used to improve the training of LVIS -_C.MODEL.ROI_BOX_HEAD.USE_FED_LOSS = False -# Sigmoid cross entropy is used with federated loss -_C.MODEL.ROI_BOX_HEAD.USE_SIGMOID_CE = False -# The power value applied to image_count when calculating frequency weight -_C.MODEL.ROI_BOX_HEAD.FED_LOSS_FREQ_WEIGHT_POWER = 0.5 -# Number of classes to keep in total -_C.MODEL.ROI_BOX_HEAD.FED_LOSS_NUM_CLASSES = 50 - -# ---------------------------------------------------------------------------- # -# Cascaded Box Head -# ---------------------------------------------------------------------------- # -_C.MODEL.ROI_BOX_CASCADE_HEAD = CN() -# The number of cascade stages is implicitly defined by the length of the following two configs. -_C.MODEL.ROI_BOX_CASCADE_HEAD.BBOX_REG_WEIGHTS = ( - (10.0, 10.0, 5.0, 5.0), - (20.0, 20.0, 10.0, 10.0), - (30.0, 30.0, 15.0, 15.0), -) -_C.MODEL.ROI_BOX_CASCADE_HEAD.IOUS = (0.5, 0.6, 0.7) - - -# ---------------------------------------------------------------------------- # -# Mask Head -# ---------------------------------------------------------------------------- # -_C.MODEL.ROI_MASK_HEAD = CN() -_C.MODEL.ROI_MASK_HEAD.NAME = "MaskRCNNConvUpsampleHead" -_C.MODEL.ROI_MASK_HEAD.POOLER_RESOLUTION = 14 -_C.MODEL.ROI_MASK_HEAD.POOLER_SAMPLING_RATIO = 0 -_C.MODEL.ROI_MASK_HEAD.NUM_CONV = 0 # The number of convs in the mask head -_C.MODEL.ROI_MASK_HEAD.CONV_DIM = 256 -# Normalization method for the convolution layers. -# Options: "" (no norm), "GN", "SyncBN". -_C.MODEL.ROI_MASK_HEAD.NORM = "" -# Whether to use class agnostic for mask prediction -_C.MODEL.ROI_MASK_HEAD.CLS_AGNOSTIC_MASK = False -# Type of pooling operation applied to the incoming feature map for each RoI -_C.MODEL.ROI_MASK_HEAD.POOLER_TYPE = "ROIAlignV2" - - -# ---------------------------------------------------------------------------- # -# Keypoint Head -# ---------------------------------------------------------------------------- # -_C.MODEL.ROI_KEYPOINT_HEAD = CN() -_C.MODEL.ROI_KEYPOINT_HEAD.NAME = "KRCNNConvDeconvUpsampleHead" -_C.MODEL.ROI_KEYPOINT_HEAD.POOLER_RESOLUTION = 14 -_C.MODEL.ROI_KEYPOINT_HEAD.POOLER_SAMPLING_RATIO = 0 -_C.MODEL.ROI_KEYPOINT_HEAD.CONV_DIMS = tuple(512 for _ in range(8)) -_C.MODEL.ROI_KEYPOINT_HEAD.NUM_KEYPOINTS = 17 # 17 is the number of keypoints in COCO. - -# Images with too few (or no) keypoints are excluded from training. -_C.MODEL.ROI_KEYPOINT_HEAD.MIN_KEYPOINTS_PER_IMAGE = 1 -# Normalize by the total number of visible keypoints in the minibatch if True. -# Otherwise, normalize by the total number of keypoints that could ever exist -# in the minibatch. -# The keypoint softmax loss is only calculated on visible keypoints. -# Since the number of visible keypoints can vary significantly between -# minibatches, this has the effect of up-weighting the importance of -# minibatches with few visible keypoints. (Imagine the extreme case of -# only one visible keypoint versus N: in the case of N, each one -# contributes 1/N to the gradient compared to the single keypoint -# determining the gradient direction). Instead, we can normalize the -# loss by the total number of keypoints, if it were the case that all -# keypoints were visible in a full minibatch. (Returning to the example, -# this means that the one visible keypoint contributes as much as each -# of the N keypoints.)
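A toy numerical sketch, not from the file, of the two keypoint-loss normalization options described in the comment above; the batch shape is hypothetical:

import torch

ce_on_visible = torch.rand(5)   # softmax CE terms, one per visible keypoint
num_visible = 5                 # only 5 keypoints are actually visible in this minibatch
num_possible = 32 * 17          # 32 sampled instances x 17 COCO keypoints

loss_by_visible = ce_on_visible.sum() / num_visible    # NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS = True
loss_by_possible = ce_on_visible.sum() / num_possible  # False; pair this with LOSS_WEIGHT ~ 4.0 as noted below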
-_C.MODEL.ROI_KEYPOINT_HEAD.NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS = True -# Multi-task loss weight to use for keypoints -# Recommended values: -# - use 1.0 if NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS is True -# - use 4.0 if NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS is False -_C.MODEL.ROI_KEYPOINT_HEAD.LOSS_WEIGHT = 1.0 -# Type of pooling operation applied to the incoming feature map for each RoI -_C.MODEL.ROI_KEYPOINT_HEAD.POOLER_TYPE = "ROIAlignV2" - -# ---------------------------------------------------------------------------- # -# Semantic Segmentation Head -# ---------------------------------------------------------------------------- # -_C.MODEL.SEM_SEG_HEAD = CN() -_C.MODEL.SEM_SEG_HEAD.NAME = "SemSegFPNHead" -_C.MODEL.SEM_SEG_HEAD.IN_FEATURES = ["p2", "p3", "p4", "p5"] -# Label in the semantic segmentation ground truth that is ignored, i.e., no loss is calculated for -# the correposnding pixel. -_C.MODEL.SEM_SEG_HEAD.IGNORE_VALUE = 255 -# Number of classes in the semantic segmentation head -_C.MODEL.SEM_SEG_HEAD.NUM_CLASSES = 54 -# Number of channels in the 3x3 convs inside semantic-FPN heads. -_C.MODEL.SEM_SEG_HEAD.CONVS_DIM = 128 -# Outputs from semantic-FPN heads are up-scaled to the COMMON_STRIDE stride. -_C.MODEL.SEM_SEG_HEAD.COMMON_STRIDE = 4 -# Normalization method for the convolution layers. Options: "" (no norm), "GN". -_C.MODEL.SEM_SEG_HEAD.NORM = "GN" -_C.MODEL.SEM_SEG_HEAD.LOSS_WEIGHT = 1.0 - -_C.MODEL.PANOPTIC_FPN = CN() -# Scaling of all losses from instance detection / segmentation head. -_C.MODEL.PANOPTIC_FPN.INSTANCE_LOSS_WEIGHT = 1.0 - -# options when combining instance & semantic segmentation outputs -_C.MODEL.PANOPTIC_FPN.COMBINE = CN({"ENABLED": True}) # "COMBINE.ENABLED" is deprecated & not used -_C.MODEL.PANOPTIC_FPN.COMBINE.OVERLAP_THRESH = 0.5 -_C.MODEL.PANOPTIC_FPN.COMBINE.STUFF_AREA_LIMIT = 4096 -_C.MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH = 0.5 - - -# ---------------------------------------------------------------------------- # -# RetinaNet Head -# ---------------------------------------------------------------------------- # -_C.MODEL.RETINANET = CN() - -# This is the number of foreground classes. -_C.MODEL.RETINANET.NUM_CLASSES = 80 - -_C.MODEL.RETINANET.IN_FEATURES = ["p3", "p4", "p5", "p6", "p7"] - -# Convolutions to use in the cls and bbox tower -# NOTE: this doesn't include the last conv for logits -_C.MODEL.RETINANET.NUM_CONVS = 4 - -# IoU overlap ratio [bg, fg] for labeling anchors. -# Anchors with < bg are labeled negative (0) -# Anchors with >= bg and < fg are ignored (-1) -# Anchors with >= fg are labeled positive (1) -_C.MODEL.RETINANET.IOU_THRESHOLDS = [0.4, 0.5] -_C.MODEL.RETINANET.IOU_LABELS = [0, -1, 1] - -# Prior prob for rare case (i.e. foreground) at the beginning of training. -# This is used to set the bias for the logits layer of the classifier subnet. -# This improves training stability in the case of heavy class imbalance. 
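The PRIOR_PROB comment above corresponds to the usual focal-loss bias initialization; a hedged one-liner of that formula (the exact line in the RetinaNet head may differ):

import math

prior_prob = 0.01
bias_init = -math.log((1 - prior_prob) / prior_prob)  # about -4.6
# sigmoid(bias_init) == prior_prob, so every anchor starts out predicting ~1% foreground,
# which keeps early training from being swamped by the huge number of background anchors.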
-_C.MODEL.RETINANET.PRIOR_PROB = 0.01 - -# Inference cls score threshold, only anchors with score > INFERENCE_TH are -# considered for inference (to improve speed) -_C.MODEL.RETINANET.SCORE_THRESH_TEST = 0.05 -# Select topk candidates before NMS -_C.MODEL.RETINANET.TOPK_CANDIDATES_TEST = 1000 -_C.MODEL.RETINANET.NMS_THRESH_TEST = 0.5 - -# Weights on (dx, dy, dw, dh) for normalizing Retinanet anchor regression targets -_C.MODEL.RETINANET.BBOX_REG_WEIGHTS = (1.0, 1.0, 1.0, 1.0) - -# Loss parameters -_C.MODEL.RETINANET.FOCAL_LOSS_GAMMA = 2.0 -_C.MODEL.RETINANET.FOCAL_LOSS_ALPHA = 0.25 -_C.MODEL.RETINANET.SMOOTH_L1_LOSS_BETA = 0.1 -# Options are: "smooth_l1", "giou", "diou", "ciou" -_C.MODEL.RETINANET.BBOX_REG_LOSS_TYPE = "smooth_l1" - -# One of BN, SyncBN, FrozenBN, GN -# Only supports GN until unshared norm is implemented -_C.MODEL.RETINANET.NORM = "" - - -# ---------------------------------------------------------------------------- # -# ResNe[X]t options (ResNets = {ResNet, ResNeXt} -# Note that parts of a resnet may be used for both the backbone and the head -# These options apply to both -# ---------------------------------------------------------------------------- # -_C.MODEL.RESNETS = CN() - -_C.MODEL.RESNETS.DEPTH = 50 -_C.MODEL.RESNETS.OUT_FEATURES = ["res4"] # res4 for C4 backbone, res2..5 for FPN backbone - -# Number of groups to use; 1 ==> ResNet; > 1 ==> ResNeXt -_C.MODEL.RESNETS.NUM_GROUPS = 1 - -# Options: FrozenBN, GN, "SyncBN", "BN" -_C.MODEL.RESNETS.NORM = "FrozenBN" - -# Baseline width of each group. -# Scaling this parameters will scale the width of all bottleneck layers. -_C.MODEL.RESNETS.WIDTH_PER_GROUP = 64 - -# Place the stride 2 conv on the 1x1 filter -# Use True only for the original MSRA ResNet; use False for C2 and Torch models -_C.MODEL.RESNETS.STRIDE_IN_1X1 = True - -# Apply dilation in stage "res5" -_C.MODEL.RESNETS.RES5_DILATION = 1 - -# Output width of res2. Scaling this parameters will scale the width of all 1x1 convs in ResNet -# For R18 and R34, this needs to be set to 64 -_C.MODEL.RESNETS.RES2_OUT_CHANNELS = 256 -_C.MODEL.RESNETS.STEM_OUT_CHANNELS = 64 - -# Apply Deformable Convolution in stages -# Specify if apply deform_conv on Res2, Res3, Res4, Res5 -_C.MODEL.RESNETS.DEFORM_ON_PER_STAGE = [False, False, False, False] -# Use True to use modulated deform_conv (DeformableV2, https://arxiv.org/abs/1811.11168); -# Use False for DeformableV1. -_C.MODEL.RESNETS.DEFORM_MODULATED = False -# Number of groups in deformable conv. -_C.MODEL.RESNETS.DEFORM_NUM_GROUPS = 1 - - -# ---------------------------------------------------------------------------- # -# Solver -# ---------------------------------------------------------------------------- # -_C.SOLVER = CN() - -# Options: WarmupMultiStepLR, WarmupCosineLR. -# See detectron2/solver/build.py for definition. -_C.SOLVER.LR_SCHEDULER_NAME = "WarmupMultiStepLR" - -_C.SOLVER.MAX_ITER = 40000 - -_C.SOLVER.BASE_LR = 0.001 -# The end lr, only used by WarmupCosineLR -_C.SOLVER.BASE_LR_END = 0.0 - -_C.SOLVER.MOMENTUM = 0.9 - -_C.SOLVER.NESTEROV = False - -_C.SOLVER.WEIGHT_DECAY = 0.0001 -# The weight decay that's applied to parameters of normalization layers -# (typically the affine transformation) -_C.SOLVER.WEIGHT_DECAY_NORM = 0.0 - -_C.SOLVER.GAMMA = 0.1 -# The iteration number to decrease learning rate by GAMMA. 
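A hedged paraphrase, not the library code, of the schedule these solver defaults produce with WarmupMultiStepLR, using BASE_LR, GAMMA, STEPS, and the warmup fields defined a few lines below:

def lr_at(it, base_lr=0.001, gamma=0.1, steps=(30000,), warmup_iters=1000, warmup_factor=1.0 / 1000):
    # Linear warmup from base_lr * warmup_factor up to base_lr, then a decay by gamma at each step.
    if it < warmup_iters:
        alpha = it / warmup_iters
        return base_lr * (warmup_factor * (1 - alpha) + alpha)
    return base_lr * gamma ** sum(it >= s for s in steps)

# lr_at(0) -> 1e-06, lr_at(500) -> ~0.0005, lr_at(20000) -> 0.001, lr_at(35000) -> 0.0001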
-_C.SOLVER.STEPS = (30000,) -# Number of decays in WarmupStepWithFixedGammaLR schedule -_C.SOLVER.NUM_DECAYS = 3 - -_C.SOLVER.WARMUP_FACTOR = 1.0 / 1000 -_C.SOLVER.WARMUP_ITERS = 1000 -_C.SOLVER.WARMUP_METHOD = "linear" -# Whether to rescale the interval for the learning schedule after warmup -_C.SOLVER.RESCALE_INTERVAL = False - -# Save a checkpoint after every this number of iterations -_C.SOLVER.CHECKPOINT_PERIOD = 5000 - -# Number of images per batch across all machines. This is also the number -# of training images per step (i.e. per iteration). If we use 16 GPUs -# and IMS_PER_BATCH = 32, each GPU will see 2 images per batch. -# May be adjusted automatically if REFERENCE_WORLD_SIZE is set. -_C.SOLVER.IMS_PER_BATCH = 16 - -# The reference number of workers (GPUs) this config is meant to train with. -# It takes no effect when set to 0. -# With a non-zero value, it will be used by DefaultTrainer to compute a desired -# per-worker batch size, and then scale the other related configs (total batch size, -# learning rate, etc) to match the per-worker batch size. -# See documentation of `DefaultTrainer.auto_scale_workers` for details: -_C.SOLVER.REFERENCE_WORLD_SIZE = 0 - -# Detectron v1 (and previous detection code) used a 2x higher LR and 0 WD for -# biases. This is not useful (at least for recent models). You should avoid -# changing these and they exist only to reproduce Detectron v1 training if -# desired. -_C.SOLVER.BIAS_LR_FACTOR = 1.0 -_C.SOLVER.WEIGHT_DECAY_BIAS = None # None means following WEIGHT_DECAY - -# Gradient clipping -_C.SOLVER.CLIP_GRADIENTS = CN({"ENABLED": False}) -# Type of gradient clipping, currently 2 values are supported: -# - "value": the absolute values of elements of each gradients are clipped -# - "norm": the norm of the gradient for each parameter is clipped thus -# affecting all elements in the parameter -_C.SOLVER.CLIP_GRADIENTS.CLIP_TYPE = "value" -# Maximum absolute value used for clipping gradients -_C.SOLVER.CLIP_GRADIENTS.CLIP_VALUE = 1.0 -# Floating point number p for L-p norm to be used with the "norm" -# gradient clipping type; for L-inf, please specify .inf -_C.SOLVER.CLIP_GRADIENTS.NORM_TYPE = 2.0 - -# Enable automatic mixed precision for training -# Note that this does not change model's inference behavior. -# To use AMP in inference, run inference under autocast() -_C.SOLVER.AMP = CN({"ENABLED": False}) - -# ---------------------------------------------------------------------------- # -# Specific test options -# ---------------------------------------------------------------------------- # -_C.TEST = CN() -# For end-to-end tests to verify the expected accuracy. -# Each item is [task, metric, value, tolerance] -# e.g.: [['bbox', 'AP', 38.5, 0.2]] -_C.TEST.EXPECTED_RESULTS = [] -# The period (in terms of steps) to evaluate the model during training. -# Set to 0 to disable. -_C.TEST.EVAL_PERIOD = 0 -# The sigmas used to calculate keypoint OKS. See http://cocodataset.org/#keypoints-eval -# When empty, it will use the defaults in COCO. -# Otherwise it should be a list[float] with the same length as ROI_KEYPOINT_HEAD.NUM_KEYPOINTS. -_C.TEST.KEYPOINT_OKS_SIGMAS = [] -# Maximum number of detections to return per image during inference (100 is -# based on the limit established for the COCO dataset). 
-_C.TEST.DETECTIONS_PER_IMAGE = 100 - -_C.TEST.AUG = CN({"ENABLED": False}) -_C.TEST.AUG.MIN_SIZES = (400, 500, 600, 700, 800, 900, 1000, 1100, 1200) -_C.TEST.AUG.MAX_SIZE = 4000 -_C.TEST.AUG.FLIP = True - -_C.TEST.PRECISE_BN = CN({"ENABLED": False}) -_C.TEST.PRECISE_BN.NUM_ITER = 200 - -# ---------------------------------------------------------------------------- # -# Misc options -# ---------------------------------------------------------------------------- # -# Directory where output files are written -_C.OUTPUT_DIR = "./output" -# Set seed to negative to fully randomize everything. -# Set seed to positive to use a fixed seed. Note that a fixed seed increases -# reproducibility but does not guarantee fully deterministic behavior. -# Disabling all parallelism further increases reproducibility. -_C.SEED = -1 -# Benchmark different cudnn algorithms. -# If input images have very different sizes, this option will have large overhead -# for about 10k iterations. It usually hurts total time, but can benefit for certain models. -# If input images have the same or similar sizes, benchmark is often helpful. -_C.CUDNN_BENCHMARK = False -# The period (in terms of steps) for minibatch visualization at train time. -# Set to 0 to disable. -_C.VIS_PERIOD = 0 - -# global config is for quick hack purposes. -# You can set them in command line or config files, -# and access it with: -# -# from detectron2.config import global_cfg -# print(global_cfg.HACK) -# -# Do not commit any configs into it. -_C.GLOBAL = CN() -_C.GLOBAL.HACK = 1.0 diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/MViTv2/configs/cascade_mask_rcnn_mvitv2_l_in21k_lsj_50ep.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/MViTv2/configs/cascade_mask_rcnn_mvitv2_l_in21k_lsj_50ep.py deleted file mode 100644 index 38da8958e0174d378555887d72a9956f4b3f8e58..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/MViTv2/configs/cascade_mask_rcnn_mvitv2_l_in21k_lsj_50ep.py +++ /dev/null @@ -1,31 +0,0 @@ -from fvcore.common.param_scheduler import MultiStepParamScheduler - -from detectron2.config import LazyCall as L -from detectron2.solver import WarmupParamScheduler - -from .cascade_mask_rcnn_mvitv2_b_3x import model, optimizer, train -from .common.coco_loader_lsj import dataloader - - -model.backbone.bottom_up.embed_dim = 144 -model.backbone.bottom_up.depth = 48 -model.backbone.bottom_up.num_heads = 2 -model.backbone.bottom_up.last_block_indexes = (1, 7, 43, 47) -model.backbone.bottom_up.drop_path_rate = 0.5 - -train.init_checkpoint = "detectron2://ImageNetPretrained/mvitv2/MViTv2_L_in21k.pyth" - -# Schedule -# 50ep = 184375 // 2 iters * 64 images/iter / 118000 images/ep -train.max_iter = 184375 // 2 -lr_multiplier = L(WarmupParamScheduler)( - scheduler=L(MultiStepParamScheduler)( - values=[1.0, 0.1, 0.01], - milestones=[163889 // 2, 177546 // 2], - num_updates=train.max_iter, - ), - warmup_length=250 / train.max_iter, - warmup_factor=0.001, -) - -optimizer.lr = 1e-4 diff --git a/spaces/camel-ai/camel-data-explorer/apps/data_explorer/downloader.py b/spaces/camel-ai/camel-data-explorer/apps/data_explorer/downloader.py deleted file mode 100644 index 2dc81fae085ea5e906bce9e4f8b385c662ceb05c..0000000000000000000000000000000000000000 --- a/spaces/camel-ai/camel-data-explorer/apps/data_explorer/downloader.py +++ /dev/null @@ -1,56 +0,0 @@ -# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. 
=========== -# Licensed under the Apache License, Version 2.0 (the “License”); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an “AS IS” BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. =========== -import os -import urllib.request - -from huggingface_hub import hf_hub_download -from huggingface_hub.utils._errors import RepositoryNotFoundError - -REPO_ROOT = os.path.realpath( - os.path.join(os.path.dirname(os.path.abspath(__file__)), "../..")) - - -def download_data(): - - print("Downloading...") - - data_dir = os.path.join(REPO_ROOT, "datasets/") - - os.makedirs(data_dir, exist_ok=True) - - try: - hf_hub_download(repo_id="camel-ai/ai_society", repo_type="dataset", - filename="ai_society_chat.zip", local_dir=data_dir, - local_dir_use_symlinks=False) - - hf_hub_download(repo_id="camel-ai/code", repo_type="dataset", - filename="code_chat.zip", local_dir=data_dir, - local_dir_use_symlinks=False) - except RepositoryNotFoundError: - for name in ("ai_society_chat.zip", "code_chat.zip"): - data_url = ("https://storage.googleapis.com/" - f"camel-bucket/datasets/private/{name}") - file_path = os.path.join(data_dir, os.path.split(data_url)[1]) - urllib.request.urlretrieve(data_url, file_path) - - data_url = ("https://storage.googleapis.com/" - "camel-bucket/datasets/private/misalignment.zip") - file_path = os.path.join(data_dir, os.path.split(data_url)[1]) - urllib.request.urlretrieve(data_url, file_path) - - print("Download done") - - -if __name__ == "__main__": - download_data() diff --git a/spaces/candlend/vits-hoshimi/sovits/vdecoder/parallel_wavegan/layers/tf_layers.py b/spaces/candlend/vits-hoshimi/sovits/vdecoder/parallel_wavegan/layers/tf_layers.py deleted file mode 100644 index c0f46bd755c161cda2ac904fe37f3f3c6357a88d..0000000000000000000000000000000000000000 --- a/spaces/candlend/vits-hoshimi/sovits/vdecoder/parallel_wavegan/layers/tf_layers.py +++ /dev/null @@ -1,129 +0,0 @@ -# -*- coding: utf-8 -*- - -# Copyright 2020 MINH ANH (@dathudeptrai) -# MIT License (https://opensource.org/licenses/MIT) - -"""Tensorflow Layer modules complatible with pytorch.""" - -import tensorflow as tf - - -class TFReflectionPad1d(tf.keras.layers.Layer): - """Tensorflow ReflectionPad1d module.""" - - def __init__(self, padding_size): - """Initialize TFReflectionPad1d module. - - Args: - padding_size (int): Padding size. - - """ - super(TFReflectionPad1d, self).__init__() - self.padding_size = padding_size - - @tf.function - def call(self, x): - """Calculate forward propagation. - - Args: - x (Tensor): Input tensor (B, T, 1, C). - - Returns: - Tensor: Padded tensor (B, T + 2 * padding_size, 1, C). - - """ - return tf.pad(x, [[0, 0], [self.padding_size, self.padding_size], [0, 0], [0, 0]], "REFLECT") - - -class TFConvTranspose1d(tf.keras.layers.Layer): - """Tensorflow ConvTranspose1d module.""" - - def __init__(self, channels, kernel_size, stride, padding): - """Initialize TFConvTranspose1d( module. - - Args: - channels (int): Number of channels. - kernel_size (int): kernel size. - strides (int): Stride width. - padding (str): Padding type ("same" or "valid"). 
- - """ - super(TFConvTranspose1d, self).__init__() - self.conv1d_transpose = tf.keras.layers.Conv2DTranspose( - filters=channels, - kernel_size=(kernel_size, 1), - strides=(stride, 1), - padding=padding, - ) - - @tf.function - def call(self, x): - """Calculate forward propagation. - - Args: - x (Tensor): Input tensor (B, T, 1, C). - - Returns: - Tensors: Output tensor (B, T', 1, C'). - - """ - x = self.conv1d_transpose(x) - return x - - -class TFResidualStack(tf.keras.layers.Layer): - """Tensorflow ResidualStack module.""" - - def __init__(self, - kernel_size, - channels, - dilation, - bias, - nonlinear_activation, - nonlinear_activation_params, - padding, - ): - """Initialize TFResidualStack module. - - Args: - kernel_size (int): Kernel size. - channles (int): Number of channels. - dilation (int): Dilation ine. - bias (bool): Whether to add bias parameter in convolution layers. - nonlinear_activation (str): Activation function module name. - nonlinear_activation_params (dict): Hyperparameters for activation function. - padding (str): Padding type ("same" or "valid"). - - """ - super(TFResidualStack, self).__init__() - self.block = [ - getattr(tf.keras.layers, nonlinear_activation)(**nonlinear_activation_params), - TFReflectionPad1d(dilation), - tf.keras.layers.Conv2D( - filters=channels, - kernel_size=(kernel_size, 1), - dilation_rate=(dilation, 1), - use_bias=bias, - padding="valid", - ), - getattr(tf.keras.layers, nonlinear_activation)(**nonlinear_activation_params), - tf.keras.layers.Conv2D(filters=channels, kernel_size=1, use_bias=bias) - ] - self.shortcut = tf.keras.layers.Conv2D(filters=channels, kernel_size=1, use_bias=bias) - - @tf.function - def call(self, x): - """Calculate forward propagation. - - Args: - x (Tensor): Input tensor (B, T, 1, C). - - Returns: - Tensor: Output tensor (B, T, 1, C). - - """ - _x = tf.identity(x) - for i, layer in enumerate(self.block): - _x = layer(_x) - shortcut = self.shortcut(x) - return shortcut + _x diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/projects/__init__.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/projects/__init__.py deleted file mode 100644 index a68207db4ee3c2578e1042b00b3071a946b7adea..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/projects/__init__.py +++ /dev/null @@ -1,31 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import importlib -from pathlib import Path - -_PROJECTS = { - "point_rend": "PointRend", - "deeplab": "DeepLab", - "panoptic_deeplab": "Panoptic-DeepLab", -} -_PROJECT_ROOT = Path(__file__).resolve().parent.parent.parent / "projects" - -if _PROJECT_ROOT.is_dir(): - # This is true only for in-place installation (pip install -e, setup.py develop), - # where setup(package_dir=) does not work: https://github.com/pypa/setuptools/issues/230 - - class _D2ProjectsFinder(importlib.abc.MetaPathFinder): - def find_spec(self, name, path, target=None): - if not name.startswith("detectron2.projects."): - return - project_name = name.split(".")[-1] - project_dir = _PROJECTS.get(project_name) - if not project_dir: - return - target_file = _PROJECT_ROOT / f"{project_dir}/{project_name}/__init__.py" - if not target_file.is_file(): - return - return importlib.util.spec_from_file_location(name, target_file) - - import sys - - sys.meta_path.append(_D2ProjectsFinder()) diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/Panoptic-DeepLab/train_net.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/Panoptic-DeepLab/train_net.py deleted file mode 100644 index 780764f22fe8f4d52f218748dc64cf6c609e87b9..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/Panoptic-DeepLab/train_net.py +++ /dev/null @@ -1,171 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. - -""" -Panoptic-DeepLab Training Script. -This script is a simplified version of the training script in detectron2/tools. -""" - -import os -import torch - -import detectron2.data.transforms as T -from detectron2.checkpoint import DetectionCheckpointer -from detectron2.config import get_cfg -from detectron2.data import MetadataCatalog, build_detection_train_loader -from detectron2.engine import DefaultTrainer, default_argument_parser, default_setup, launch -from detectron2.evaluation import ( - CityscapesInstanceEvaluator, - CityscapesSemSegEvaluator, - COCOEvaluator, - COCOPanopticEvaluator, - DatasetEvaluators, -) -from detectron2.projects.deeplab import build_lr_scheduler -from detectron2.projects.panoptic_deeplab import ( - PanopticDeeplabDatasetMapper, - add_panoptic_deeplab_config, -) -from detectron2.solver import get_default_optimizer_params -from detectron2.solver.build import maybe_add_gradient_clipping - - -def build_sem_seg_train_aug(cfg): - augs = [ - T.ResizeShortestEdge( - cfg.INPUT.MIN_SIZE_TRAIN, cfg.INPUT.MAX_SIZE_TRAIN, cfg.INPUT.MIN_SIZE_TRAIN_SAMPLING - ) - ] - if cfg.INPUT.CROP.ENABLED: - augs.append(T.RandomCrop(cfg.INPUT.CROP.TYPE, cfg.INPUT.CROP.SIZE)) - augs.append(T.RandomFlip()) - return augs - - -class Trainer(DefaultTrainer): - """ - We use the "DefaultTrainer" which contains a number pre-defined logic for - standard training workflow. They may not work for you, especially if you - are working on a new research project. In that case you can use the cleaner - "SimpleTrainer", or write your own training loop. - """ - - @classmethod - def build_evaluator(cls, cfg, dataset_name, output_folder=None): - """ - Create evaluator(s) for a given dataset. - This uses the special metadata "evaluator_type" associated with each builtin dataset. - For your own dataset, you can simply create an evaluator manually in your - script and do not have to worry about the hacky if-else logic here. 
- """ - if cfg.MODEL.PANOPTIC_DEEPLAB.BENCHMARK_NETWORK_SPEED: - return None - if output_folder is None: - output_folder = os.path.join(cfg.OUTPUT_DIR, "inference") - evaluator_list = [] - evaluator_type = MetadataCatalog.get(dataset_name).evaluator_type - if evaluator_type in ["cityscapes_panoptic_seg", "coco_panoptic_seg"]: - evaluator_list.append(COCOPanopticEvaluator(dataset_name, output_folder)) - if evaluator_type == "cityscapes_panoptic_seg": - evaluator_list.append(CityscapesSemSegEvaluator(dataset_name)) - evaluator_list.append(CityscapesInstanceEvaluator(dataset_name)) - if evaluator_type == "coco_panoptic_seg": - # `thing_classes` in COCO panoptic metadata includes both thing and - # stuff classes for visualization. COCOEvaluator requires metadata - # which only contains thing classes, thus we map the name of - # panoptic datasets to their corresponding instance datasets. - dataset_name_mapper = { - "coco_2017_val_panoptic": "coco_2017_val", - "coco_2017_val_100_panoptic": "coco_2017_val_100", - } - evaluator_list.append( - COCOEvaluator(dataset_name_mapper[dataset_name], output_dir=output_folder) - ) - if len(evaluator_list) == 0: - raise NotImplementedError( - "no Evaluator for the dataset {} with the type {}".format( - dataset_name, evaluator_type - ) - ) - elif len(evaluator_list) == 1: - return evaluator_list[0] - return DatasetEvaluators(evaluator_list) - - @classmethod - def build_train_loader(cls, cfg): - mapper = PanopticDeeplabDatasetMapper(cfg, augmentations=build_sem_seg_train_aug(cfg)) - return build_detection_train_loader(cfg, mapper=mapper) - - @classmethod - def build_lr_scheduler(cls, cfg, optimizer): - """ - It now calls :func:`detectron2.solver.build_lr_scheduler`. - Overwrite it if you'd like a different scheduler. - """ - return build_lr_scheduler(cfg, optimizer) - - @classmethod - def build_optimizer(cls, cfg, model): - """ - Build an optimizer from config. - """ - params = get_default_optimizer_params( - model, - weight_decay=cfg.SOLVER.WEIGHT_DECAY, - weight_decay_norm=cfg.SOLVER.WEIGHT_DECAY_NORM, - ) - - optimizer_type = cfg.SOLVER.OPTIMIZER - if optimizer_type == "SGD": - return maybe_add_gradient_clipping(cfg, torch.optim.SGD)( - params, - cfg.SOLVER.BASE_LR, - momentum=cfg.SOLVER.MOMENTUM, - nesterov=cfg.SOLVER.NESTEROV, - ) - elif optimizer_type == "ADAM": - return maybe_add_gradient_clipping(cfg, torch.optim.Adam)(params, cfg.SOLVER.BASE_LR) - else: - raise NotImplementedError(f"no optimizer type {optimizer_type}") - - -def setup(args): - """ - Create configs and perform basic setups. 
- """ - cfg = get_cfg() - add_panoptic_deeplab_config(cfg) - cfg.merge_from_file(args.config_file) - cfg.merge_from_list(args.opts) - cfg.freeze() - default_setup(cfg, args) - return cfg - - -def main(args): - cfg = setup(args) - - if args.eval_only: - model = Trainer.build_model(cfg) - DetectionCheckpointer(model, save_dir=cfg.OUTPUT_DIR).resume_or_load( - cfg.MODEL.WEIGHTS, resume=args.resume - ) - res = Trainer.test(cfg, model) - return res - - trainer = Trainer(cfg) - trainer.resume_or_load(resume=args.resume) - return trainer.train() - - -if __name__ == "__main__": - args = default_argument_parser().parse_args() - print("Command Line Args:", args) - launch( - main, - args.num_gpus, - num_machines=args.num_machines, - machine_rank=args.machine_rank, - dist_url=args.dist_url, - args=(args,), - ) diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/PointSup/point_sup/dataset_mapper.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/PointSup/point_sup/dataset_mapper.py deleted file mode 100644 index 52b9bd4ce19d51e07f98aa9adf36c41f6ddc22af..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/PointSup/point_sup/dataset_mapper.py +++ /dev/null @@ -1,125 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import copy -import logging -import numpy as np -from typing import List, Union -import torch - -import detectron2.data.detection_utils as utils -import detectron2.data.transforms as T -from detectron2.config import configurable - -from .detection_utils import annotations_to_instances, transform_instance_annotations - -__all__ = [ - "PointSupDatasetMapper", -] - - -class PointSupDatasetMapper: - """ - The callable currently does the following: - 1. Read the image from "file_name" - 2. Applies transforms to the image and annotations - 3. Prepare data and annotations to Tensor and :class:`Instances` - """ - - @configurable - def __init__( - self, - is_train: bool, - *, - augmentations: List[Union[T.Augmentation, T.Transform]], - image_format: str, - # Extra data augmentation for point supervision - sample_points: int = 0, - ): - """ - NOTE: this interface is experimental. - - Args: - is_train: whether it's used in training or inference - augmentations: a list of augmentations or deterministic transforms to apply - image_format: an image format supported by :func:`detection_utils.read_image`. - sample_points: subsample points at each iteration - """ - # fmt: off - self.is_train = is_train - self.augmentations = T.AugmentationList(augmentations) - self.image_format = image_format - self.sample_points = sample_points - # fmt: on - logger = logging.getLogger(__name__) - mode = "training" if is_train else "inference" - logger.info(f"[DatasetMapper] Augmentations used in {mode}: {augmentations}") - logger.info(f"Point Augmentations used in {mode}: sample {sample_points} points") - - @classmethod - def from_config(cls, cfg, is_train: bool = True): - augs = utils.build_augmentation(cfg, is_train) - if cfg.INPUT.CROP.ENABLED and is_train: - raise ValueError("Crop augmentation not supported to point supervision.") - - ret = { - "is_train": is_train, - "augmentations": augs, - "image_format": cfg.INPUT.FORMAT, - "sample_points": cfg.INPUT.SAMPLE_POINTS, - } - - return ret - - def __call__(self, dataset_dict): - """ - Args: - dataset_dict (dict): Metadata of one image, in Detectron2 Dataset format. 
- Returns: - dict: a format that builtin models in detectron2 accept - """ - dataset_dict = copy.deepcopy(dataset_dict) # it will be modified by code below - image = utils.read_image(dataset_dict["file_name"], format=self.image_format) - utils.check_image_size(dataset_dict, image) - - aug_input = T.AugInput(image) - transforms = self.augmentations(aug_input) - image = aug_input.image - - image_shape = image.shape[:2] # h, w - # Pytorch's dataloader is efficient on torch.Tensor due to shared-memory, - # but not efficient on large generic data structures due to the use of pickle & mp.Queue. - # Therefore it's important to use torch.Tensor. - dataset_dict["image"] = torch.as_tensor(np.ascontiguousarray(image.transpose(2, 0, 1))) - - if not self.is_train: - dataset_dict.pop("annotations", None) - return dataset_dict - - if "annotations" in dataset_dict: - # Maps points from the closed interval [0, image_size - 1] on discrete - # image coordinates to the half-open interval [x1, x2) on continuous image - # coordinates. We use the continuous-discrete conversion from Heckbert - # 1990 ("What is the coordinate of a pixel?"): d = floor(c) and c = d + 0.5, - # where d is a discrete coordinate and c is a continuous coordinate. - for ann in dataset_dict["annotations"]: - point_coords_wrt_image = np.array(ann["point_coords"]).astype(np.float) - point_coords_wrt_image = point_coords_wrt_image + 0.5 - ann["point_coords"] = point_coords_wrt_image - - annos = [ - # also need to transform point coordinates - transform_instance_annotations( - obj, - transforms, - image_shape, - ) - for obj in dataset_dict.pop("annotations") - if obj.get("iscrowd", 0) == 0 - ] - instances = annotations_to_instances( - annos, - image_shape, - sample_points=self.sample_points, - ) - - dataset_dict["instances"] = utils.filter_empty_instances(instances) - return dataset_dict diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/demo/ncnn/android/app/src/main/java/com/megvii/yoloXncnn/yoloXncnn.java b/spaces/chendl/compositional_test/multimodal/YOLOX/demo/ncnn/android/app/src/main/java/com/megvii/yoloXncnn/yoloXncnn.java deleted file mode 100644 index 212e1c2b881b89c69f27211160df0d2c61a098d8..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/YOLOX/demo/ncnn/android/app/src/main/java/com/megvii/yoloXncnn/yoloXncnn.java +++ /dev/null @@ -1,27 +0,0 @@ -// Copyright (C) Megvii, Inc. and its affiliates. All rights reserved. 
- -package com.megvii.yoloXncnn; - -import android.content.res.AssetManager; -import android.graphics.Bitmap; - -public class YOLOXncnn -{ - public native boolean Init(AssetManager mgr); - - public class Obj - { - public float x; - public float y; - public float w; - public float h; - public String label; - public float prob; - } - - public native Obj[] Detect(Bitmap bitmap, boolean use_gpu); - - static { - System.loadLibrary("yoloXncnn"); - } -} diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/quantization-qdqbert/ort-infer-benchmark.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/quantization-qdqbert/ort-infer-benchmark.py deleted file mode 100644 index bb0436c125800bb4be99d1d3fc63486c7b6e4ea4..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/quantization-qdqbert/ort-infer-benchmark.py +++ /dev/null @@ -1,50 +0,0 @@ -import os -import time - -import numpy as np -import onnxruntime as ort - - -os.environ["ORT_TENSORRT_INT8_ENABLE"] = "1" -os.environ["ORT_TENSORRT_INT8_USE_NATIVE_CALIBRATION_TABLE"] = "0" -os.environ["ORT_TENSORRT_ENGINE_CACHE_ENABLE"] = "1" - -sess_opt = ort.SessionOptions() -sess_opt.graph_optimization_level = ort.GraphOptimizationLevel.ORT_DISABLE_ALL -print("Create inference session...") -execution_provider = ["TensorrtExecutionProvider", "CUDAExecutionProvider"] -sess = ort.InferenceSession("model.onnx", sess_options=sess_opt, providers=execution_provider) -run_opt = ort.RunOptions() - -sequence = 128 -batch = 1 -input_ids = np.ones((batch, sequence), dtype=np.int64) -attention_mask = np.ones((batch, sequence), dtype=np.int64) -token_type_ids = np.ones((batch, sequence), dtype=np.int64) - -print("Warm up phase...") -sess.run( - None, - { - sess.get_inputs()[0].name: input_ids, - sess.get_inputs()[1].name: attention_mask, - sess.get_inputs()[2].name: token_type_ids, - }, - run_options=run_opt, -) - -print("Start inference...") -start_time = time.time() -max_iters = 2000 -predict = {} -for iter in range(max_iters): - predict = sess.run( - None, - { - sess.get_inputs()[0].name: input_ids, - sess.get_inputs()[1].name: attention_mask, - sess.get_inputs()[2].name: token_type_ids, - }, - run_options=run_opt, - ) -print("Average Inference Time = {:.3f} ms".format((time.time() - start_time) * 1000 / max_iters)) diff --git a/spaces/chongjie/MCC_slim/util/co3d_utils.py b/spaces/chongjie/MCC_slim/util/co3d_utils.py deleted file mode 100644 index effde61df17cda0df2e28e63521a859ff4dabca1..0000000000000000000000000000000000000000 --- a/spaces/chongjie/MCC_slim/util/co3d_utils.py +++ /dev/null @@ -1,133 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import glob -from omegaconf import DictConfig -from typing import Optional - -import torch - -from pytorch3d.implicitron.dataset.dataset_map_provider import DatasetMap -from pytorch3d.implicitron.dataset.json_index_dataset_map_provider_v2 import ( - JsonIndexDatasetMapProviderV2 -) -from pytorch3d.implicitron.tools.config import expand_args_fields -from pytorch3d.io import IO -from pytorch3d.renderer import ( - NDCMultinomialRaysampler, - ray_bundle_to_ray_points, -) -from pytorch3d.renderer.cameras import CamerasBase -from pytorch3d.structures import Pointclouds - - -HOLDOUT_CATEGORIES = set([ - 'apple', - 'baseballglove', - 'cup', - 'ball', - 'toyplane', - 'handbag', - 'book', - 'carrot', - 'suitcase', - 'bowl', -]) - -def get_dataset_map( - dataset_root: str, - category: str, - subset_name: str, -) -> DatasetMap: - """ - Obtain the dataset map that contains the train/val/test dataset objects. - """ - expand_args_fields(JsonIndexDatasetMapProviderV2) - dataset_map_provider = JsonIndexDatasetMapProviderV2( - category=category, - subset_name=subset_name, - dataset_root=dataset_root, - test_on_train=False, - only_test_set=False, - load_eval_batches=True, - dataset_JsonIndexDataset_args=DictConfig({"remove_empty_masks": False, "load_point_clouds": False}), - ) - return dataset_map_provider.get_dataset_map() - - -def _load_pointcloud(pcl_path, max_points): - pcl = IO().load_pointcloud(pcl_path) - if max_points > 0: - pcl = pcl.subsample(max_points) - - return pcl - - -def get_all_dataset_maps(co3d_path, holdout_categories): - all_categories = [c.split('/')[-1] for c in list(glob.glob(co3d_path + '/*')) if not c.endswith('.json')] - all_categories = sorted(all_categories, key=lambda x: hash(x)) - - # Obtain the CO3Dv2 dataset map - train_dataset_maps = {} - val_dataset_maps = {} - for category in all_categories: - - print(f'Loading dataset map ({category})') - dataset_map = { - 'train': torch.load(f'dataset_cache/{category}_train.pt'), - 'val': torch.load(f'dataset_cache/{category}_val.pt') - } - if not holdout_categories or category not in HOLDOUT_CATEGORIES: - train_dataset_maps[category] = dataset_map['train'] - if not holdout_categories or category in HOLDOUT_CATEGORIES: - val_dataset_maps[category] = dataset_map['val'] - - print('Loaded', len(train_dataset_maps), 'categores for train') - print('Loaded', len(val_dataset_maps), 'categores for val') - return train_dataset_maps, val_dataset_maps - - -def get_rgbd_points( - imh, imw, - camera: CamerasBase, - depth_map: torch.Tensor, - mask: Optional[torch.Tensor] = None, - mask_thr: float = 0.5, -) -> Pointclouds: - """ - Given a batch of images, depths, masks and cameras, generate a colored - point cloud by unprojecting depth maps to the and coloring with the source - pixel colors. 
- """ - depth_map = torch.nn.functional.interpolate( - depth_map, - size=[imh, imw], - mode="bilinear", - align_corners=False, - ) - # convert the depth maps to point clouds using the grid ray sampler - pts_3d = ray_bundle_to_ray_points( - NDCMultinomialRaysampler( - image_width=imw, - image_height=imh, - n_pts_per_ray=1, - min_depth=1.0, - max_depth=1.0, - )(camera)._replace(lengths=depth_map[:, 0, ..., None]) - ).squeeze(3)[None] - - pts_mask = depth_map > 0.0 - if mask is not None: - mask = torch.nn.functional.interpolate( - mask, - size=[imh, imw], - mode="bilinear", - align_corners=False, - ) - pts_mask *= mask > mask_thr - pts_3d[~pts_mask] = float('inf') - return pts_3d.squeeze(0).squeeze(0) - diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/faiss/contrib/datasets.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/faiss/contrib/datasets.py deleted file mode 100644 index c06cd9bb26e61f8cadad885908bc476fe45d16c9..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/faiss/contrib/datasets.py +++ /dev/null @@ -1,348 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import os -import numpy as np -import faiss - -from .vecs_io import fvecs_read, ivecs_read, bvecs_mmap, fvecs_mmap -from .exhaustive_search import knn - -class Dataset: - """ Generic abstract class for a test dataset """ - - def __init__(self): - """ the constructor should set the following fields: """ - self.d = -1 - self.metric = 'L2' # or IP - self.nq = -1 - self.nb = -1 - self.nt = -1 - - def get_queries(self): - """ return the queries as a (nq, d) array """ - raise NotImplementedError() - - def get_train(self, maxtrain=None): - """ return the queries as a (nt, d) array """ - raise NotImplementedError() - - def get_database(self): - """ return the queries as a (nb, d) array """ - raise NotImplementedError() - - def database_iterator(self, bs=128, split=(1, 0)): - """returns an iterator on database vectors. - bs is the number of vectors per batch - split = (nsplit, rank) means the dataset is split in nsplit - shards and we want shard number rank - The default implementation just iterates over the full matrix - returned by get_dataset. 
- """ - xb = self.get_database() - nsplit, rank = split - i0, i1 = self.nb * rank // nsplit, self.nb * (rank + 1) // nsplit - for j0 in range(i0, i1, bs): - yield xb[j0: min(j0 + bs, i1)] - - def get_groundtruth(self, k=None): - """ return the ground truth for k-nearest neighbor search """ - raise NotImplementedError() - - def get_groundtruth_range(self, thresh=None): - """ return the ground truth for range search """ - raise NotImplementedError() - - def __str__(self): - return (f"dataset in dimension {self.d}, with metric {self.metric}, " - f"size: Q {self.nq} B {self.nb} T {self.nt}") - - def check_sizes(self): - """ runs the previous and checks the sizes of the matrices """ - assert self.get_queries().shape == (self.nq, self.d) - if self.nt > 0: - xt = self.get_train(maxtrain=123) - assert xt.shape == (123, self.d), "shape=%s" % (xt.shape, ) - assert self.get_database().shape == (self.nb, self.d) - assert self.get_groundtruth(k=13).shape == (self.nq, 13) - - -class SyntheticDataset(Dataset): - """A dataset that is not completely random but still challenging to - index - """ - - def __init__(self, d, nt, nb, nq, metric='L2', seed=1338): - Dataset.__init__(self) - self.d, self.nt, self.nb, self.nq = d, nt, nb, nq - d1 = 10 # intrinsic dimension (more or less) - n = nb + nt + nq - rs = np.random.RandomState(seed) - x = rs.normal(size=(n, d1)) - x = np.dot(x, rs.rand(d1, d)) - # now we have a d1-dim ellipsoid in d-dimensional space - # higher factor (>4) -> higher frequency -> less linear - x = x * (rs.rand(d) * 4 + 0.1) - x = np.sin(x) - x = x.astype('float32') - self.metric = metric - self.xt = x[:nt] - self.xb = x[nt:nt + nb] - self.xq = x[nt + nb:] - - def get_queries(self): - return self.xq - - def get_train(self, maxtrain=None): - maxtrain = maxtrain if maxtrain is not None else self.nt - return self.xt[:maxtrain] - - def get_database(self): - return self.xb - - def get_groundtruth(self, k=100): - return knn( - self.xq, self.xb, k, - faiss.METRIC_L2 if self.metric == 'L2' else faiss.METRIC_INNER_PRODUCT - )[1] - - -############################################################################ -# The following datasets are a few standard open-source datasets -# they should be stored in a directory, and we start by guessing where -# that directory is -############################################################################ - - -for dataset_basedir in ( - '/datasets01/simsearch/041218/', - '/mnt/vol/gfsai-flash3-east/ai-group/datasets/simsearch/'): - if os.path.exists(dataset_basedir): - break -else: - # users can link their data directory to `./data` - dataset_basedir = 'data/' - - -class DatasetSIFT1M(Dataset): - """ - The original dataset is available at: http://corpus-texmex.irisa.fr/ - (ANN_SIFT1M) - """ - - def __init__(self): - Dataset.__init__(self) - self.d, self.nt, self.nb, self.nq = 128, 100000, 1000000, 10000 - self.basedir = dataset_basedir + 'sift1M/' - - def get_queries(self): - return fvecs_read(self.basedir + "sift_query.fvecs") - - def get_train(self, maxtrain=None): - maxtrain = maxtrain if maxtrain is not None else self.nt - return fvecs_read(self.basedir + "sift_learn.fvecs")[:maxtrain] - - def get_database(self): - return fvecs_read(self.basedir + "sift_base.fvecs") - - def get_groundtruth(self, k=None): - gt = ivecs_read(self.basedir + "sift_groundtruth.ivecs") - if k is not None: - assert k <= 100 - gt = gt[:, :k] - return gt - - -def sanitize(x): - return np.ascontiguousarray(x, dtype='float32') - - -class DatasetBigANN(Dataset): - """ - The original dataset 
is available at: http://corpus-texmex.irisa.fr/ - (ANN_SIFT1B) - """ - - def __init__(self, nb_M=1000): - Dataset.__init__(self) - assert nb_M in (1, 2, 5, 10, 20, 50, 100, 200, 500, 1000) - self.nb_M = nb_M - nb = nb_M * 10**6 - self.d, self.nt, self.nb, self.nq = 128, 10**8, nb, 10000 - self.basedir = dataset_basedir + 'bigann/' - - def get_queries(self): - return sanitize(bvecs_mmap(self.basedir + 'bigann_query.bvecs')[:]) - - def get_train(self, maxtrain=None): - maxtrain = maxtrain if maxtrain is not None else self.nt - return sanitize(bvecs_mmap(self.basedir + 'bigann_learn.bvecs')[:maxtrain]) - - def get_groundtruth(self, k=None): - gt = ivecs_read(self.basedir + 'gnd/idx_%dM.ivecs' % self.nb_M) - if k is not None: - assert k <= 100 - gt = gt[:, :k] - return gt - - def get_database(self): - assert self.nb_M < 100, "dataset too large, use iterator" - return sanitize(bvecs_mmap(self.basedir + 'bigann_base.bvecs')[:self.nb]) - - def database_iterator(self, bs=128, split=(1, 0)): - xb = bvecs_mmap(self.basedir + 'bigann_base.bvecs') - nsplit, rank = split - i0, i1 = self.nb * rank // nsplit, self.nb * (rank + 1) // nsplit - for j0 in range(i0, i1, bs): - yield sanitize(xb[j0: min(j0 + bs, i1)]) - - -class DatasetDeep1B(Dataset): - """ - See - https://github.com/facebookresearch/faiss/tree/main/benchs#getting-deep1b - on how to get the data - """ - - def __init__(self, nb=10**9): - Dataset.__init__(self) - nb_to_name = { - 10**5: '100k', - 10**6: '1M', - 10**7: '10M', - 10**8: '100M', - 10**9: '1B' - } - assert nb in nb_to_name - self.d, self.nt, self.nb, self.nq = 96, 358480000, nb, 10000 - self.basedir = dataset_basedir + 'deep1b/' - self.gt_fname = "%sdeep%s_groundtruth.ivecs" % ( - self.basedir, nb_to_name[self.nb]) - - def get_queries(self): - return sanitize(fvecs_read(self.basedir + "deep1B_queries.fvecs")) - - def get_train(self, maxtrain=None): - maxtrain = maxtrain if maxtrain is not None else self.nt - return sanitize(fvecs_mmap(self.basedir + "learn.fvecs")[:maxtrain]) - - def get_groundtruth(self, k=None): - gt = ivecs_read(self.gt_fname) - if k is not None: - assert k <= 100 - gt = gt[:, :k] - return gt - - def get_database(self): - assert self.nb <= 10**8, "dataset too large, use iterator" - return sanitize(fvecs_mmap(self.basedir + "base.fvecs")[:self.nb]) - - def database_iterator(self, bs=128, split=(1, 0)): - xb = fvecs_mmap(self.basedir + "base.fvecs") - nsplit, rank = split - i0, i1 = self.nb * rank // nsplit, self.nb * (rank + 1) // nsplit - for j0 in range(i0, i1, bs): - yield sanitize(xb[j0: min(j0 + bs, i1)]) - - -class DatasetGlove(Dataset): - """ - Data from http://ann-benchmarks.com/glove-100-angular.hdf5 - """ - - def __init__(self, loc=None, download=False): - import h5py - assert not download, "not implemented" - if not loc: - loc = dataset_basedir + 'glove/glove-100-angular.hdf5' - self.glove_h5py = h5py.File(loc, 'r') - # IP and L2 are equivalent in this case, but it is traditionally seen as an IP dataset - self.metric = 'IP' - self.d, self.nt = 100, 0 - self.nb = self.glove_h5py['train'].shape[0] - self.nq = self.glove_h5py['test'].shape[0] - - def get_queries(self): - xq = np.array(self.glove_h5py['test']) - faiss.normalize_L2(xq) - return xq - - def get_database(self): - xb = np.array(self.glove_h5py['train']) - faiss.normalize_L2(xb) - return xb - - def get_groundtruth(self, k=None): - gt = self.glove_h5py['neighbors'] - if k is not None: - assert k <= 100 - gt = gt[:, :k] - return gt - - -class DatasetMusic100(Dataset): - """ - get dataset from - 
https://github.com/stanis-morozov/ip-nsw#dataset - """ - - def __init__(self): - Dataset.__init__(self) - self.d, self.nt, self.nb, self.nq = 100, 0, 10**6, 10000 - self.metric = 'IP' - self.basedir = dataset_basedir + 'music-100/' - - def get_queries(self): - xq = np.fromfile(self.basedir + 'query_music100.bin', dtype='float32') - xq = xq.reshape(-1, 100) - return xq - - def get_database(self): - xb = np.fromfile(self.basedir + 'database_music100.bin', dtype='float32') - xb = xb.reshape(-1, 100) - return xb - - def get_groundtruth(self, k=None): - gt = np.load(self.basedir + 'gt.npy') - if k is not None: - assert k <= 100 - gt = gt[:, :k] - return gt - - - -def dataset_from_name(dataset='deep1M', download=False): - """ converts a string describing a dataset to a Dataset object - Supports sift1M, bigann1M..bigann1B, deep1M..deep1B, music-100 and glove - """ - - if dataset == 'sift1M': - return DatasetSIFT1M() - - elif dataset.startswith('bigann'): - dbsize = 1000 if dataset == "bigann1B" else int(dataset[6:-1]) - return DatasetBigANN(nb_M=dbsize) - - elif dataset.startswith("deep"): - - szsuf = dataset[4:] - if szsuf[-1] == 'M': - dbsize = 10 ** 6 * int(szsuf[:-1]) - elif szsuf == '1B': - dbsize = 10 ** 9 - elif szsuf[-1] == 'k': - dbsize = 1000 * int(szsuf[:-1]) - else: - assert False, "did not recognize suffix " + szsuf - return DatasetDeep1B(nb=dbsize) - - elif dataset == "music-100": - return DatasetMusic100() - - elif dataset == "glove": - return DatasetGlove(download=download) - - else: - raise RuntimeError("unknown dataset " + dataset) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/plot.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/plot.py deleted file mode 100644 index 203d933afb8b5c13720b57072254c0509d17b381..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/components/plot.py +++ /dev/null @@ -1,159 +0,0 @@ -"""gr.Plot() component.""" - -from __future__ import annotations - -import json -from types import ModuleType -from typing import Any, Callable, Literal - -import altair as alt -import pandas as pd -from gradio_client.documentation import document, set_documentation_group -from gradio_client.serializing import JSONSerializable - -from gradio import processing_utils -from gradio.components.base import IOComponent, _Keywords -from gradio.deprecation import warn_style_method_deprecation -from gradio.events import Changeable, Clearable - -set_documentation_group("component") - - -@document() -class Plot(Changeable, Clearable, IOComponent, JSONSerializable): - """ - Used to display various kinds of plots (matplotlib, plotly, or bokeh are supported) - Preprocessing: this component does *not* accept input. 
- Postprocessing: expects either a {matplotlib.figure.Figure}, a {plotly.graph_objects._figure.Figure}, or a {dict} corresponding to a bokeh plot (json_item format) - - Demos: altair_plot, outbreak_forecast, blocks_kinematics, stock_forecast, map_airbnb - Guides: plot-component-for-maps - """ - - def __init__( - self, - value: Callable | None | pd.DataFrame = None, - *, - label: str | None = None, - every: float | None = None, - show_label: bool = True, - container: bool = True, - scale: int | None = None, - min_width: int = 160, - visible: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - **kwargs, - ): - """ - Parameters: - value: Optionally, supply a default plot object to display, must be a matplotlib, plotly, altair, or bokeh figure, or a callable. If callable, the function will be called whenever the app loads to set the initial value of the component. - label: component name in interface. - every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute. - show_label: if True, will display label. - container: If True, will place the component in a container - providing some extra padding around the border. - scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer. - min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first. - visible: If False, component will be hidden. - elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles. - elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles. 
- """ - IOComponent.__init__( - self, - label=label, - every=every, - show_label=show_label, - container=container, - scale=scale, - min_width=min_width, - visible=visible, - elem_id=elem_id, - elem_classes=elem_classes, - value=value, - **kwargs, - ) - - def get_config(self): - try: - import bokeh # type: ignore - - bokeh_version = bokeh.__version__ - except ImportError: - bokeh_version = None - return { - "value": self.value, - "bokeh_version": bokeh_version, - **IOComponent.get_config(self), - } - - @staticmethod - def update( - value: Any | Literal[_Keywords.NO_VALUE] | None = _Keywords.NO_VALUE, - label: str | None = None, - show_label: bool | None = None, - container: bool | None = None, - scale: int | None = None, - min_width: int | None = None, - visible: bool | None = None, - ): - updated_config = { - "label": label, - "show_label": show_label, - "container": container, - "scale": scale, - "min_width": min_width, - "visible": visible, - "value": value, - "__type__": "update", - } - return updated_config - - def postprocess(self, y) -> dict[str, str] | None: - """ - Parameters: - y: plot data - Returns: - plot type mapped to plot base64 data - """ - import matplotlib.figure - - if y is None: - return None - if isinstance(y, (ModuleType, matplotlib.figure.Figure)): # type: ignore - dtype = "matplotlib" - out_y = processing_utils.encode_plot_to_base64(y) - elif "bokeh" in y.__module__: - dtype = "bokeh" - from bokeh.embed import json_item # type: ignore - - out_y = json.dumps(json_item(y)) - else: - is_altair = "altair" in y.__module__ - dtype = "altair" if is_altair else "plotly" - out_y = y.to_json() - return {"type": dtype, "plot": out_y} - - def style(self, container: bool | None = None): - """ - This method is deprecated. Please set these arguments in the constructor instead. - """ - warn_style_method_deprecation() - if container is not None: - self.container = container - return self - - -class AltairPlot: - @staticmethod - def create_legend(position, title): - if position == "none": - legend = None - else: - position = {"orient": position} if position else {} - legend = {"title": title, **position} - - return legend - - @staticmethod - def create_scale(limit): - return alt.Scale(domain=limit) if limit else alt.Undefined diff --git a/spaces/cihyFjudo/fairness-paper-search/Gumroad Lecture My Work Castle Steps Learn How to Create Stunning Environments.md b/spaces/cihyFjudo/fairness-paper-search/Gumroad Lecture My Work Castle Steps Learn How to Create Stunning Environments.md deleted file mode 100644 index 5b759a0e7fa7728137ba5f095b9db4ea68e005d0..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Gumroad Lecture My Work Castle Steps Learn How to Create Stunning Environments.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Gumroad – Lecture – My Work Castle Steps


      Download File ✶✶✶ https://tinurli.com/2uwjch



      -
      - aaccfb2cb3
      -
      -
      -

      diff --git a/spaces/cihyFjudo/fairness-paper-search/Matlab R2015b Installation Key Crack Discover How to Install and Use Matlab R2015b for Free with a Cracked Key.md b/spaces/cihyFjudo/fairness-paper-search/Matlab R2015b Installation Key Crack Discover How to Install and Use Matlab R2015b for Free with a Cracked Key.md deleted file mode 100644 index eca08850b2accd799972fb3bb6a86c018609a3e0..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Matlab R2015b Installation Key Crack Discover How to Install and Use Matlab R2015b for Free with a Cracked Key.md +++ /dev/null @@ -1,7 +0,0 @@ -
      -

It creates databases in the form of special codes, and users can easily manipulate large data sets according to their storage capacity. Matlab R2016b Crack includes features to index and synchronise timetable data in tabular format. Users can easily define functions inside a Matlab script, which improves readability and code reuse. It supports Java, C, and C++ code. Matlab R2016b Crack helps to run large algorithms on big data and to create numerical charts in less time, and users can embed and analyse code after generating it. Why do users like the Matlab R2016b License File? The Matlab R2016b License Key helps to perform numerical tasks and solve business problems. It includes parallel computing and image processing toolboxes, and its statistics and machine learning toolbox helps to solve advanced algorithm problems. These toolboxes cover all mathematical and code management problems, and a code verification and validation tool is included. Which system requirements are essential for the Matlab R2016b Serial Key? Matlab R2016b for Linux works on all old and new Linux operating systems, and the download file size is smaller than that of other photo editing software. Special Screenshots: Mathworks Matlab R2016b Crack + Activation Key Full Version Download From Links Given Below.

      -


Download, installation and activation of Matlab R2015b (32 and 64 bit), step by step
You may use it for algorithm development and data analysis, and there are some updates which were not in previous releases. Serial Key and Crack Full Version. Additional tags: mathworks matlab r2013a matlab r2013a activation code matlab r2013a code matlab r2013a crack matlab r2013a download matlab r2013a free matlab r2013a free download matlab r2013a free full version matlab r2013a full matlab r2013a full download matlab r2013a key matlab r2013a keygen matlab r2013a keymaker matlab r2013a license matlab r2013a patch matlab r2013a serial key matlab r2013a serial number matlab r2013a torrent matlab r2013a torrent download. Browse to the location of the license file you downloaded earlier and click on Open. Its advanced scientific problem-solving library will resolve most issues without expert help. Matlab R2015b Crack is an all-in-one software that solves mathematical, numerical, commercial, and algorithmic problems. Why do users like the Matlab R2015b License File? MathWorks is the leading developer of mathematical computing software, and people use these tools to accelerate the pace of their work, discovery, and innovation.

      -

      Matlab R2015b Installation Key Crack


Download https://tinurli.com/2uwhYl



      -

Create a new desktop profile with the file name matlab2015b.desktop: $ sudo nano /usr/share/applications/matlab2015b.desktop, which reads as follows (if Matlab is installed elsewhere, change /usr/local/ to the actual installation directory):
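The original article does not preserve the contents of the launcher file, so the entry below is only a plausible sketch; the Exec path assumes the default Linux install location /usr/local/MATLAB/R2015b, which is not stated in the text and should be replaced with the real installation directory.

```
[Desktop Entry]
# Hypothetical launcher entry for MATLAB R2015b; adjust Exec to the actual install path.
Version=1.0
Type=Application
Name=MATLAB R2015b
Comment=Start MATLAB R2015b
Exec=/usr/local/MATLAB/R2015b/bin/matlab -desktop
Terminal=false
Categories=Development;Science;
```

After saving the file, the launcher should appear in the desktop application menu once the menu cache is refreshed or the session is restarted.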

      aaccfb2cb3
      -
      -
      \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Tere Naam [2003 FLAC] Listen to the Hit Songs of Salman Khan and Bhumika Chawla.md b/spaces/cihyFjudo/fairness-paper-search/Tere Naam [2003 FLAC] Listen to the Hit Songs of Salman Khan and Bhumika Chawla.md deleted file mode 100644 index 8e6e430140c9c194812b12d0e20259303e41c79a..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Tere Naam [2003 FLAC] Listen to the Hit Songs of Salman Khan and Bhumika Chawla.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Tere Naam [2003 – FLAC] – A2ZCity.net


      Download Zip 🗹 https://tinurli.com/2uwjUU



      - - aaccfb2cb3
      -
      -
      -

      diff --git a/spaces/cleanmaster/akagi-sovits3/vdecoder/hifigan/nvSTFT.py b/spaces/cleanmaster/akagi-sovits3/vdecoder/hifigan/nvSTFT.py deleted file mode 100644 index 88597d62a505715091f9ba62d38bf0a85a31b95a..0000000000000000000000000000000000000000 --- a/spaces/cleanmaster/akagi-sovits3/vdecoder/hifigan/nvSTFT.py +++ /dev/null @@ -1,111 +0,0 @@ -import math -import os -os.environ["LRU_CACHE_CAPACITY"] = "3" -import random -import torch -import torch.utils.data -import numpy as np -import librosa -from librosa.util import normalize -from librosa.filters import mel as librosa_mel_fn -from scipy.io.wavfile import read -import soundfile as sf - -def load_wav_to_torch(full_path, target_sr=None, return_empty_on_exception=False): - sampling_rate = None - try: - data, sampling_rate = sf.read(full_path, always_2d=True)# than soundfile. - except Exception as ex: - print(f"'{full_path}' failed to load.\nException:") - print(ex) - if return_empty_on_exception: - return [], sampling_rate or target_sr or 32000 - else: - raise Exception(ex) - - if len(data.shape) > 1: - data = data[:, 0] - assert len(data) > 2# check duration of audio file is > 2 samples (because otherwise the slice operation was on the wrong dimension) - - if np.issubdtype(data.dtype, np.integer): # if audio data is type int - max_mag = -np.iinfo(data.dtype).min # maximum magnitude = min possible value of intXX - else: # if audio data is type fp32 - max_mag = max(np.amax(data), -np.amin(data)) - max_mag = (2**31)+1 if max_mag > (2**15) else ((2**15)+1 if max_mag > 1.01 else 1.0) # data should be either 16-bit INT, 32-bit INT or [-1 to 1] float32 - - data = torch.FloatTensor(data.astype(np.float32))/max_mag - - if (torch.isinf(data) | torch.isnan(data)).any() and return_empty_on_exception:# resample will crash with inf/NaN inputs. 
return_empty_on_exception will return empty arr instead of except - return [], sampling_rate or target_sr or 32000 - if target_sr is not None and sampling_rate != target_sr: - data = torch.from_numpy(librosa.core.resample(data.numpy(), orig_sr=sampling_rate, target_sr=target_sr)) - sampling_rate = target_sr - - return data, sampling_rate - -def dynamic_range_compression(x, C=1, clip_val=1e-5): - return np.log(np.clip(x, a_min=clip_val, a_max=None) * C) - -def dynamic_range_decompression(x, C=1): - return np.exp(x) / C - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - return torch.log(torch.clamp(x, min=clip_val) * C) - -def dynamic_range_decompression_torch(x, C=1): - return torch.exp(x) / C - -class STFT(): - def __init__(self, sr=22050, n_mels=80, n_fft=1024, win_size=1024, hop_length=256, fmin=20, fmax=11025, clip_val=1e-5): - self.target_sr = sr - - self.n_mels = n_mels - self.n_fft = n_fft - self.win_size = win_size - self.hop_length = hop_length - self.fmin = fmin - self.fmax = fmax - self.clip_val = clip_val - self.mel_basis = {} - self.hann_window = {} - - def get_mel(self, y, center=False): - sampling_rate = self.target_sr - n_mels = self.n_mels - n_fft = self.n_fft - win_size = self.win_size - hop_length = self.hop_length - fmin = self.fmin - fmax = self.fmax - clip_val = self.clip_val - - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - if fmax not in self.mel_basis: - mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=n_mels, fmin=fmin, fmax=fmax) - self.mel_basis[str(fmax)+'_'+str(y.device)] = torch.from_numpy(mel).float().to(y.device) - self.hann_window[str(y.device)] = torch.hann_window(self.win_size).to(y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_length)/2), int((n_fft-hop_length)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_length, win_length=win_size, window=self.hann_window[str(y.device)], - center=center, pad_mode='reflect', normalized=False, onesided=True) - # print(111,spec) - spec = torch.sqrt(spec.pow(2).sum(-1)+(1e-9)) - # print(222,spec) - spec = torch.matmul(self.mel_basis[str(fmax)+'_'+str(y.device)], spec) - # print(333,spec) - spec = dynamic_range_compression_torch(spec, clip_val=clip_val) - # print(444,spec) - return spec - - def __call__(self, audiopath): - audio, sr = load_wav_to_torch(audiopath, target_sr=self.target_sr) - spect = self.get_mel(audio.unsqueeze(0)).squeeze(0) - return spect - -stft = STFT() diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/_p_r_o_p.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/_p_r_o_p.py deleted file mode 100644 index aead9d72062e878d5e497f263a4f08eddbb048f6..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/_p_r_o_p.py +++ /dev/null @@ -1,6 +0,0 @@ -from .otBase import BaseTTXConverter - - -# https://developer.apple.com/fonts/TrueType-Reference-Manual/RM06/Chap6prop.html -class table__p_r_o_p(BaseTTXConverter): - pass diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/atrac1.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/atrac1.c deleted file mode 100644 index 1309bb95a2d3fa2becdcdc2bf17acbb0f2111aa2..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/atrac1.c +++ /dev/null @@ 
-1,404 +0,0 @@ -/* - * ATRAC1 compatible decoder - * Copyright (c) 2009 Maxim Poliakovski - * Copyright (c) 2009 Benjamin Larsson - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * ATRAC1 compatible decoder. - * This decoder handles raw ATRAC1 data and probably SDDS data. - */ - -/* Many thanks to Tim Craig for all the help! */ - -#include - -#include "libavutil/float_dsp.h" -#include "libavutil/mem_internal.h" -#include "libavutil/tx.h" - -#include "avcodec.h" -#include "codec_internal.h" -#include "decode.h" -#include "get_bits.h" -#include "sinewin.h" - -#include "atrac.h" -#include "atrac1data.h" - -#define AT1_MAX_BFU 52 ///< max number of block floating units in a sound unit -#define AT1_SU_SIZE 212 ///< number of bytes in a sound unit -#define AT1_SU_SAMPLES 512 ///< number of samples in a sound unit -#define AT1_FRAME_SIZE AT1_SU_SIZE * 2 -#define AT1_SU_MAX_BITS AT1_SU_SIZE * 8 -#define AT1_MAX_CHANNELS 2 - -#define AT1_QMF_BANDS 3 -#define IDX_LOW_BAND 0 -#define IDX_MID_BAND 1 -#define IDX_HIGH_BAND 2 - -/** - * Sound unit struct, one unit is used per channel - */ -typedef struct AT1SUCtx { - int log2_block_count[AT1_QMF_BANDS]; ///< log2 number of blocks in a band - int num_bfus; ///< number of Block Floating Units - float* spectrum[2]; - DECLARE_ALIGNED(32, float, spec1)[AT1_SU_SAMPLES]; ///< mdct buffer - DECLARE_ALIGNED(32, float, spec2)[AT1_SU_SAMPLES]; ///< mdct buffer - DECLARE_ALIGNED(32, float, fst_qmf_delay)[46]; ///< delay line for the 1st stacked QMF filter - DECLARE_ALIGNED(32, float, snd_qmf_delay)[46]; ///< delay line for the 2nd stacked QMF filter - DECLARE_ALIGNED(32, float, last_qmf_delay)[256+39]; ///< delay line for the last stacked QMF filter -} AT1SUCtx; - -/** - * The atrac1 context, holds all needed parameters for decoding - */ -typedef struct AT1Ctx { - AT1SUCtx SUs[AT1_MAX_CHANNELS]; ///< channel sound unit - DECLARE_ALIGNED(32, float, spec)[AT1_SU_SAMPLES]; ///< the mdct spectrum buffer - - DECLARE_ALIGNED(32, float, low)[256]; - DECLARE_ALIGNED(32, float, mid)[256]; - DECLARE_ALIGNED(32, float, high)[512]; - float* bands[3]; - AVTXContext *mdct_ctx[3]; - av_tx_fn mdct_fn[3]; - void (*vector_fmul_window)(float *dst, const float *src0, - const float *src1, const float *win, int len); -} AT1Ctx; - -/** size of the transform in samples in the long mode for each QMF band */ -static const uint16_t samples_per_band[3] = {128, 128, 256}; -static const uint8_t mdct_long_nbits[3] = {7, 7, 8}; - - -static void at1_imdct(AT1Ctx *q, float *spec, float *out, int nbits, - int rev_spec) -{ - AVTXContext *mdct_context = q->mdct_ctx[nbits - 5 - (nbits > 6)]; - av_tx_fn mdct_fn = q->mdct_fn[nbits - 5 - (nbits > 6)]; - int transf_size = 1 << nbits; - - if (rev_spec) { - int i; - for (i = 0; i < transf_size / 2; i++) - FFSWAP(float, 
spec[i], spec[transf_size - 1 - i]); - } - mdct_fn(mdct_context, out, spec, sizeof(float)); -} - - -static int at1_imdct_block(AT1SUCtx* su, AT1Ctx *q) -{ - int band_num, band_samples, log2_block_count, nbits, num_blocks, block_size; - unsigned int start_pos, ref_pos = 0, pos = 0; - - for (band_num = 0; band_num < AT1_QMF_BANDS; band_num++) { - float *prev_buf; - int j; - - band_samples = samples_per_band[band_num]; - log2_block_count = su->log2_block_count[band_num]; - - /* number of mdct blocks in the current QMF band: 1 - for long mode */ - /* 4 for short mode(low/middle bands) and 8 for short mode(high band)*/ - num_blocks = 1 << log2_block_count; - - if (num_blocks == 1) { - /* mdct block size in samples: 128 (long mode, low & mid bands), */ - /* 256 (long mode, high band) and 32 (short mode, all bands) */ - block_size = band_samples >> log2_block_count; - - /* calc transform size in bits according to the block_size_mode */ - nbits = mdct_long_nbits[band_num] - log2_block_count; - - if (nbits != 5 && nbits != 7 && nbits != 8) - return AVERROR_INVALIDDATA; - } else { - block_size = 32; - nbits = 5; - } - - start_pos = 0; - prev_buf = &su->spectrum[1][ref_pos + band_samples - 16]; - for (j=0; j < num_blocks; j++) { - at1_imdct(q, &q->spec[pos], &su->spectrum[0][ref_pos + start_pos], nbits, band_num); - - /* overlap and window */ - q->vector_fmul_window(&q->bands[band_num][start_pos], prev_buf, - &su->spectrum[0][ref_pos + start_pos], ff_sine_32, 16); - - prev_buf = &su->spectrum[0][ref_pos+start_pos + 16]; - start_pos += block_size; - pos += block_size; - } - - if (num_blocks == 1) - memcpy(q->bands[band_num] + 32, &su->spectrum[0][ref_pos + 16], 240 * sizeof(float)); - - ref_pos += band_samples; - } - - /* Swap buffers so the mdct overlap works */ - FFSWAP(float*, su->spectrum[0], su->spectrum[1]); - - return 0; -} - -/** - * Parse the block size mode byte - */ - -static int at1_parse_bsm(GetBitContext* gb, int log2_block_cnt[AT1_QMF_BANDS]) -{ - int log2_block_count_tmp, i; - - for (i = 0; i < 2; i++) { - /* low and mid band */ - log2_block_count_tmp = get_bits(gb, 2); - if (log2_block_count_tmp & 1) - return AVERROR_INVALIDDATA; - log2_block_cnt[i] = 2 - log2_block_count_tmp; - } - - /* high band */ - log2_block_count_tmp = get_bits(gb, 2); - if (log2_block_count_tmp != 0 && log2_block_count_tmp != 3) - return AVERROR_INVALIDDATA; - log2_block_cnt[IDX_HIGH_BAND] = 3 - log2_block_count_tmp; - - skip_bits(gb, 2); - return 0; -} - - -static int at1_unpack_dequant(GetBitContext* gb, AT1SUCtx* su, - float spec[AT1_SU_SAMPLES]) -{ - int bits_used, band_num, bfu_num, i; - uint8_t idwls[AT1_MAX_BFU]; ///< the word length indexes for each BFU - uint8_t idsfs[AT1_MAX_BFU]; ///< the scalefactor indexes for each BFU - - /* parse the info byte (2nd byte) telling how much BFUs were coded */ - su->num_bfus = bfu_amount_tab1[get_bits(gb, 3)]; - - /* calc number of consumed bits: - num_BFUs * (idwl(4bits) + idsf(6bits)) + log2_block_count(8bits) + info_byte(8bits) - + info_byte_copy(8bits) + log2_block_count_copy(8bits) */ - bits_used = su->num_bfus * 10 + 32 + - bfu_amount_tab2[get_bits(gb, 2)] + - (bfu_amount_tab3[get_bits(gb, 3)] << 1); - - /* get word length index (idwl) for each BFU */ - for (i = 0; i < su->num_bfus; i++) - idwls[i] = get_bits(gb, 4); - - /* get scalefactor index (idsf) for each BFU */ - for (i = 0; i < su->num_bfus; i++) - idsfs[i] = get_bits(gb, 6); - - /* zero idwl/idsf for empty BFUs */ - for (i = su->num_bfus; i < AT1_MAX_BFU; i++) - idwls[i] = idsfs[i] = 0; - - /* read in 
the spectral data and reconstruct MDCT spectrum of this channel */ - for (band_num = 0; band_num < AT1_QMF_BANDS; band_num++) { - for (bfu_num = bfu_bands_t[band_num]; bfu_num < bfu_bands_t[band_num+1]; bfu_num++) { - int pos; - - int num_specs = specs_per_bfu[bfu_num]; - int word_len = !!idwls[bfu_num] + idwls[bfu_num]; - float scale_factor = ff_atrac_sf_table[idsfs[bfu_num]]; - bits_used += word_len * num_specs; /* add number of bits consumed by current BFU */ - - /* check for bitstream overflow */ - if (bits_used > AT1_SU_MAX_BITS) - return AVERROR_INVALIDDATA; - - /* get the position of the 1st spec according to the block size mode */ - pos = su->log2_block_count[band_num] ? bfu_start_short[bfu_num] : bfu_start_long[bfu_num]; - - if (word_len) { - float max_quant = 1.0 / (float)((1 << (word_len - 1)) - 1); - - for (i = 0; i < num_specs; i++) { - /* read in a quantized spec and convert it to - * signed int and then inverse quantization - */ - spec[pos+i] = get_sbits(gb, word_len) * scale_factor * max_quant; - } - } else { /* word_len = 0 -> empty BFU, zero all specs in the empty BFU */ - memset(&spec[pos], 0, num_specs * sizeof(float)); - } - } - } - - return 0; -} - - -static void at1_subband_synthesis(AT1Ctx *q, AT1SUCtx* su, float *pOut) -{ - float temp[256]; - float iqmf_temp[512 + 46]; - - /* combine low and middle bands */ - ff_atrac_iqmf(q->bands[0], q->bands[1], 128, temp, su->fst_qmf_delay, iqmf_temp); - - /* delay the signal of the high band by 39 samples */ - memcpy( su->last_qmf_delay, &su->last_qmf_delay[256], sizeof(float) * 39); - memcpy(&su->last_qmf_delay[39], q->bands[2], sizeof(float) * 256); - - /* combine (low + middle) and high bands */ - ff_atrac_iqmf(temp, su->last_qmf_delay, 256, pOut, su->snd_qmf_delay, iqmf_temp); -} - - -static int atrac1_decode_frame(AVCodecContext *avctx, AVFrame *frame, - int *got_frame_ptr, AVPacket *avpkt) -{ - const uint8_t *buf = avpkt->data; - int buf_size = avpkt->size; - AT1Ctx *q = avctx->priv_data; - int channels = avctx->ch_layout.nb_channels; - int ch, ret; - GetBitContext gb; - - - if (buf_size < 212 * channels) { - av_log(avctx, AV_LOG_ERROR, "Not enough data to decode!\n"); - return AVERROR_INVALIDDATA; - } - - /* get output buffer */ - frame->nb_samples = AT1_SU_SAMPLES; - if ((ret = ff_get_buffer(avctx, frame, 0)) < 0) - return ret; - - for (ch = 0; ch < channels; ch++) { - AT1SUCtx* su = &q->SUs[ch]; - - init_get_bits(&gb, &buf[212 * ch], 212 * 8); - - /* parse block_size_mode, 1st byte */ - ret = at1_parse_bsm(&gb, su->log2_block_count); - if (ret < 0) - return ret; - - ret = at1_unpack_dequant(&gb, su, q->spec); - if (ret < 0) - return ret; - - ret = at1_imdct_block(su, q); - if (ret < 0) - return ret; - at1_subband_synthesis(q, su, (float *)frame->extended_data[ch]); - } - - *got_frame_ptr = 1; - - return avctx->block_align; -} - - -static av_cold int atrac1_decode_end(AVCodecContext * avctx) -{ - AT1Ctx *q = avctx->priv_data; - - av_tx_uninit(&q->mdct_ctx[0]); - av_tx_uninit(&q->mdct_ctx[1]); - av_tx_uninit(&q->mdct_ctx[2]); - - return 0; -} - - -static av_cold int atrac1_decode_init(AVCodecContext *avctx) -{ - AT1Ctx *q = avctx->priv_data; - AVFloatDSPContext *fdsp; - int channels = avctx->ch_layout.nb_channels; - float scale = -1.0 / (1 << 15); - int ret; - - avctx->sample_fmt = AV_SAMPLE_FMT_FLTP; - - if (channels < 1 || channels > AT1_MAX_CHANNELS) { - av_log(avctx, AV_LOG_ERROR, "Unsupported number of channels: %d\n", - channels); - return AVERROR(EINVAL); - } - - if (avctx->block_align <= 0) { - av_log(avctx, 
AV_LOG_ERROR, "Unsupported block align."); - return AVERROR_PATCHWELCOME; - } - - /* Init the mdct transforms */ - if ((ret = av_tx_init(&q->mdct_ctx[0], &q->mdct_fn[0], AV_TX_FLOAT_MDCT, - 1, 32, &scale, 0) < 0)) - return ret; - if ((ret = av_tx_init(&q->mdct_ctx[1], &q->mdct_fn[1], AV_TX_FLOAT_MDCT, - 1, 128, &scale, 0) < 0)) - return ret; - if ((ret = av_tx_init(&q->mdct_ctx[2], &q->mdct_fn[2], AV_TX_FLOAT_MDCT, - 1, 256, &scale, 0) < 0)) - return ret; - - ff_init_ff_sine_windows(5); - - ff_atrac_generate_tables(); - - fdsp = avpriv_float_dsp_alloc(avctx->flags & AV_CODEC_FLAG_BITEXACT); - if (!fdsp) - return AVERROR(ENOMEM); - q->vector_fmul_window = fdsp->vector_fmul_window; - av_free(fdsp); - - q->bands[0] = q->low; - q->bands[1] = q->mid; - q->bands[2] = q->high; - - /* Prepare the mdct overlap buffers */ - q->SUs[0].spectrum[0] = q->SUs[0].spec1; - q->SUs[0].spectrum[1] = q->SUs[0].spec2; - q->SUs[1].spectrum[0] = q->SUs[1].spec1; - q->SUs[1].spectrum[1] = q->SUs[1].spec2; - - return 0; -} - - -const FFCodec ff_atrac1_decoder = { - .p.name = "atrac1", - CODEC_LONG_NAME("ATRAC1 (Adaptive TRansform Acoustic Coding)"), - .p.type = AVMEDIA_TYPE_AUDIO, - .p.id = AV_CODEC_ID_ATRAC1, - .priv_data_size = sizeof(AT1Ctx), - .init = atrac1_decode_init, - .close = atrac1_decode_end, - FF_CODEC_DECODE_CB(atrac1_decode_frame), - .p.capabilities = AV_CODEC_CAP_DR1, - .p.sample_fmts = (const enum AVSampleFormat[]) { AV_SAMPLE_FMT_FLTP, - AV_SAMPLE_FMT_NONE }, - .caps_internal = FF_CODEC_CAP_INIT_CLEANUP, -}; diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/g723_1.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/g723_1.h deleted file mode 100644 index 521f220b2af116a5facf64d013f5922ea1403296..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/g723_1.h +++ /dev/null @@ -1,267 +0,0 @@ -/* - * G.723.1 common header and data tables - * Copyright (c) 2006 Benjamin Larsson - * Copyright (c) 2010 Mohamed Naufal Basheer - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * G.723.1 types, functions and data tables - */ - -#ifndef AVCODEC_G723_1_H -#define AVCODEC_G723_1_H - -#include - -#include "libavutil/log.h" - -#define SUBFRAMES 4 -#define SUBFRAME_LEN 60 -#define FRAME_LEN (SUBFRAME_LEN << 2) -#define HALF_FRAME_LEN (FRAME_LEN / 2) -#define LPC_FRAME (HALF_FRAME_LEN + SUBFRAME_LEN) -#define LPC_ORDER 10 -#define LSP_BANDS 3 -#define LSP_CB_SIZE 256 -#define PITCH_MIN 18 -#define PITCH_MAX (PITCH_MIN + 127) -#define PITCH_ORDER 5 -#define GRID_SIZE 2 -#define PULSE_MAX 6 -#define GAIN_LEVELS 24 -#define COS_TBL_SIZE 512 - -/** - * Bitexact implementation of 2ab scaled by 1/2^16. 
- * - * @param a 32 bit multiplicand - * @param b 16 bit multiplier - */ -#define MULL2(a, b) \ - ((((a) >> 16) * (b) * 2) + (((a) & 0xffff) * (b) >> 15)) - -/** - * G723.1 frame types - */ -enum FrameType { - ACTIVE_FRAME, ///< Active speech - SID_FRAME, ///< Silence Insertion Descriptor frame - UNTRANSMITTED_FRAME -}; - -/** - * G723.1 rate values - */ -enum Rate { - RATE_6300, - RATE_5300 -}; - -/** - * G723.1 unpacked data subframe - */ -typedef struct G723_1_Subframe { - int ad_cb_lag; ///< adaptive codebook lag - int ad_cb_gain; - int dirac_train; - int pulse_sign; - int grid_index; - int amp_index; - int pulse_pos; -} G723_1_Subframe; - -/** - * Pitch postfilter parameters - */ -typedef struct PPFParam { - int index; ///< postfilter backward/forward lag - int16_t opt_gain; ///< optimal gain - int16_t sc_gain; ///< scaling gain -} PPFParam; - -/** - * Harmonic filter parameters - */ -typedef struct HFParam { - int index; - int gain; -} HFParam; - -/** - * Optimized fixed codebook excitation parameters - */ -typedef struct FCBParam { - int min_err; - int amp_index; - int grid_index; - int dirac_train; - int pulse_pos[PULSE_MAX]; - int pulse_sign[PULSE_MAX]; -} FCBParam; - -typedef struct G723_1_ChannelContext { - G723_1_Subframe subframe[4]; - enum FrameType cur_frame_type; - enum FrameType past_frame_type; - enum Rate cur_rate; - uint8_t lsp_index[LSP_BANDS]; - int pitch_lag[2]; - int erased_frames; - - int16_t prev_lsp[LPC_ORDER]; - int16_t sid_lsp[LPC_ORDER]; - int16_t prev_excitation[PITCH_MAX]; - int16_t excitation[PITCH_MAX + FRAME_LEN + 4]; - int16_t synth_mem[LPC_ORDER]; - int16_t fir_mem[LPC_ORDER]; - int iir_mem[LPC_ORDER]; - - int random_seed; - int cng_random_seed; - int interp_index; - int interp_gain; - int sid_gain; - int cur_gain; - int reflection_coef; - int pf_gain; ///< formant postfilter - ///< gain scaling unit memory - int16_t audio[FRAME_LEN + LPC_ORDER + PITCH_MAX + 4]; - - /* encoder */ - int16_t prev_data[HALF_FRAME_LEN]; - int16_t prev_weight_sig[PITCH_MAX]; - - int16_t hpf_fir_mem; ///< highpass filter fir - int hpf_iir_mem; ///< and iir memories - int16_t perf_fir_mem[LPC_ORDER]; ///< perceptual filter fir - int16_t perf_iir_mem[LPC_ORDER]; ///< and iir memories - - int16_t harmonic_mem[PITCH_MAX]; -} G723_1_ChannelContext; - -typedef struct G723_1_Context { - AVClass *class; - int postfilter; - - G723_1_ChannelContext ch[2]; -} G723_1_Context; - - -/** - * Scale vector contents based on the largest of their absolutes. - */ -int ff_g723_1_scale_vector(int16_t *dst, const int16_t *vector, int length); - -/** - * Calculate the number of left-shifts required for normalizing the input. - * - * @param num input number - * @param width width of the input, 16 bits(0) / 32 bits(1) - */ -int ff_g723_1_normalize_bits(int num, int width); - -int ff_g723_1_dot_product(const int16_t *a, const int16_t *b, int length); - -/** - * Get delayed contribution from the previous excitation vector. - */ -void ff_g723_1_get_residual(int16_t *residual, int16_t *prev_excitation, - int lag); - -/** - * Generate a train of dirac functions with period as pitch lag. - */ -void ff_g723_1_gen_dirac_train(int16_t *buf, int pitch_lag); - - -/** - * Generate adaptive codebook excitation. - */ -void ff_g723_1_gen_acb_excitation(int16_t *vector, int16_t *prev_excitation, - int pitch_lag, G723_1_Subframe *subfrm, - enum Rate cur_rate); -/** - * Quantize LSP frequencies by interpolation and convert them to - * the corresponding LPC coefficients. 
- * - * @param lpc buffer for LPC coefficients - * @param cur_lsp the current LSP vector - * @param prev_lsp the previous LSP vector - */ -void ff_g723_1_lsp_interpolate(int16_t *lpc, int16_t *cur_lsp, - int16_t *prev_lsp); - -/** - * Perform inverse quantization of LSP frequencies. - * - * @param cur_lsp the current LSP vector - * @param prev_lsp the previous LSP vector - * @param lsp_index VQ indices - * @param bad_frame bad frame flag - */ -void ff_g723_1_inverse_quant(int16_t *cur_lsp, int16_t *prev_lsp, - uint8_t *lsp_index, int bad_frame); - -static const uint8_t frame_size[4] = { 24, 20, 4, 1 }; - -/** - * LSP DC component - */ -static const int16_t dc_lsp[LPC_ORDER] = { - 0x0c3b, - 0x1271, - 0x1e0a, - 0x2a36, - 0x3630, - 0x406f, - 0x4d28, - 0x56f4, - 0x638c, - 0x6c46 -}; - -/* Cosine table scaled by 2^14 */ -extern const int16_t ff_g723_1_cos_tab[COS_TBL_SIZE + 1]; -#define G723_1_COS_TAB_FIRST_ELEMENT 16384 - -/** - * LSP VQ tables - */ -extern const int16_t ff_g723_1_lsp_band0[LSP_CB_SIZE][3]; -extern const int16_t ff_g723_1_lsp_band1[LSP_CB_SIZE][3]; -extern const int16_t ff_g723_1_lsp_band2[LSP_CB_SIZE][4]; - -/** - * Used for the coding/decoding of the pulses positions - * for the MP-MLQ codebook - */ -extern const int32_t ff_g723_1_combinatorial_table[PULSE_MAX][SUBFRAME_LEN/GRID_SIZE]; - -/** - * Number of non-zero pulses in the MP-MLQ excitation - */ -static const int8_t pulses[4] = {6, 5, 6, 5}; - -extern const int16_t ff_g723_1_fixed_cb_gain[GAIN_LEVELS]; - -extern const int16_t ff_g723_1_adaptive_cb_gain85 [ 85 * 20]; -extern const int16_t ff_g723_1_adaptive_cb_gain170[170 * 20]; - -#endif /* AVCODEC_G723_1_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mdct_template.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mdct_template.c deleted file mode 100644 index a854ad27008cf3de8ec6fc553f063a8c3d33a8f6..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mdct_template.c +++ /dev/null @@ -1,209 +0,0 @@ -/* - * MDCT/IMDCT transforms - * Copyright (c) 2002 Fabrice Bellard - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include -#include -#include "libavutil/common.h" -#include "libavutil/libm.h" -#include "libavutil/mathematics.h" -#include "fft.h" -#include "fft-internal.h" - -/** - * @file - * MDCT/IMDCT transforms. - */ - -#if FFT_FLOAT -# define RSCALE(x, y) ((x) + (y)) -#else -# define RSCALE(x, y) ((int)((x) + (unsigned)(y) + 32) >> 6) -#endif - -/** - * init MDCT or IMDCT computation. 
- */ -av_cold int ff_mdct_init(FFTContext *s, int nbits, int inverse, double scale) -{ - int n, n4, i; - double alpha, theta; - int tstep; - - memset(s, 0, sizeof(*s)); - n = 1 << nbits; - s->mdct_bits = nbits; - s->mdct_size = n; - n4 = n >> 2; - s->mdct_permutation = FF_MDCT_PERM_NONE; - - if (ff_fft_init(s, s->mdct_bits - 2, inverse) < 0) - goto fail; - - s->tcos = av_malloc_array(n/2, sizeof(FFTSample)); - if (!s->tcos) - goto fail; - - switch (s->mdct_permutation) { - case FF_MDCT_PERM_NONE: - s->tsin = s->tcos + n4; - tstep = 1; - break; - case FF_MDCT_PERM_INTERLEAVE: - s->tsin = s->tcos + 1; - tstep = 2; - break; - default: - goto fail; - } - - theta = 1.0 / 8.0 + (scale < 0 ? n4 : 0); - scale = sqrt(fabs(scale)); - for(i=0;itcos[i*tstep] = lrint(-cos(alpha) * 2147483648.0); - s->tsin[i*tstep] = lrint(-sin(alpha) * 2147483648.0); -#else - s->tcos[i*tstep] = FIX15(-cos(alpha) * scale); - s->tsin[i*tstep] = FIX15(-sin(alpha) * scale); -#endif - } - return 0; - fail: - ff_mdct_end(s); - return -1; -} - -/** - * Compute the middle half of the inverse MDCT of size N = 2^nbits, - * thus excluding the parts that can be derived by symmetry - * @param output N/2 samples - * @param input N/2 samples - */ -void ff_imdct_half_c(FFTContext *s, FFTSample *output, const FFTSample *input) -{ - int k, n8, n4, n2, n, j; - const uint16_t *revtab = s->revtab; - const FFTSample *tcos = s->tcos; - const FFTSample *tsin = s->tsin; - const FFTSample *in1, *in2; - FFTComplex *z = (FFTComplex *)output; - - n = 1 << s->mdct_bits; - n2 = n >> 1; - n4 = n >> 2; - n8 = n >> 3; - - /* pre rotation */ - in1 = input; - in2 = input + n2 - 1; - for(k = 0; k < n4; k++) { - j=revtab[k]; - CMUL(z[j].re, z[j].im, *in2, *in1, tcos[k], tsin[k]); - in1 += 2; - in2 -= 2; - } - s->fft_calc(s, z); - - /* post rotation + reordering */ - for(k = 0; k < n8; k++) { - FFTSample r0, i0, r1, i1; - CMUL(r0, i1, z[n8-k-1].im, z[n8-k-1].re, tsin[n8-k-1], tcos[n8-k-1]); - CMUL(r1, i0, z[n8+k ].im, z[n8+k ].re, tsin[n8+k ], tcos[n8+k ]); - z[n8-k-1].re = r0; - z[n8-k-1].im = i0; - z[n8+k ].re = r1; - z[n8+k ].im = i1; - } -} - -/** - * Compute inverse MDCT of size N = 2^nbits - * @param output N samples - * @param input N/2 samples - */ -void ff_imdct_calc_c(FFTContext *s, FFTSample *output, const FFTSample *input) -{ - int k; - int n = 1 << s->mdct_bits; - int n2 = n >> 1; - int n4 = n >> 2; - - ff_imdct_half_c(s, output+n4, input); - - for(k = 0; k < n4; k++) { - output[k] = -output[n2-k-1]; - output[n-k-1] = output[n2+k]; - } -} - -/** - * Compute MDCT of size N = 2^nbits - * @param input N samples - * @param out N/2 samples - */ -void ff_mdct_calc_c(FFTContext *s, FFTSample *out, const FFTSample *input) -{ - int i, j, n, n8, n4, n2, n3; - FFTDouble re, im; - const uint16_t *revtab = s->revtab; - const FFTSample *tcos = s->tcos; - const FFTSample *tsin = s->tsin; - FFTComplex *x = (FFTComplex *)out; - - n = 1 << s->mdct_bits; - n2 = n >> 1; - n4 = n >> 2; - n8 = n >> 3; - n3 = 3 * n4; - - /* pre rotation */ - for(i=0;ifft_calc(s, x); - - /* post rotation */ - for(i=0;itcos); - ff_fft_end(s); -} diff --git a/spaces/congsaPfin/Manga-OCR/logs/Beauty Editor How to Use BeautyPlus to Create Stunning Photos and Videos.md b/spaces/congsaPfin/Manga-OCR/logs/Beauty Editor How to Use BeautyPlus to Create Stunning Photos and Videos.md deleted file mode 100644 index 713aebbbbe829e5e3eaa87738f459e751da24d37..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Beauty Editor How to Use BeautyPlus to Create Stunning Photos 
and Videos.md +++ /dev/null @@ -1,154 +0,0 @@ - -

      What is a beauty editor and how to become one?

      -

      If you love writing about makeup, skincare, haircare, and all things beauty, you might be interested in becoming a beauty editor. A beauty editor is someone who creates, edits, and curates content related to the beauty industry for magazines, websites, blogs, social media, or other platforms. In this article, we will explain what a beauty editor does, what skills and qualifications you need to be one, and how to get started in this exciting and creative career. We will also share some tips and tricks for successful beauty editing that will help you stand out from the crowd.

      -

      beauty editor


      Download File ---> https://urlca.com/2uO83o



      -

      Introduction

      -

      Beauty editors are responsible for producing engaging and informative content that covers various topics in the beauty niche, such as product reviews, tutorials, trends, news, interviews, features, and more. They may work as staff writers or editors for established publications or websites, or as freelancers or bloggers who create their own content and platforms. Depending on their role and experience level, they may also be involved in planning, assigning, editing, proofreading, and publishing content, as well as managing budgets, deadlines, teams, and collaborations.

      -

      What does a beauty editor do?

      -

      A typical day in the life of a beauty editor may include some or all of the following tasks:

      -
        -
      • Researching and pitching ideas for new content
      • -
      • Writing articles, blog posts, newsletters, captions, headlines, etc.
      • -
      • Editing and proofreading content written by themselves or others
      • -
      • Fact-checking and sourcing information from reliable sources
      • -
      • Selecting and editing images, videos, graphics, or other visual elements
      • -
      • Uploading and formatting content using CMS or other tools
      • -
      • Optimizing content for SEO using keywords, meta tags, links, etc.
      • -
      • Monitoring and analyzing content performance using analytics tools
      • -
      • Staying updated on the latest trends and products in the beauty industry
      • -
      • Attending events, launches, shows, or meetings related to the beauty industry
      • -
      • Networking with other beauty editors, influencers, brands, PRs, etc.
      • -
      • Managing social media accounts or other online platforms
      • -
      -

      What skills and qualifications do you need to be a beauty editor?

      -

      To become a successful beauty editor, you need to have a combination of hard skills and soft skills that will help you create high-quality content that attracts and retains your audience. Some of the most important skills and qualifications are:

      -

      How to become a beauty editor in 2023
      -Best beauty editor tips and tricks
      -Beauty editor salary and job description
      -What does a beauty editor do on a daily basis
      -Beauty editor essentials: products and tools
      -Beauty editor reviews: latest trends and products
      -Beauty editor interview questions and answers
      -Beauty editor career path and opportunities
      -Beauty editor portfolio: how to create one
      -Beauty editor courses and certifications
      -Beauty editor vs beauty writer: what's the difference
      -Beauty editor challenges and solutions
      -How to pitch to a beauty editor as a freelancer
      -Beauty editor resume and cover letter examples
      -Beauty editor skills and qualifications
      -Beauty editor goals and objectives
      -Beauty editor mistakes and how to avoid them
      -Beauty editor testimonials and feedback
      -Beauty editor influencers and role models
      -Beauty editor networking and events
      -How to collaborate with a beauty editor as a brand
      -Beauty editor awards and recognition
      -Beauty editor ethics and standards
      -Beauty editor trends and predictions for 2024
      -Beauty editor resources and tools
      -How to find a beauty editor mentor or coach
      -Beauty editor podcasts and blogs to follow
      -Beauty editor books and magazines to read
      -Beauty editor YouTube channels and videos to watch
      -Beauty editor Instagram accounts and stories to follow
      -How to apply for a beauty editor internship or job
      -Beauty editor work from home tips and advice
      -Beauty editor travel and lifestyle tips
      -Beauty editor outfits and style inspiration
      -Beauty editor wellness and self-care tips
      -How to deal with beauty editor burnout and stress
      -Beauty editor hobbies and interests outside work
      -Beauty editor personal stories and experiences
      -How to start a beauty editor blog or website
      -How to monetize a beauty editor blog or website

      -
        -
      • A passion for beauty and a keen eye for detail
      • -
      • Excellent writing and editing skills in your chosen language
      • -
      • A good understanding of grammar, spelling, punctuation, and style
      • -
      • A creative flair and a unique voice as a writer
      • -
      • A strong knowledge of SEO best practices and tools
      • -
      • A familiarity with CMS or other content creation tools
      • -
      • A basic knowledge of HTML or other web languages
      • -
      • An ability to work with images, videos, graphics, or other visual elements
      • -
      • An ability to research and fact-check information from credible sources
      • -
      • An ability to work under pressure and meet deadlines
      • -
• An ability to work independently or as part of a team
      • -
• Good communication and interpersonal skills
      • -
      • A willingness to learn and improve your skills
      • -
      -

      In terms of qualifications, there is no specific degree or certification required to be a beauty editor, but having a background in journalism, communications, English, or related fields can be helpful. You may also want to take courses or workshops on beauty writing, editing, SEO, or other relevant topics to enhance your knowledge and skills. Additionally, having a portfolio of your previous work or a personal blog or website can showcase your talent and experience as a beauty editor.

      -

      How to get started as a beauty editor?

      -

      If you are interested in becoming a beauty editor, here are some steps you can take to get started:

      -
        -
      1. Find your niche and target audience. Decide what kind of beauty content you want to create and who you want to reach. For example, you may want to focus on natural or organic beauty, luxury or budget beauty, skincare or makeup, etc. You may also want to consider the age, gender, location, interests, and preferences of your potential readers.
      2. -
      3. Create your own platform and content. Start a blog, website, social media account, or other online platform where you can publish your own beauty content. This will help you build your portfolio, showcase your skills, and attract followers. Make sure your content is original, engaging, informative, and SEO-friendly.
      4. -
      5. Network and collaborate with others. Connect with other beauty editors, influencers, brands, PRs, etc. who share your niche and audience. You can follow them on social media, comment on their posts, join online communities or forums, attend events or webinars, etc. You can also pitch them ideas for guest posts, collaborations, partnerships, etc.
      6. -
      7. Apply for jobs or freelance gigs. Look for opportunities to work as a beauty editor for established publications or websites that match your niche and style. You can use online job boards, platforms, or networks that specialize in beauty writing or editing. You can also reach out to editors or managers directly with your resume and portfolio.
      8. -
      -

      Tips and tricks for successful beauty editing

      -

      Being a beauty editor can be fun and rewarding, but it also requires hard work and dedication. Here are some tips and tricks that can help you succeed as a beauty editor:

      -

      Stay updated on the latest trends and products in the beauty industry

      -

      As a beauty editor, you need to be on top of the latest news and developments in the beauty industry. You need to know what's new, what's hot, what's not, and what's coming next. You need to be aware of the current and emerging trends and products in the market. You need to be able to provide your readers with fresh and relevant content that reflects their needs and interests.

      -

      One way to stay updated is to follow reputable sources of information in the beauty industry, such as magazines, websites, blogs, podcasts, newsletters, etc. You can also subscribe to beauty boxes, samples, or newsletters that send you new or curated products to try. Another way to stay updated is to test and review new or popular products yourself and share your honest opinions and feedback with your readers.

      -

      Develop your own voice and style as a beauty editor

      -

      As a beauty editor, you need to have a distinctive voice and style that sets you apart from other writers and editors in the same niche. You need to be able to express your personality, opinions, and expertise in a way that resonates with your audience. You need to be able to write in a conversational, informal, and engaging tone that makes your readers feel like they are talking to a friend or a trusted advisor.

      -

      One way to develop your voice and style is to read and analyze the work of other beauty editors that you admire or aspire to be like. You can learn from their techniques, strategies, and choices of words, sentences, and paragraphs. You can also practice writing different types of content, such as reviews, tutorials, features, etc. and experiment with different tones, formats, and angles. Another way to develop your voice and style is to get feedback from your readers, peers, or mentors on your work and improve it based on their suggestions.

      -

      Network with other beauty editors, influencers, and brands

      -

      As a beauty editor, you need to have a strong network of contacts and connections in the beauty industry that can help you grow your career and reputation. You need to be able to communicate and collaborate with other beauty editors, influencers, brands, PRs, etc. who can provide you with information, insights, opportunities, or support. You need to be able to build and maintain professional relationships that are mutually beneficial and respectful.

      -

      One way to network is to join online or offline communities or groups that are related to the beauty industry or your niche. You can participate in discussions, share your work, ask questions, offer advice, etc. You can also attend events, webinars, workshops, or conferences that are relevant to your field or interest. Another way to network is to reach out to people who you admire or want to work with via email, social media, or other platforms. You can introduce yourself, compliment their work, express your interest, propose an idea, etc.

      -

      Use SEO best practices to optimize your content for search engines

      -

      As a beauty editor, you need to have a good understanding of SEO (search engine optimization) best practices and tools that can help you increase the visibility and ranking of your content on search engines like Google or Bing. You need to be able to create content that is not only appealing and useful for your readers but also relevant and optimized for the keywords and queries that they use to search for beauty-related information online.

      -

      One way to use SEO best practices is to conduct keyword research using tools like Google Keyword Planner, Moz, SEMrush, etc. to find out what keywords or phrases your target audience is using to search for beauty-related content online. You can then use these keywords or phrases in your content, such as in your title, headings, subheadings, introduction, conclusion, body, meta tags, etc. Another way to use SEO best practices is to create high-quality content that is original, engaging, informative, and useful for your readers. You can also use images, videos, graphics, or other visual elements to enhance your content and make it more appealing. You can also link to other relevant and authoritative sources or websites that can provide more information or value to your readers.

      -

      Conclusion

      -

      Being a beauty editor can be a rewarding and fulfilling career for anyone who loves writing and beauty. However, it also requires a lot of skills, qualifications, and hard work to succeed in this competitive and dynamic field. In this article, we have explained what a beauty editor does, what skills and qualifications you need to be one, and how to get started in this career. We have also shared some tips and tricks for successful beauty editing that can help you create high-quality content that attracts and retains your audience.

      -

      If you are interested in becoming a beauty editor, we hope this article has given you some useful information and guidance. We encourage you to follow your passion and pursue your dream of becoming a beauty editor. Remember, the beauty industry is always evolving and growing, so there are always new opportunities and challenges waiting for you. Good luck!

      -

      FAQs

      -

      How much does a beauty editor earn?

      -

      The salary of a beauty editor may vary depending on various factors, such as the type of publication or website they work for, their level of experience and expertise, their location, etc. According to ZipRecruiter, the average annual salary of a beauty editor in the US is $51,509 as of June 2021. However, this may range from $19,000 to $115,000 depending on the factors mentioned above.

      -

      What are some of the best beauty magazines and websites to work for?

      -

      There are many beauty magazines and websites that offer opportunities for beauty editors to work for. Some of the most popular and reputable ones are:

      -
        -
      • Allure: A leading beauty magazine and website that covers everything from makeup and skincare to haircare and wellness.
      • -
      • Glamour: A fashion and beauty magazine and website that features trends, tips, reviews, celebrity news, and more.
      • -
      • InStyle: A style and beauty magazine and website that showcases the latest products, trends, tips, and advice from experts and celebrities.
      • -
      • Elle: A fashion and beauty magazine and website that covers the latest news, trends, products, and tips from the world of beauty.
      • -
      • Byrdie: A beauty website that offers in-depth reviews, tutorials, features, and guides on everything beauty-related.
      • -
      -

      How can I improve my writing skills as a beauty editor?

      -

      Writing skills are essential for any beauty editor, as they determine the quality and effectiveness of your content. To improve your writing skills as a beauty editor, you can:

      -
        -
      • Read a lot of beauty content from different sources and genres and analyze how they are written, structured, and formatted.
      • -
      • Practice writing different types of beauty content, such as reviews, tutorials, features, etc. and get feedback from your readers, peers, or mentors.
      • -
      • Use online tools or apps that can help you check and improve your grammar, spelling, punctuation, style, etc.
      • -
      • Take online courses or workshops on beauty writing, editing, SEO, or other relevant topics that can enhance your knowledge and skills.
      • -
      • Keep a journal or a notebook where you can jot down your ideas, thoughts, opinions, or experiences related to beauty.
      • -
      -

      What are some of the challenges and opportunities of being a beauty editor?

      -

      Being a beauty editor can be both challenging and rewarding, depending on how you look at it. Some of the common challenges of being a beauty editor are:

      -
        -
      • Keeping up with the fast-paced and ever-changing nature of the beauty industry
      • -
      • Dealing with tight deadlines and high expectations
      • -
      • Handling criticism or negative feedback from your readers or clients
      • -
      • Balancing your creativity and originality with the demands and preferences of your audience or market
      • -
      • Maintaining your credibility and integrity as a beauty editor
      • -
      -

      Some of the common opportunities of being a beauty editor are:

      -
        -
      • Exploring your passion and interest in beauty and sharing it with others
      • -
      • Expressing your personality and voice as a writer and editor
      • -
      • Leveraging your skills and experience to advance your career and reputation
      • -
      • Learning new things and improving your knowledge and skills in the beauty industry
      • -
      • Connecting and collaborating with other beauty editors, influencers, brands, PRs, etc.
      • -
      -

      How can I find beauty editing jobs or freelance gigs?

      -

      If you are looking for beauty editing jobs or freelance gigs, you can use various online platforms or networks that specialize in connecting writers and editors with clients or employers in the beauty industry. Some of the popular ones are:

      -
        -
      • Beautylish: A platform that connects beauty editors with brands and publications that need content creation services.
      • -
      • Cosmopolitan: A magazine that offers opportunities for freelance beauty writers and editors to contribute to their online or print editions.
      • -
      • Glossy: A website that covers the business of beauty and offers opportunities for freelance writers and editors to pitch stories or features.
      • -
      • The Write Life: A website that helps writers make a living from their craft and offers a list of websites that pay writers for beauty-related content.
      • -
      • Upwork: A platform that connects freelancers with clients who need various services, including beauty writing and editing.
      • -

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Gratis Suara Lovebird Ngekek Panjang Mp3 Kualitas Terbaik untuk Masteran dan Simulasi Lomba.md b/spaces/congsaPfin/Manga-OCR/logs/Download Gratis Suara Lovebird Ngekek Panjang Mp3 Kualitas Terbaik untuk Masteran dan Simulasi Lomba.md deleted file mode 100644 index 349019015021321e80bdda4c673c8698ba642bc7..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Gratis Suara Lovebird Ngekek Panjang Mp3 Kualitas Terbaik untuk Masteran dan Simulasi Lomba.md +++ /dev/null @@ -1,167 +0,0 @@ - -

      Download Lovebird Ngekek Panjang: A Guide for Lovebird Lovers

      -

      If you are a fan of lovebirds, you might have heard of the term "lovebird ngekek panjang". This is a special type of lovebird that can produce a long and continuous chirping sound, which is highly sought after by many bird enthusiasts. In this article, we will explain what lovebird ngekek panjang is, how to train your lovebird to ngekek panjang, and how to download lovebird ngekek panjang MP3 for your masteran.

      -

      download lovebird ngekek panjang


      Download File ——— https://urlca.com/2uO932



      -

      What is Lovebird Ngekek Panjang?

      -

      Lovebird ngekek panjang is a term that refers to a lovebird that can chirp or sing for a long duration, usually more than one minute. The word "ngekek" means "chirping" in Indonesian, while "panjang" means "long". This type of lovebird is very popular among bird lovers because of its beautiful and melodious voice, as well as its attractive appearance.

      -

      The meaning and benefits of lovebird ngekek panjang

      -

Lovebird ngekek panjang is not only a pleasure to listen to, but also has some meanings and benefits for the owner and the bird itself. According to some sources, lovebird ngekek panjang can indicate the following things:

      -
        -
• The lovebird is healthy, happy, and comfortable in its environment.
• The lovebird is confident, dominant, and ready to compete with other birds.
• The lovebird is in a good mood, especially when it is mating season or when it sees its mate.
• The lovebird is expressing its emotions, such as joy, excitement, curiosity, or anger.
      -

      Some of the benefits of having a lovebird ngekek panjang are:

      -
        -
• It can be a source of entertainment, relaxation, and stress relief for the owner.
• It can be a potential winner in bird competitions or contests, which can bring fame and fortune for the owner.
• It can be a valuable asset for breeding or selling purposes, as it can produce high-quality offspring or fetch a high price in the market.
• It can be a loyal companion and friend for the owner, as it can bond well with humans and other birds.
      -

      The characteristics and types of lovebird ngekek panjang

      -

Lovebird ngekek panjang has some distinctive characteristics that set it apart from other types of lovebirds. Some of these characteristics are:

      -

      download suara lovebird ngekek panjang mp3 untuk masteran
      -download suara lovebird kusumo ngekek panjang
      -download suara lovebird juara konslet
      -download suara lovebird ngetik versi 2
      -download suara lovebird ramai dan simulasi lomba
      -download suara lovebird konslet durasi panjang
      -download suara lovebird fighter dan konslet
      -download suara lovebird zupiter
      -download suara lovebird melinda
      -download suara lovebird salju
      -download suara lovebird putri mentaya
      -download suara lovebird fretty
      -download suara lovebird dora
      -download suara lovebird tunggal
      -download suara lovebird ngekek tanpa putus
      -download mp3 lovebird ngekek panjang gratis
      -download mp3 lovebird kusumo ngekek panjang gratis
      -download mp3 lovebird juara konslet gratis
      -download mp3 lovebird ngetik versi 2 gratis
      -download mp3 lovebird ramai dan simulasi lomba gratis
      -download mp3 lovebird konslet durasi panjang gratis
      -download mp3 lovebird fighter dan konslet gratis
      -download mp3 lovebird zupiter gratis
      -download mp3 lovebird melinda gratis
      -download mp3 lovebird salju gratis
      -download mp3 lovebird putri mentaya gratis
      -download mp3 lovebird fretty gratis
      -download mp3 lovebird dora gratis
      -download mp3 lovebird tunggal gratis
      -download mp3 lovebird ngekek tanpa putus gratis
      -cara memaster burung lovebird dengan suara ngekek panjang
      -cara merawat burung lovebird kusumo agar ngekek panjang
      -cara jebol birahi burung lovebird jantan agar ngekek konslet
      -cara membuat burung lovebird ngekek mangap dengan durasi panjang
      -cara memilih burung lovebird fighter dan konslet yang bagus
      -ciri-ciri burung lovebird fighter bermental petarung yang kuat
      -tips merawat burung lovebird agar sehat dan rajin bunyi
      -tips melatih burung lovebird agar siap ikut lomba dan menang juara
      -tips mengatasi burung lovebird yang macet bunyi atau stres
      -tips menangani burung lovebird yang over birahi atau mbagong

      -
        -
• It has a loud, clear, and varied voice that can range from soft to sharp tones.
• It has a long and consistent chirping pattern that can last for more than one minute without stopping or pausing.
• It has a strong and stable stamina that allows it to chirp for a long time without getting tired or losing its voice.
• It has a high level of intelligence and adaptability that enables it to learn new sounds and imitate other birds or noises.
      -

      There are two main types of lovebird ngekek panjang that are commonly recognized by bird lovers. They are:

      -
        -
• Lovebird fighter: This type of lovebird ngekek panjang is known for its aggressive and competitive nature. It can chirp loudly and continuously to challenge or intimidate other birds. It can also fight with other birds if provoked or threatened. This type of lovebird is suitable for owners who like to participate in bird competitions or contests, as it can show its dominance and superiority over other birds.
• Lovebird gacor: This type of lovebird ngekek panjang is known for its cheerful and lively nature. It can chirp happily and continuously to express its emotions or communicate with other birds. It can also interact well with humans and other birds, as it is friendly and sociable. This type of lovebird is suitable for owners who like to keep birds as pets or companions, as it can provide entertainment and affection for the owner.
      -

      How to Train Lovebird to Ngekek Panjang?

      -

      Training lovebird to ngekek panjang is not an easy task, as it requires patience, dedication, and consistency from the owner. However, it is not impossible, as there are some methods and tips that can help the owner to achieve this goal. Here are some of them:

      -

      The importance of proper diet and nutrition

      -

One of the most important factors that affect the quality and quantity of lovebird ngekek panjang is the diet and nutrition of the bird. A healthy and balanced diet can provide the bird with enough energy, stamina, and nutrients to support its vocal performance. Some of the foods that are recommended for lovebird ngekek panjang are:

      -
        -
• Fresh fruits and vegetables, such as apples, bananas, carrots, broccoli, spinach, etc.
• Seeds and grains, such as millet, sunflower seeds, oats, wheat, etc.
• Pellets or formulated feeds that are specially designed for lovebirds.
• Protein sources, such as eggs, cheese, yogurt, insects, etc.
• Water that is clean and fresh.
      -

      Some of the foods that should be avoided or limited for lovebird ngekek panjang are:

      -
        -
• Salty, spicy, or sugary foods, such as chips, crackers, candy, etc.
• Chocolate, caffeine, alcohol, or other substances that can be toxic or harmful for birds.
• Avocado, onion, garlic, or other foods that can cause digestive problems or allergic reactions for birds.
      -

      The best methods and tips for lovebird training

      -

Besides providing a proper diet and nutrition, the owner also needs to use some effective methods and tips to train the lovebird to ngekek panjang. Some of these methods and tips are:

      -
        -
• Start the training early. The best time to train a lovebird to ngekek panjang is when it is still young, preferably between 3 and 6 months old. This is because young birds are more receptive and adaptable to learning new sounds and skills than older birds.
• Create a comfortable and stimulating environment. The owner should make sure that the cage or enclosure of the lovebird is spacious, clean, and well-ventilated. The owner should also provide some toys, perches, swings, or other accessories that can keep the bird entertained and active. The owner should also avoid any sources of stress or disturbance that can affect the mood or health of the bird.
• Use positive reinforcement. The owner should reward the bird with praise, treats, or attention whenever it chirps or sings well. The owner should also avoid punishing or scolding the bird whenever it makes mistakes or stops chirping. The owner should use a gentle and friendly tone when communicating with the bird.
• Use a masteran or a mentor bird. A masteran is a recording of a lovebird ngekek panjang that can be played to the bird as a reference or a model. A mentor bird is another lovebird ngekek panjang that can be placed near the bird as a companion or a competitor. The owner should choose a masteran or a mentor bird that has a similar voice type and quality as the desired outcome.
• Be consistent and patient. The owner should train the bird regularly and frequently, preferably every day for at least 15 minutes per session. The owner should also be patient and realistic with the progress and results of the training. The owner should not expect immediate or miraculous results from the training.
      -

      The recommended multivitamins and supplements for lovebird

      -

In addition to providing a proper diet and nutrition and using some effective methods and tips for training, the owner can also give some multivitamins and supplements to the lovebird to enhance its vocal performance and overall health. Some of the multivitamins and supplements that are recommended for lovebird ngekek panjang are:

        -
• Vitamin A: This vitamin is essential for the health of the eyes, skin, feathers, and respiratory system of the bird. It can also prevent infections and diseases that can affect the voice of the bird. Some sources of vitamin A are carrots, sweet potatoes, spinach, kale, etc.
• Vitamin B: This vitamin is important for the metabolism, digestion, and nervous system of the bird. It can also improve the mood, energy, and appetite of the bird. Some sources of vitamin B are eggs, cheese, yogurt, beans, nuts, etc.
• Vitamin C: This vitamin is vital for the immune system, blood circulation, and wound healing of the bird. It can also protect the bird from stress and oxidative damage that can impair its voice. Some sources of vitamin C are oranges, strawberries, kiwis, peppers, etc.
• Vitamin D: This vitamin is crucial for the bone, muscle, and egg development of the bird. It can also regulate the calcium and phosphorus levels in the body of the bird. Some sources of vitamin D are sunlight, fish oil, cod liver oil, etc.
• Vitamin E: This vitamin is beneficial for the reproductive system, skin, and feathers of the bird. It can also prevent inflammation and infection that can affect the voice of the bird. Some sources of vitamin E are wheat germ oil, sunflower seeds, almonds, etc.
• Vitamin K: This vitamin is necessary for the blood clotting and bone formation of the bird. It can also prevent bleeding and bruising that can affect the voice of the bird. Some sources of vitamin K are green leafy vegetables, broccoli, cabbage, etc.
• Calcium: This mineral is essential for the bone, muscle, and egg development of the bird. It can also support the nerve and muscle function of the bird. Some sources of calcium are cuttlebone, eggshell, oyster shell, etc.
• Iron: This mineral is important for the red blood cell production and oxygen transport of the bird. It can also prevent anemia and weakness that can affect the voice of the bird. Some sources of iron are liver, meat, spinach, raisins, etc.
• Zinc: This mineral is beneficial for the immune system, wound healing, and feather growth of the bird. It can also prevent infection and disease that can affect the voice of the bird. Some sources of zinc are pumpkin seeds, sesame seeds, oats, etc.
      -

      The owner should consult a veterinarian or an expert before giving any multivitamins or supplements to the lovebird to ensure that they are safe and suitable for the bird.

      -

      How to Download Lovebird Ngekek Panjang MP3?

      -

      Another way to train lovebird to ngekek panjang is to use MP3 as a masteran or a reference. MP3 is a digital audio format that can store and play sound files on various devices. By using MP3 as a masteran, the owner can expose the lovebird to various sounds and voices of lovebird ngekek panjang that can stimulate its learning and imitation abilities.

      -

      The advantages and disadvantages of using MP3 as a masteran

      -

      Using MP3 as a masteran has some advantages and disadvantages that the owner should be aware of before deciding to use it. Some of these advantages and disadvantages are:

      - - - - - - - - - - - - - - - - - -
| Advantages | Disadvantages |
| --- | --- |
| It is easy and convenient to use. The owner can download or stream MP3 files from various sources and websites on their devices such as smartphones, tablets, laptops, etc. | It can be boring and monotonous for the bird. The owner should vary or change the MP3 files regularly to avoid making the bird lose interest or motivation to chirp. |
| It is cheap and affordable. The owner can download or stream MP3 files for free or for a low cost from various sources and websites on the internet. | It can be unreliable and inconsistent. The owner should check the quality and accuracy of the MP3 files before using them, as some of them may be corrupted, distorted, or inaccurate. |
| It is diverse and varied. The owner can choose from a wide range of MP3 files that feature different types, styles, and qualities of lovebird ngekek panjang. | It can be confusing and overwhelming for the bird. The owner should limit or filter the MP3 files that they use, as some of them may be too complex, too simple, or too different from the desired outcome. |
      -

      The owner should weigh the pros and cons of using MP3 as a masteran and decide whether it is suitable and effective for their lovebird or not.

      -

      The sources and websites to download lovebird ngekek panjang MP3

      -

      If the owner decides to use MP3 as a masteran, they need to find some reliable and reputable sources and websites that offer lovebird ngekek panjang MP3 files for download or streaming. Some of these sources and websites are:

      -
        -
• YouTube: This is a popular video-sharing platform that also allows users to download or stream audio files from various videos. The owner can search for lovebird ngekek panjang videos on YouTube and use a YouTube to MP3 converter tool to download or stream the audio files.
• SoundCloud: This is a popular audio-sharing platform that also allows users to download or stream audio files from various artists and creators. The owner can search for lovebird ngekek panjang audio files on SoundCloud and use a SoundCloud to MP3 converter tool to download or stream the audio files.
• Lovebird Ngekek Panjang: This is a dedicated website that provides lovebird ngekek panjang MP3 files for download or streaming. The owner can browse through various categories and genres of lovebird ngekek panjang MP3 files on this website and download or stream them directly.
• Lovebird Masteran: This is another dedicated website that provides lovebird ngekek panjang MP3 files for download or streaming. The owner can also browse through various categories and genres of lovebird ngekek panjang MP3 files on this website and download or stream them directly.
      -

      The steps and instructions to download lovebird ngekek panjang MP3

      -

The steps and instructions to download lovebird ngekek panjang MP3 may vary depending on the source or website that the owner uses. However, here are some general steps that the owner can follow, with a short script sketch after the list that shows one way to automate the download step:

      -
        -
1. Choose a source or website that offers lovebird ngekek panjang MP3 files for download or streaming.
2. Search for the desired type, style, or quality of lovebird ngekek panjang MP3 file on the source or website.
3. Select the desired lovebird ngekek panjang MP3 file from the search results.
4. Click on the download or stream button on the source or website.
5. Wait for the download or stream process to complete.
6. Save the downloaded lovebird ngekek panjang MP3 file on your device or play the streamed lovebird ngekek panjang MP3 file on your device.
7. Play the lovebird ngekek panjang MP3 file for your lovebird as a masteran or a reference.
      -
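For owners who are comfortable with a computer, the download steps above can also be scripted instead of done by hand in a browser. The sketch below is one possible approach using the yt-dlp Python package to save the audio track of a chosen video page as an MP3 file; the package, the FFmpeg requirement, and the placeholder URL are assumptions for illustration only, and you should only download recordings you are permitted to use.

```python
# Rough sketch: saving the audio of a chosen masteran video as an MP3 with yt-dlp.
# Assumes `pip install yt-dlp` and a working FFmpeg install; the URL below is a placeholder.
from yt_dlp import YoutubeDL

options = {
    "format": "bestaudio/best",          # pick the best available audio stream
    "outtmpl": "%(title)s.%(ext)s",      # name the output file after the video title
    "postprocessors": [{
        "key": "FFmpegExtractAudio",     # convert the downloaded stream to MP3
        "preferredcodec": "mp3",
    }],
}

if __name__ == "__main__":
    url = "https://www.youtube.com/watch?v=PLACEHOLDER"  # replace with the video you selected in step 3
    with YoutubeDL(options) as ydl:
        ydl.download([url])
```

The resulting MP3 can then be copied to a phone or a small speaker and played near the cage, exactly as described in step 7.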

      Conclusion

      -

      Lovebird ngekek panjang is a special type of lovebird that can produce a long and continuous chirping sound that is highly sought after by many bird enthusiasts. It has some meanings and benefits for the owner and the bird itself, as well as some distinctive characteristics and types that set it apart from other types of lovebirds. To train lovebird to ngekek panjang, the owner needs to provide a proper diet and nutrition, use some effective methods and tips, and give some multivitamins and supplements to the bird. The owner can also use MP3 as a masteran or a reference, which has some advantages and disadvantages. The owner can download or stream lovebird ngekek panjang MP3 files from various sources and websites on the internet, following some general steps and instructions. By following this guide, the owner can hopefully achieve their goal of having a lovebird ngekek panjang that can chirp beautifully and melodiously.

      FAQs

      -

      Here are some frequently asked questions and answers about lovebird ngekek panjang:

      -
        -
1. Q: How long does it take to train a lovebird to ngekek panjang?
   A: There is no definite answer to this question, as it depends on various factors such as the age, personality, health, and genetics of the bird, as well as the quality, frequency, and consistency of the training. However, some sources suggest that it can take anywhere from a few weeks to a few months to train a lovebird to ngekek panjang.
2. Q: How can I tell if my lovebird is ngekek panjang or not?
   A: The easiest way to tell if your lovebird is ngekek panjang or not is to listen to its chirping or singing pattern. A lovebird ngekek panjang can chirp or sing for more than one minute without stopping or pausing, while a normal lovebird can only chirp or sing for a few seconds or less. You can also use a stopwatch or a timer to measure the duration of your lovebird's chirping or singing.
3. Q: What are the best times and places to play the lovebird ngekek panjang MP3 for my lovebird?
   A: The best times and places to play the lovebird ngekek panjang MP3 for your lovebird are when and where your lovebird is most active, alert, and responsive. Some examples are in the morning or evening, when the sun is rising or setting, or in a quiet and comfortable room, where there are no distractions or noises. You should also avoid playing the lovebird ngekek panjang MP3 for your lovebird when it is sleeping, resting, eating, or molting, as it may disturb or annoy your lovebird.
4. Q: How can I prevent my lovebird from getting bored or tired of the lovebird ngekek panjang MP3?
   A: You can prevent your lovebird from getting bored or tired of the lovebird ngekek panjang MP3 by varying or changing the MP3 files that you use. You can choose different types, styles, and qualities of lovebird ngekek panjang MP3 files that can challenge or stimulate your lovebird's learning and imitation abilities. You can also limit or adjust the volume and duration of the lovebird ngekek panjang MP3 that you play for your lovebird, as too loud or too long may cause stress or fatigue for your lovebird.
5. Q: Can I use other types of birds or sounds as a masteran or a reference for my lovebird?
   A: Yes, you can use other types of birds or sounds as a masteran or a reference for your lovebird, as long as they are suitable and compatible with your lovebird's voice type and quality. Some examples of other types of birds or sounds that you can use are parrots, cockatiels, canaries, nightingales, whistles, bells, etc. However, you should be careful not to use birds or sounds that are too different or too complex from your desired outcome, as they may confuse or overwhelm your lovebird.

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Windows 5 The Ultimate Guide to Installing and Using the Latest Version of Windows.md b/spaces/congsaPfin/Manga-OCR/logs/Download Windows 5 The Ultimate Guide to Installing and Using the Latest Version of Windows.md deleted file mode 100644 index 4aff3cf9357815b41ada1ca4b033a4eba6f2840e..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Windows 5 The Ultimate Guide to Installing and Using the Latest Version of Windows.md +++ /dev/null @@ -1,94 +0,0 @@ - -

      How to Download Windows 5: A Guide for Windows Users

      -

      Windows 5 is a fictional version of Windows that was never released by Microsoft. However, some people may be curious about how it would look and work if it existed. In this article, we will show you how to download and install Windows 5 on your computer using a virtual machine or an emulator software. We will also explain what features and functionalities Windows 5 has to offer and what are the pros and cons of downloading it.

      -

      download windows 5


      Download > https://urlca.com/2uO5F6



      -

      What is Windows 5 and why you might want to download it

      -

Windows 5 is a fictional version of Windows that was never released by Microsoft. It is based on "Longhorn", the development codename for what eventually became Windows Vista. Windows 5 was supposed to be the successor of Windows XP, but it was canceled due to development issues and security concerns. Some of the features and concepts that were planned for Windows 5 were later incorporated into Windows Vista, Windows 7, and Windows 10.

      -

      Windows 5 has some features that are not available in other versions of Windows, such as:

      -

      A new user interface that removes the start button and uses tiles instead

      -

      Windows 5 has a completely redesigned user interface that is more modern and minimalist than previous versions. It removes the traditional start button and menu, and replaces them with tiles that can display live information and shortcuts. The tiles can be customized and arranged according to your preferences. You can also access a sidebar that contains widgets, such as a clock, a calendar, a weather app, and a media player.

      -

      download windows 5 iso
      -download windows 5 free
      -download windows 5 pro
      -download windows 5 insider preview
      -download windows 5 update
      -download windows 5 media creation tool
      -download windows 5 full version
      -download windows 5 for pc
      -download windows 5 for mac
      -download windows 5 for linux
      -download windows 5 from microsoft
      -download windows 5 online
      -download windows 5 usb
      -download windows 5 dvd
      -download windows 5 bootable usb
      -download windows 5 activator
      -download windows 5 crack
      -download windows 5 product key
      -download windows 5 iso file
      -download windows 5 iso direct link
      -download windows 5 iso torrent
      -download windows 5 iso google drive
      -download windows 5 iso mega
      -download windows 5 iso reddit
      -download windows 5 iso official
      -download windows 5 free trial
      -download windows 5 free full version
      -download windows 5 free upgrade
      -download windows 5 free without product key
      -download windows 5 free with activation key
      -download windows 5 pro iso
      -download windows 5 pro free
      -download windows 5 pro full version
      -download windows 5 pro product key
      -download windows 5 pro activator
      -download windows 5 pro crack
      -download windows 5 pro torrent
      -download windows 5 pro google drive
      -download windows 5 pro mega
      -download windows 5 pro reddit
      -download windows 5 insider preview iso
      -download windows 5 insider preview build
      -download windows 5 insider preview dev channel
      -download windows 5 insider preview beta channel
      -download windows 5 insider preview release date
      -download windows 5 insider preview activation key
      -download windows 5 insider preview feedback hub
      -download windows 5 insider preview blog post[^1^]

      -

      A virtual assistant called Cortana that can answer questions and perform tasks

      -

      Windows 5 introduces a virtual assistant named Cortana, which is similar to Siri or Alexa. Cortana can help you with various tasks, such as searching the web, setting reminders, making appointments, sending emails, playing music, and more. You can interact with Cortana using voice commands or text input. Cortana can also learn from your behavior and preferences, and provide personalized suggestions and recommendations.

      -

      A built-in web browser called Edge that is faster and more secure than Internet Explorer

      -

      Windows 5 comes with a new web browser called Edge, which is designed to be faster, more secure, and more compatible than Internet Explorer. Edge supports modern web standards, such as HTML5, CSS3, and JavaScript. Edge also has features like tabbed browsing, reading mode, dark mode, extensions, and a built-in PDF viewer. Edge can also integrate with Cortana, allowing you to use voice commands or ask questions while browsing the web.

      -

      A compatibility mode that allows you to run applications from older versions of Windows

      -

      Windows 5 has a compatibility mode that lets you run applications that were designed for older versions of Windows, such as Windows XP or Windows 7. This can be useful if you have software that is not compatible with newer versions of Windows or if you want to use nostalgic or retro games or programs. You can enable the compatibility mode by right-clicking on the application icon and selecting "Properties". Then, you can choose the version of Windows that you want to emulate and adjust other settings as needed.

      -

      How to download Windows 5 from the internet

      -

      There are some websites that claim to offer Windows 5 for download, but they are not official or trustworthy sources. You should avoid downloading Windows 5 from these websites because they may contain viruses, malware, or spyware that can harm your computer or steal your personal information. Some of these websites may also ask you to pay money or provide your credit card details in order to download Windows 5, which is a scam.

      -

      The only safe way to download Windows 5 is to use a virtual machine or an emulator that can run Windows 5 on your existing operating system. A virtual machine or an emulator is a software program that creates a separate environment on your computer where you can run another operating system without affecting your main one. This way, you can download and install Windows 5 without risking your computer's security or performance.

      -

      How to install Windows 5 on your computer using a virtual machine or an emulator

      -

      To install Windows 5 on your computer using a virtual machine or an emulator, you need to follow these steps:

      -

      Download and install a virtual machine or an emulator software, such as VirtualBox, VMware, or QEMU

      -

A virtual machine or emulator is a program that allows you to run another operating system on your computer. There are many options available online, but some of the most popular ones are VirtualBox, VMware, and QEMU. You can download and install any of these programs from their official websites. Make sure you choose the version that is compatible with your current operating system.

      -

      Download and extract the Windows 5 ISO file from a reliable source, such as [Windows Never Released Wiki]

      -

      An ISO file is an image file that contains all the data and files of an operating system. You need an ISO file of Windows 5 in order to install it on your virtual machine or emulator. You can download the Windows 5 ISO file from a reliable source, such as [Windows Never Released Wiki], which is a website that collects and archives information and files of unreleased or canceled versions of Windows. You can find the Windows 5 ISO file under the "Longhorn" section of the website. After downloading the ISO file, you need to extract it using a program like WinRAR or 7-Zip.
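Because unofficial ISO files are a common carrier for malware, it is worth checking the file you downloaded before you boot it. The short sketch below uses Python's standard hashlib module to print a SHA-256 checksum that you can compare against a checksum published by your download source; the file name is a placeholder.

```python
# Minimal sketch: print the SHA-256 checksum of a downloaded ISO so it can be
# compared with the value published by the download source. File name is a placeholder.
import hashlib

def sha256_of(path: str, chunk_size: int = 1024 * 1024) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):  # read the file in 1 MB chunks
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    print(sha256_of("Windows5.iso"))
```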

      -

      Create a new virtual machine or emulator instance and configure its settings, such as memory, disk space, and network

      -

      After installing the virtual machine or emulator software and extracting the Windows 5 ISO file, you need to create a new virtual machine or emulator instance where you will install Windows 5. You can do this by opening the software and clicking on the "New" or "Create" button. You will then need to name your virtual machine or emulator instance and choose the type and version of the operating system that you want to run. In this case, you should choose "Windows" and "Other Windows". You will also need to configure some settings, such as the amount of memory, disk space, and network that you want to allocate for your virtual machine or emulator instance. You can adjust these settings according to your preferences and your computer's specifications.

      -

      Mount the Windows 5 ISO file as a virtual CD-ROM drive and start the installation process

      -

      After creating and configuring your virtual machine or emulator instance, you need to mount the Windows 5 ISO file as a virtual CD-ROM drive. This will allow you to access the files and data of Windows 5 from your virtual machine or emulator instance. You can do this by clicking on the "Settings" or "Options" button of your virtual machine or emulator instance and then selecting the "Storage" or "CD/DVD" option. You will then need to browse and locate the Windows 5 ISO file that you extracted earlier and select it as the source of your virtual CD-ROM drive. You can then click on the "Start" or "Run" button of your virtual machine or emulator instance and wait for it to boot up. You will then see the Windows 5 installation screen on your virtual machine or emulator window.

      -

      Follow the instructions on the screen and complete the installation of Windows 5 on your virtual machine or emulator

      -

      The final step is to follow the instructions on the screen and complete the installation of Windows 5 on your virtual machine or emulator. The installation process is similar to other versions of Windows, where you need to accept the license agreement, choose the installation type, select the partition, format the drive, enter your product key, create a user account, and customize your settings. The installation process may take some time depending on your computer's speed and performance. After the installation is finished, you will be able to use Windows 5 on your virtual machine or emulator.

      -

      Conclusion and FAQs

      -

      In conclusion, Windows 5 is a fictional version of Windows that was never released by Microsoft, but you can still download and install it on your computer using a virtual machine or an emulator software. Windows 5 has some features that are not available in other versions of Windows, such as a new user interface, a virtual assistant, a built-in web browser, and a compatibility mode. However, downloading Windows 5 also has some risks, such as exposing your computer to viruses, malware, or spyware from untrusted sources, or causing compatibility issues or performance problems with your existing operating system or applications. Therefore, you should be careful and cautious when downloading Windows 5 from the internet.

      -

      If you have any questions about downloading Windows 5, here are some FAQs that may help you:

      -

      Q1: Is Windows 5 real?

      -

      A1: No, Windows 5 is not real. It is a fictional version of Windows that was never released by Microsoft.

      -

      Q2: What are the benefits of downloading Windows 5?

      -

      A2: Downloading Windows 5 can be fun and interesting for Windows enthusiasts who want to explore the features and functionalities of a hypothetical version of Windows.

      -

      Q3: What are the risks of downloading Windows 5?

      -

      A3: Downloading Windows 5 from untrusted sources can expose your computer to viruses, malware, or spyware that can damage your system or compromise your privacy. Downloading Windows 5 from trusted sources can still cause compatibility issues or performance problems with your existing operating system or applications.

      -

      Q4: How can I download Windows 5 safely?

      -

A4: The safest way to download Windows 5 is to use a virtual machine or emulator that can run Windows 5 on your existing operating system without affecting your main one.

      -

      Q5: How can I uninstall Windows 5 from my computer?

      -

      A5: You can uninstall Windows 5 from your computer by deleting the virtual machine or emulator instance that contains it. You can also delete the Windows 5 ISO file from your hard drive.

      -

      I hope this article has helped you learn how to download Windows 5 and what to expect from it. If you have any feedback or suggestions, please feel free to leave a comment below. Thank you for reading and have a great day!

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download and Play Diablo Immortal APK on Your iPhone A Comprehensive Review of the Mobile Game.md b/spaces/congsaPfin/Manga-OCR/logs/Download and Play Diablo Immortal APK on Your iPhone A Comprehensive Review of the Mobile Game.md deleted file mode 100644 index 29644db2ab046755852e106549834b443cdee777..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download and Play Diablo Immortal APK on Your iPhone A Comprehensive Review of the Mobile Game.md +++ /dev/null @@ -1,93 +0,0 @@ - -

      Diablo Immortal APK iPhone: How to Play the Latest ARPG on Your Mobile Device

      -

      Diablo Immortal is a new mobile game from Blizzard Entertainment that brings the iconic action role-playing game (ARPG) series to iOS and Android devices. Set between the events of Diablo II and Diablo III, Diablo Immortal lets you explore the dark world of Sanctuary, fight against hordes of demons, collect epic loot, and join forces with other players online. Whether you are a fan of the Diablo franchise or a newcomer to the genre, Diablo Immortal offers a thrilling and immersive experience that you can enjoy anytime, anywhere.

      -

      diablo immortal apk iphone


      Download Zip ››››› https://urlca.com/2uOdEU



      -

But how can you play Diablo Immortal on your iPhone? In this article, we will show you how to download and install the game, what the main features and gameplay mechanics are, and how to optimize your performance and experience. Let's get started!

      -

      How to Download and Install Diablo Immortal on iPhone

      -

      Diablo Immortal is available for free on the App Store for iPhone and iPad. To download and install the game, you will need to follow these steps:

      -
        -
1. Open the App Store on your device and search for "Diablo Immortal".
2. Tap on the "Get" button to start downloading the game. You may need to enter your Apple ID password or use Touch ID or Face ID to confirm.
3. Wait for the download to finish and then tap on "Open" to launch the game.
4. Create or log in to your Battle.net account to access the game. You can also link your Facebook or Apple account for easier login.
5. Choose your region and server, agree to the terms of service and privacy policy, and then tap on "Play" to start the game.
      -

      Congratulations! You have successfully installed Diablo Immortal on your iPhone. Now you can enjoy the game and explore its features.

      -

      What are the Main Features and Gameplay of Diablo Immortal

      -

      Diablo Immortal is a massively multiplayer online action RPG (MMOARPG) that offers a rich and dynamic gameplay experience. Here are some of the main features and gameplay elements that you can expect from the game:

      -
        -
• Six iconic classes: Barbarian, Crusader, Demon Hunter, Monk, Necromancer, and Wizard. Each class has its own unique skills, abilities, and playstyle.
• A vast open world: From the haunted forests of Wortham to the ancient ruins of Shassar, you can explore various zones with different environments, enemies, quests, and secrets.
• An engaging story: You will uncover a new chapter in the Diablo saga as you hunt for the fragments of the corrupted Worldstone and prevent the return of the Lord of Terror.
• Endless dungeons: You can challenge yourself in various dungeons that offer different difficulties, objectives, rewards, and enemies. You can also enter rifts that randomly generate dungeons with increased challenge and loot.
• Cooperative gameplay: You can team up with other players to take on tougher enemies, complete quests, or join events. You can also chat with other players using voice or text communication.
• PvP modes: You can test your skills against other players in various PvP modes, such as Battlegrounds or Cycle of Strife. You can also join a faction called The Shadows that allows you to infiltrate other players' games and sabotage their progress.
• Gear progression: You can collect various items that enhance your power, such as weapons, armor, gems, runes, charms, legendary items, and set items. You can also level up your items and customize them with different effects.
      -

      How to Optimize Your Performance and Experience in Diablo Immortal

      -

      Diablo Immortal is a graphically intensive game that requires a stable internet connection and a compatible device to run smoothly. To optimize your performance and experience in the game, you can try the following tips:

      -
        -
• Adjust the graphics settings: You can lower the graphics quality, resolution, or frame rate in the game settings to improve your device's performance and battery life. You can also enable or disable other options such as shadows, anti-aliasing, or damage numbers.
• Use a Wi-Fi connection: You can avoid lag, disconnects, or data charges by using a reliable Wi-Fi connection instead of cellular data. You can also use a VPN service to reduce latency or bypass geo-restrictions.
• Free up storage space: You can delete unnecessary files, apps, or cache from your device to free up storage space and prevent crashes or errors. You can also use a cloud service to backup your data and save space.
• Update your device and app: You can check for updates for your device's operating system and the game app to ensure that you have the latest features, bug fixes, and security patches.
• Use headphones and a controller: You can enhance your audio and gameplay experience by using headphones and a controller. Headphones can help you hear the game's sound effects and music better, while a controller can give you more precise and comfortable control over your character.
      -

      By following these tips, you can enjoy Diablo Immortal without any issues or interruptions.

      -

      Conclusion: Summary and Recommendations

      -

      Diablo Immortal is a new mobile game that brings the legendary ARPG series to iOS and Android devices. You can download and install the game for free from the App Store or Google Play Store, and play it on your iPhone or iPad. You can explore the dark world of Sanctuary, fight against demons, collect loot, and join other players online. You can also optimize your performance and experience by adjusting the graphics settings, using a Wi-Fi connection, freeing up storage space, updating your device and app, and using headphones and a controller.

      -

      If you are looking for a fun and immersive mobile game that offers a rich and dynamic gameplay experience, Diablo Immortal is a great choice. You can enjoy the game's features and gameplay mechanics, as well as its engaging story and endless dungeons. You can also test your skills against other players in various PvP modes, or join a faction called The Shadows that allows you to infiltrate other players' games and sabotage their progress.

      -

      We hope that this article has helped you learn more about Diablo Immortal and how to play it on your iPhone. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

      -

      How to download diablo immortal apk for iphone
      -Diablo immortal ios release date and pre-download
      -Diablo immortal iphone controller support and compatibility
      -Diablo immortal app store review and rating
      -Diablo immortal mmoarpg gameplay and features for iphone
      -Diablo immortal iphone system requirements and storage space
      -Diablo immortal cross-platform and cross-progression on iphone
      -Diablo immortal story and lore between diablo 2 and 3
      -Diablo immortal character classes and customization for iphone
      -Diablo immortal open beta and launch date for iphone
      -Diablo immortal iphone tips and tricks for beginners
      -Diablo immortal best iphone controllers and gamepads
      -Diablo immortal iphone graphics and performance optimization
      -Diablo immortal iphone vs android vs pc comparison
      -Diablo immortal iphone online multiplayer and co-op mode
      -Diablo immortal iphone achievements and rewards system
      -Diablo immortal iphone in-app purchases and microtransactions
      -Diablo immortal iphone cheats and hacks to avoid
      -Diablo immortal iphone bugs and issues report
      -Diablo immortal iphone forums and communities to join
      -Diablo immortal iphone wallpapers and fan art
      -Diablo immortal iphone update and patch notes
      -Diablo immortal iphone events and challenges calendar
      -Diablo immortal iphone guides and walkthroughs
      -Diablo immortal iphone livestreams and videos to watch
      -Diablo immortal iphone podcasts and interviews to listen to
      -Diablo immortal iphone news and rumors to follow
      -Diablo immortal iphone memes and jokes to laugh at
      -Diablo immortal iphone skins and outfits to collect
      -Diablo immortal iphone weapons and items to equip
      -Diablo immortal iphone skills and abilities to master
      -Diablo immortal iphone dungeons and raids to explore
      -Diablo immortal iphone bosses and enemies to defeat
      -Diablo immortal iphone quests and missions to complete
      -Diablo immortal iphone factions and alliances to join or fight against
      -Diablo immortal iphone pvp mode and ranking system
      -Diablo immortal iphone leaderboards and statistics to check
      -Diablo immortal iphone feedback and suggestions to share
      -Diablo immortal iphone support and customer service contact
      -Diablo immortal iphone faq and common questions answered

      -

      FAQs: Five Common Questions and Answers about Diablo Immortal

      -

      Q: When will Diablo Immortal be released?

      -

      A: Diablo Immortal does not have an official release date yet, but it is expected to launch sometime in 2023. You can pre-register for the game on the App Store or Google Play Store to get notified when it is available.

      -

      Q: How much storage space does Diablo Immortal require?

      -

      A: Diablo Immortal requires about 4 GB of storage space on your device. However, this may vary depending on your device model and operating system.

      -

      Q: Can I play Diablo Immortal offline?

      -

      A: No, Diablo Immortal requires an internet connection to play. You cannot play the game offline or without logging in to your Battle.net account.

      -

      Q: Can I transfer my progress from Diablo III to Diablo Immortal?

      -

      A: No, Diablo Immortal is a separate game from Diablo III and does not share any progress or data with it. You will need to create a new character and start from scratch in Diablo Immortal.

      -

      Q: Can I play Diablo Immortal with friends who have different devices?

      -

      A: Yes, Diablo Immortal supports cross-platform play between iOS and Android devices. You can play with friends who have different devices as long as they are in the same region and server as you.

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Minecraft PE 1.18.10 APK Download - Explore New Biomes and Features with Xbox Live.md b/spaces/congsaPfin/Manga-OCR/logs/Minecraft PE 1.18.10 APK Download - Explore New Biomes and Features with Xbox Live.md deleted file mode 100644 index 3814775d6d1f60a7b7a3c3cb364f2ca560080685..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Minecraft PE 1.18.10 APK Download - Explore New Biomes and Features with Xbox Live.md +++ /dev/null @@ -1,131 +0,0 @@ -
      -

      How to Download 1.18.10 APK for Android Devices

      -

      If you are looking for a way to download and enjoy the latest version of Minecraft PE on your Android device, then you have come to the right place. In this article, we will show you how to download and install 1.18.10 APK, which is the latest update of Minecraft PE that brings many new features and improvements to the game.

      -

      download 1.18.10 apk


      Download ---> https://urlca.com/2uO7k6



      -

      What is 1.18.10 APK?

      -

      1.18.10 APK is the file name of the latest version of Minecraft PE for Android devices. Minecraft PE is a popular sandbox game that allows you to create and explore a virtual world made of blocks, where you can build, craft, mine, fight, and more.

      -

      Features of 1.18.10 APK

      -

      Some of the features that you can enjoy in 1.18.10 APK are:

      -
        -
• New biomes, such as stony peaks, mountain meadows, dripstone caves, and lush caves.
• New mobs, such as frogs, tadpoles, goats, and axolotls.
• New items, such as goat horns, slime balls, and sculk blocks.
• New graphics engine, called Render Dragon, that improves the lighting and shadows in the game.
• Working Xbox Live, that allows you to play online with your friends and access achievements and leaderboards.
      -

      Benefits of 1.18.10 APK

      -

      Some of the benefits that you can get from downloading and installing 1.18.10 APK are:

      -

      Download Minecraft 1.18.10 Bedrock Edition APK
      -How to download Minecraft PE 1.18.10 with Xbox Live
      -Minecraft 1.18.10 APK free download for Android 2022
      -What's new in Minecraft 1.18.10 Caves and Cliffs Part 2
      -Download Minecraft 1.18.10 full version with frogs and tadpoles
      -Minecraft PE 1.18.10 download link Mediafire
      -Minecraft 1.18.10 APK download with goat horn and slime
      -Download Minecraft Bedrock Edition 1.18.10 for Android
      -Minecraft PE 1.18.10 features and changes
      -Download Minecraft 1.18.10 APK with updated textures and biomes
      -How to install Minecraft 1.18.10 APK on your device
      -Minecraft 1.18.10 APK mod menu download
      -Download Minecraft PE 1.18.10 with new experimental features
      -Minecraft 1.18.10 APK download with performance and stability fixes
      -Download Minecraft Bedrock Edition 1.18.10 with iron golem cracking
      -Minecraft PE 1.18.10 APK download with globe banner pattern
      -Download Minecraft 1.18.10 APK with dripstone caves and lush caves
      -Minecraft 1.18.10 APK download with new music discs and redstone signals
      -Download Minecraft Bedrock Edition 1.18.10 with world settings on Realms
      -Minecraft PE 1.18.10 APK download with gameplay timers and notices

      -
        -
• Free access to the latest version of Minecraft PE without paying any fees or subscriptions.
• No need to root your device or use any third-party apps or tools.
• Easy and fast installation process that takes only a few minutes.
• Compatible with most Android devices that have at least 4 GB of RAM and Android 8 or higher.
• No risk of viruses or malware as the file is safe and secure.
      -

      How to Download and Install 1.18.10 APK on Android Devices

      -

      To download and install 1.18.10 APK on your Android device, you need to follow these simple steps:

      -

      Step 1: Enable Unknown Sources

      -

      Before you can install any APK file on your device, you need to enable the option that allows you to install apps from unknown sources.

      -

      To do this, go to Settings > Security > Unknown Sources and toggle it on.

      -

      Step 2: Download 1.18.10 APK File

      -

      Next, you need to download the 1.18.10 APK file from a reliable source.

      -

      You can use one of these links to download the file:

      -
        -
• [Download Minecraft 1.18 Free - Bedrock Edition 1.18 APK](^3^)
• [Download Minecraft PE 1.18 Free - Bedrock Edition 1.18 APK](^2^)
• [Download Minecraft PE 1.18 Free - Bedrock Edition 1.18 APK](^1^)
      -

      Make sure you have enough storage space on your device before downloading the file.

      -

      Step 3: Install 1.18.10 APK File

      -

      After you have downloaded the 1.18.10 APK file, you need to install it on your device.

      -

      To do this, locate the file in your device's file manager and tap on it.

      -

      You will see a pop-up window asking you to confirm the installation. Tap on Install and wait for the process to finish.
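If you have a computer with the Android platform tools installed, an alternative to tapping through the installer is to sideload the APK over USB with adb. The sketch below simply wraps the standard adb commands in Python; the file name is a placeholder, and USB debugging must be enabled on the phone first.

```python
# Rough sketch: sideload the downloaded APK from a computer using adb.
# Assumes adb (Android platform tools) is on PATH and USB debugging is enabled on the device.
import subprocess

apk_path = "minecraft-1.18.10.apk"  # placeholder: path to the file you downloaded in step 2

subprocess.run(["adb", "devices"], check=True)                   # confirm the phone is detected
subprocess.run(["adb", "install", "-r", apk_path], check=True)   # -r replaces an existing installation
```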

      -

      Step 4: Launch and Enjoy 1.18.10 APK

      -

      Once the installation is complete, you can launch the game by tapping on its icon on your home screen or app drawer.

      -

      You will see the Minecraft PE logo and the version number 1.18.10 on the loading screen.

      -

      Now you can enjoy the game with all its new features and improvements.

      -

      Comparison of 1.18.10 APK with Other Versions

      -

      If you are wondering how 1.18.10 APK compares with other versions of Minecraft PE, here is a table that shows some of the differences:

      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Version | Size | Biomes | Mobs | Items | Graphics | Xbox Live |
| --- | --- | --- | --- | --- | --- | --- |
| 1.18.10 APK | 150 MB | New biomes added | New mobs added | New items added | New graphics engine | Working Xbox Live |
| 1.17 APK | 140 MB | No new biomes added | No new mobs added | No new items added | No new graphics engine | Working Xbox Live |
| 1.16 APK | 130 MB | No new biomes added | No new mobs added | No new items added | No new graphics engine | Buggy Xbox Live |
      -

      Advantages of 1.18.10 APK over Other Versions

      -

      As you can see from the table, 1.18.10 APK has many advantages over other versions of Minecraft PE, such as:

      -
        -
• More content and variety in the game, with new biomes, mobs, items, and graphics.
• Better performance and stability, with fewer bugs and glitches.
• More fun and excitement, with more challenges and possibilities.
• More social and interactive, with working Xbox Live and online multiplayer.
      -

      Conclusion

      -

      In conclusion, 1.18.10 APK is the best way to enjoy the latest version of Minecraft PE on your Android device. It is easy to download and install, and it offers many new features and improvements that make the game more fun and engaging. If you are a fan of Minecraft PE, you should not miss this opportunity to download and play 1.18.10 APK.

      -

      FAQs

      -

      Here are some frequently asked questions about 1.18.10 APK:

      -

      Q: Is 1.18.10 APK safe to download and install?

      -

      A: Yes, 1.18.10 APK is safe to download and install, as long as you use one of the links provided in this article. The file is scanned and verified by antivirus software, and it does not contain any viruses or malware.

      -

      Q: Do I need to uninstall the previous version of Minecraft PE before installing 1.18.10 APK?

      -

      A: No, you do not need to uninstall the previous version of Minecraft PE before installing 1.18.10 APK. The new version will overwrite the old one, and you will not lose any data or progress.

      -

      Q: Will 1.18.10 APK work on my device?

      -

      A: 1.18.10 APK will work on most Android devices that have at least 4 GB of RAM and Android 8 or higher. However, some devices may not be compatible or may experience some issues due to different specifications or settings.

      -

      Q: How can I update 1.18.10 APK to the next version?

      -

      A: To update 1.18.10 APK to the next version, you will need to download and install the new APK file when it becomes available. You can check this article for updates or follow the official Minecraft PE website for news and announcements.

      -

      Q: How can I contact the developer of 1.18.10 APK?

      -

      A: The developer of 1.18.10 APK is Mojang Studios, the same developer of the original Minecraft PE game. You can contact them through their website or their social media accounts.

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Tic Tac Toe Vectors Photos and PSD files - Free for Commercial Use.md b/spaces/congsaPfin/Manga-OCR/logs/Tic Tac Toe Vectors Photos and PSD files - Free for Commercial Use.md deleted file mode 100644 index 41d15ddfbce3d136ee39a7889d9ec0bbabc9d021..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Tic Tac Toe Vectors Photos and PSD files - Free for Commercial Use.md +++ /dev/null @@ -1,160 +0,0 @@ -
      -

      Tic Tac Toe Icon Free Download: A Guide for Gamers and Designers

      -

Tic Tac Toe is one of the most popular and classic games that can be played by anyone, anywhere, anytime. Whether you want to play it on paper, a board, or a computer, you will need a tic tac toe icon to represent your moves. In this article, we will show you what tic tac toe is, how to play it, where to find and download tic tac toe icons for free, and how to use and customize them for your own purposes.

      -

      tic tac toe icon free download


      DOWNLOAD ►►► https://urlca.com/2uOgs3



      -

      What is Tic Tac Toe and How to Play It

      -

      Tic Tac Toe is a simple yet fun game that involves two players who take turns marking the spaces in a 3x3 grid with X or O. The player who succeeds in placing three of their marks in a horizontal, vertical, or diagonal row wins the game. It is also known as noughts and crosses or Xs and Os in some countries.

      -

      The History and Origin of Tic Tac Toe

      -

      Tic Tac Toe has a long and rich history that dates back to ancient times. According to some sources, the game was played in ancient Egypt on roofing tiles around 1300 BC. Another source claims that the game originated from the Roman Empire in the first century BC, where it was called terni lapilli (three pebbles at a time) and each player had only three pieces that they had to move around on the grid. The game was also found in ancient India, China, Japan, and other cultures. The name tic tac toe came from the sound of the pencil or chalk marking the board.

      -

      The Rules and Strategies of Tic Tac Toe

      -

      The rules of tic tac toe are simple and easy to follow. Here are the basic steps to play the game:

      -


      -
        -
1. Draw a 3x3 grid on a paper, a board, or a computer screen.
2. Decide who will play as X and who will play as O. The player who plays as X goes first.
3. Take turns placing your mark (X or O) on an empty space in the grid.
4. The first player who gets three of their marks in a row (horizontally, vertically, or diagonally) wins the game.
5. If all nine spaces are filled and no one has three in a row, the game is a draw.
      -
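The steps above translate almost line-for-line into code. Here is a minimal Python sketch (not tied to any particular icon set or library) that stores the board as a flat list of nine cells and checks for a win or a draw:

```python
# Minimal sketch of the rules above: a 3x3 board as a flat list of 9 cells.
WIN_LINES = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
    (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
    (0, 4, 8), (2, 4, 6),              # diagonals
]

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, otherwise None."""
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def is_draw(board):
    """A draw: every cell is filled and nobody has three in a row."""
    return " " not in board and winner(board) is None

board = [" "] * 9
board[4] = "X"   # X opens in the center
board[0] = "O"   # O answers in a corner
print(winner(board), is_draw(board))   # None False - the game is still open
```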

Although tic tac toe can feel like a game of luck, it is actually a solved game: with the right strategy you can always force at least a draw. Here are some tips to improve your play:

      -
        -
- If you go first, place your X in the center or in a corner. This will give you more chances to create a row of three.
- If you go second, place your O in the center if it is empty. This will block your opponent from making a row of three.
- Try to create a fork, which is a position where you have two ways to win on your next move. Your opponent can only block one of them.
- Try to block your opponent from creating a fork or a row of three. Pay attention to their moves and anticipate their next move.
- If there are no moves that can help you win or block your opponent, choose any empty space.
      -
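These tips can be wired into a very simple move picker. The sketch below reuses the `winner()` helper from the previous example and tries the rules in priority order (win, block, center, corner, anything else); it does not look for forks, so it only approximates perfect play:

```python
import random

def choose_move(board, me, opponent):
    """Pick a cell index following the tips above, in priority order."""
    empty = [i for i, cell in enumerate(board) if cell == " "]

    def wins(cell, mark):
        trial = board.copy()
        trial[cell] = mark
        return winner(trial) == mark   # winner() from the earlier sketch

    for cell in empty:                  # 1. take a winning move if there is one
        if wins(cell, me):
            return cell
    for cell in empty:                  # 2. otherwise block the opponent's win
        if wins(cell, opponent):
            return cell
    if 4 in empty:                      # 3. otherwise take the center
        return 4
    corners = [c for c in (0, 2, 6, 8) if c in empty]
    if corners:                         # 4. otherwise take a corner
        return random.choice(corners)
    return random.choice(empty)         # 5. otherwise take anything left
```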

      The Variations and Benefits of Tic Tac Toe

      -

      Tic tac toe can be modified and adapted to make it more challenging or interesting. Here are some variations of tic tac toe that you can try:

      -
        -
- Play tic tac toe on a larger grid, such as 4x4, 5x5, or even 10x10. The rules are the same, but you need to get four or five in a row to win. This will make the game more complex and strategic.
- Play tic tac toe with more than two players. You can have three or four players who each use a different mark, such as X, O, *, and #. The rules are the same, but you need to get three of your marks in a row to win. This will make the game more competitive and chaotic.
- Play tic tac toe with different shapes, such as circles, squares, triangles, and stars. You can use different colors or patterns to distinguish them. The rules are the same, but you need to get three of the same shape in a row to win. This will make the game more colorful and creative.

        -
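The larger-grid variation only changes the board size and the length of the winning line, which is easy to see in code. Below is a generalized win check, a minimal sketch assuming the board is stored as a flat list of n*n cells and that k marks in a row win:

```python
def k_in_a_row(board, n, k, mark):
    """True if `mark` occupies k consecutive cells on an n x n board."""
    def cell(r, c):
        return board[r * n + c]

    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]   # right, down, both diagonals
    for r in range(n):
        for c in range(n):
            for dr, dc in directions:
                count = 0
                rr, cc = r, c
                while 0 <= rr < n and 0 <= cc < n and cell(rr, cc) == mark:
                    count += 1
                    if count == k:
                        return True
                    rr += dr
                    cc += dc
    return False

# Example: a 5x5 board where four in a row wins.
board = [" "] * 25
for col in range(4):
    board[2 * 5 + col] = "X"                      # X fills four cells of row 2
print(k_in_a_row(board, n=5, k=4, mark="X"))      # True
```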

        Tic tac toe is not only a fun game, but also a beneficial one. Here are some benefits of playing tic tac toe:

        -
          -
- It improves your logical thinking and problem-solving skills. You have to plan your moves and anticipate your opponent's moves.
- It enhances your memory and concentration. You have to remember the state of the board and focus on your next move.
- It develops your social and communication skills. You can play tic tac toe with your friends, family, or strangers and have a friendly chat or a lively debate.
- It reduces your stress and boredom. You can play tic tac toe anytime, anywhere, and with anyone. It is a simple and relaxing way to pass the time and have fun.
        -

        Where to Find and Download Tic Tac Toe Icons for Free

        -

        If you want to play tic tac toe on your computer or design your own tic tac toe game, you will need some tic tac toe icons to represent the marks on the grid. Fortunately, there are many websites that offer free tic tac toe icons that you can download and use for personal or commercial purposes. Here are some of the best websites for free tic tac toe icons:

        -

        The Best Websites for Free Tic Tac Toe Icons

        - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Website | Description | Example |
| --- | --- | --- |
| Flaticon | Flaticon is one of the largest databases of free icons in various formats, such as PNG, SVG, EPS, PSD, and Base64. You can find over 200 tic tac toe icons in different styles and colors. You can also edit them online or download them as a pack. | Tic Tac Toe Icon from Flaticon |
| Iconfinder | Iconfinder is another popular website that provides free icons in various formats, such as PNG, SVG, ICO, ICNS, and BMP. You can find over 100 tic tac toe icons in different styles and sizes. You can also customize them online or download them individually or as a set. | Tic Tac Toe Icon from Iconfinder |
| Icons8 | Icons8 is a website that offers free icons in various formats, such as PNG, SVG, PDF, EPS, and PSD. You can find over 50 tic tac toe icons in different styles and colors. You can also change their size, color, orientation, and background online or download them as you like. | Tic Tac Toe Icon from Icons8 |
| Freepik | Freepik is a website that provides free vectors, photos, PSD files, and icons in various formats, such as AI, EPS, SVG, and PNG. You can find over 40 tic tac toe icons in different styles and colors. You can also edit them online or download them as you wish. | Tic Tac Toe Icon from Freepik |
| Vecteezy | Vecteezy is a website that offers free vector graphics, illustrations, and icons in various formats, such as AI, EPS, SVG, and PNG. You can find over 30 tic tac toe icons in different styles and colors. You can also modify them online or download them as you prefer. | Tic Tac Toe Icon from Vecteezy |
        -

        The Different Formats and Styles of Tic Tac Toe Icons

        -

        Tic tac toe icons can come in different formats and styles to suit your needs and preferences. Here are some of the common formats and styles of tic tac toe icons:

        -
          -
- PNG: PNG stands for Portable Network Graphics, and it is a raster image format that supports lossless compression and transparency. PNG is a widely used format for web graphics and icons, as it can preserve the quality and details of the image. PNG tic tac toe icons are suitable for displaying on screens and devices with different resolutions and sizes.
- SVG: SVG stands for Scalable Vector Graphics, and it is a vector image format that supports interactivity and animation. SVG is a modern format for web graphics and icons, as it can scale up or down without losing quality or clarity. SVG tic tac toe icons are suitable for creating responsive and dynamic designs and games.
- EPS: EPS stands for Encapsulated PostScript, and it is a vector image format that supports high-quality printing and editing. EPS is a professional format for graphics and icons, as it can retain the original appearance and features of the image. EPS tic tac toe icons are suitable for printing on paper, posters, or stickers.
- PSD: PSD stands for Photoshop Document, and it is a layered image format that supports advanced editing and effects. PSD is a proprietary format for Adobe Photoshop, a popular software for graphic design and photo editing. PSD tic tac toe icons are suitable for creating custom and unique designs and games.
- Flat: Flat tic tac toe icons have a simple and minimalistic style, with no shadows, gradients, or textures. Flat tic tac toe icons are suitable for creating a modern and clean look and feel.
- 3D: 3D tic tac toe icons have a realistic and dimensional style, with shadows, lights, and perspectives. 3D tic tac toe icons are suitable for creating a vivid and immersive experience.
- Outline: Outline tic tac toe icons have only the outline of the shape, with no fill or color. Outline tic tac toe icons are suitable for creating a subtle and elegant impression.
- Filled: Filled tic tac toe icons have both the outline and the fill of the shape, with one or more colors. Filled tic tac toe icons are suitable for creating a bold and colorful expression.
        -
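Seeing the inside of an SVG makes it clear why it scales without losing quality. The short sketch below writes a hand-made 24x24 tic tac toe icon as an SVG file using only the Python standard library; the path coordinates are arbitrary and purely illustrative:

```python
# Write a tiny tic tac toe SVG icon by hand - the whole "image" is just text.
SVG_ICON = """<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24"
     fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round">
  <path d="M9 3v18 M15 3v18 M3 9h18 M3 15h18"/>   <!-- the 3x3 grid -->
  <path d="M4.5 4.5l3 3 M7.5 4.5l-3 3"/>          <!-- an X in the top-left cell -->
  <circle cx="18" cy="18" r="2"/>                 <!-- an O in the bottom-right cell -->
</svg>
"""

with open("tic_tac_toe_icon.svg", "w", encoding="utf-8") as out:
    out.write(SVG_ICON)
print("Wrote tic_tac_toe_icon.svg - open it in a browser and zoom in: no pixelation.")
```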

        How to Use and Customize Tic Tac Toe Icons

        -

        Tic tac toe icons can be used and customized for various purposes, such as playing tic tac toe online or offline, designing your own tic tac toe game or app, creating logos or stickers, or decorating your website or blog. Here are some ways to use and customize tic tac toe icons:

        -
          -
- To play tic tac toe online, you can use any of the websites that offer free tic tac toe games or apps, such as TicTacToe.com, TicTacToe.fun, or TicTacToe.net. You can choose from different modes, levels, themes, and sounds. You can also invite your friends to play with you or play against the computer.
- To play tic tac toe offline, you can print out any of the free tic tac toe icons from the websites mentioned above or create your own using any image editing software. You can cut out the icons and use them as markers on a paper or board grid. You can also laminate them or stick them on magnets to make them more durable.
- To design your own tic tac toe game or app, you can use any of the free tic tac toe icons from the websites mentioned above or create your own using any image editing software. You can customize the size, color, style, and orientation of the icons to fit your theme and concept. You can also add animations, sounds, effects, or features to make your game or app more interactive and fun.
- To create logos or stickers, you can use any of the free tic tac toe icons from the websites mentioned above or create your own using any image editing software. You can customize the shape, color, style, and text of the icons to match your brand identity or message. You can also add filters, gradients, or patterns to make your logos or stickers more attractive.
- To decorate your website or blog, you can use any of the free tic tac toe icons from the websites mentioned above or create your own using any image editing software. You can customize the size, color, style, and position of the icons to fit your layout and design. You can also add links, hover effects, or pop-ups to make your website or blog more engaging and interactive.
        -
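As a concrete example of the "design your own game or app" idea above, here is a rough sketch that drops two downloaded icon files onto a 3x3 grid of buttons with tkinter. The file names x_icon.png and o_icon.png are placeholders for whatever icons you downloaded, and loading PNGs with tkinter.PhotoImage assumes Tk 8.6 or newer; win detection is left out to keep the sketch short:

```python
import tkinter as tk

root = tk.Tk()
root.title("Tic Tac Toe")

# Keep references to the images, otherwise Tk garbage-collects them.
icons = {"X": tk.PhotoImage(file="x_icon.png"),   # placeholder file names
         "O": tk.PhotoImage(file="o_icon.png")}
blank = tk.PhotoImage(width=96, height=96)        # transparent placeholder for empty cells

state = {"turn": "X", "taken": {}}

def click(button):
    if button in state["taken"]:                  # cell already used
        return
    state["taken"][button] = state["turn"]
    button.config(image=icons[state["turn"]])
    state["turn"] = "O" if state["turn"] == "X" else "X"

for row in range(3):
    for col in range(3):
        btn = tk.Button(root, image=blank, width=96, height=96)
        btn.config(command=lambda b=btn: click(b))
        btn.grid(row=row, column=col, padx=2, pady=2)

root.mainloop()
```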

        Conclusion and FAQs

        -

        Conclusion

        -

Tic tac toe is a simple yet fun game that can be played by anyone, anywhere, anytime. You can find and download tic tac toe icons for free from various websites, customize them to suit your needs and preferences, and use them to play tic tac toe online or offline, design your own tic tac toe game or app, create logos or stickers, or decorate your website or blog. Playing the game also exercises your logical thinking, memory, concentration, social skills, and creativity. In short, tic tac toe icons are a great way to enjoy tic tac toe and express yourself.

        -

        FAQs

        -

        Here are some frequently asked questions about tic tac toe icons:

        -
          -
1. Q: How do I download tic tac toe icons for free?
   A: You can download tic tac toe icons for free from various websites that offer free icons in different formats and styles. Some of the best websites for free tic tac toe icons are Flaticon, Iconfinder, Icons8, Freepik, and Vecteezy. You can also create your own tic tac toe icons using any image editing software.
2. Q: How do I use tic tac toe icons for playing tic tac toe?
   A: You can use tic tac toe icons for playing tic tac toe online or offline. To play tic tac toe online, you can use any of the websites that offer free tic tac toe games or apps, such as TicTacToe.com, TicTacToe.fun, or TicTacToe.net. To play tic tac toe offline, you can print out any of the free tic tac toe icons from the websites mentioned above or create your own using any image editing software. You can cut out the icons and use them as markers on a paper or board grid.
3. Q: How do I use tic tac toe icons for designing my own tic tac toe game or app?
   A: You can use tic tac toe icons for designing your own tic tac toe game or app using any image editing software. You can customize the size, color, style, and orientation of the icons to fit your theme and concept. You can also add animations, sounds, effects, or features to make your game or app more interactive and fun.
4. Q: How do I use tic tac toe icons for creating logos or stickers?
   A: You can use tic tac toe icons for creating logos or stickers using any image editing software. You can customize the shape, color, style, and text of the icons to match your brand identity or message. You can also add filters, gradients, or patterns to make your logos or stickers more attractive.
5. Q: How do I use tic tac toe icons for decorating my website or blog?
   A: You can use tic tac toe icons for decorating your website or blog using any web design software. You can customize the size, color, style, and position of the icons to fit your layout and design. You can also add links, hover effects, or pop-ups to make your website or blog more engaging and interactive.

        -
        -
        \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Bachna Ae Haseeno Full Movie English Subtitle Free Download.md b/spaces/contluForse/HuggingGPT/assets/Bachna Ae Haseeno Full Movie English Subtitle Free Download.md deleted file mode 100644 index 2cb81b3896c59548e0e632d0fdcf94c7d4c01be7..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Bachna Ae Haseeno Full Movie English Subtitle Free Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Bachna Ae Haseeno full movie english subtitle free download


        Downloadhttps://ssurll.com/2uzydr



        - - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/contluForse/HuggingGPT/assets/Devil May Cry Tamil Dubbed Movie Download Join the Demon Hunters in Their Quest to Save the World.md b/spaces/contluForse/HuggingGPT/assets/Devil May Cry Tamil Dubbed Movie Download Join the Demon Hunters in Their Quest to Save the World.md deleted file mode 100644 index 2f9e99c48e3466b0e8d00355ca716663ca5fc74d..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Devil May Cry Tamil Dubbed Movie Download Join the Demon Hunters in Their Quest to Save the World.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Devil May Cry tamil dubbed movie download


        Download Ziphttps://ssurll.com/2uzvtI



        - - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/parallel/collate.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/parallel/collate.py deleted file mode 100644 index ad749197df21b0d74297548be5f66a696adebf7f..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/parallel/collate.py +++ /dev/null @@ -1,84 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from collections.abc import Mapping, Sequence - -import torch -import torch.nn.functional as F -from torch.utils.data.dataloader import default_collate - -from .data_container import DataContainer - - -def collate(batch, samples_per_gpu=1): - """Puts each data field into a tensor/DataContainer with outer dimension - batch size. - - Extend default_collate to add support for - :type:`~mmcv.parallel.DataContainer`. There are 3 cases. - - 1. cpu_only = True, e.g., meta data - 2. cpu_only = False, stack = True, e.g., images tensors - 3. cpu_only = False, stack = False, e.g., gt bboxes - """ - - if not isinstance(batch, Sequence): - raise TypeError(f'{batch.dtype} is not supported.') - - if isinstance(batch[0], DataContainer): - stacked = [] - if batch[0].cpu_only: - for i in range(0, len(batch), samples_per_gpu): - stacked.append( - [sample.data for sample in batch[i:i + samples_per_gpu]]) - return DataContainer( - stacked, batch[0].stack, batch[0].padding_value, cpu_only=True) - elif batch[0].stack: - for i in range(0, len(batch), samples_per_gpu): - assert isinstance(batch[i].data, torch.Tensor) - - if batch[i].pad_dims is not None: - ndim = batch[i].dim() - assert ndim > batch[i].pad_dims - max_shape = [0 for _ in range(batch[i].pad_dims)] - for dim in range(1, batch[i].pad_dims + 1): - max_shape[dim - 1] = batch[i].size(-dim) - for sample in batch[i:i + samples_per_gpu]: - for dim in range(0, ndim - batch[i].pad_dims): - assert batch[i].size(dim) == sample.size(dim) - for dim in range(1, batch[i].pad_dims + 1): - max_shape[dim - 1] = max(max_shape[dim - 1], - sample.size(-dim)) - padded_samples = [] - for sample in batch[i:i + samples_per_gpu]: - pad = [0 for _ in range(batch[i].pad_dims * 2)] - for dim in range(1, batch[i].pad_dims + 1): - pad[2 * dim - - 1] = max_shape[dim - 1] - sample.size(-dim) - padded_samples.append( - F.pad( - sample.data, pad, value=sample.padding_value)) - stacked.append(default_collate(padded_samples)) - elif batch[i].pad_dims is None: - stacked.append( - default_collate([ - sample.data - for sample in batch[i:i + samples_per_gpu] - ])) - else: - raise ValueError( - 'pad_dims should be either None or integers (1-3)') - - else: - for i in range(0, len(batch), samples_per_gpu): - stacked.append( - [sample.data for sample in batch[i:i + samples_per_gpu]]) - return DataContainer(stacked, batch[0].stack, batch[0].padding_value) - elif isinstance(batch[0], Sequence): - transposed = zip(*batch) - return [collate(samples, samples_per_gpu) for samples in transposed] - elif isinstance(batch[0], Mapping): - return { - key: collate([d[key] for d in batch], samples_per_gpu) - for key in batch[0] - } - else: - return default_collate(batch) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/normalbae/models/submodules/efficientnet_repo/onnx_export.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/normalbae/models/submodules/efficientnet_repo/onnx_export.py deleted file mode 100644 index 
7a5162ce214830df501bdb81edb66c095122f69d..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/normalbae/models/submodules/efficientnet_repo/onnx_export.py +++ /dev/null @@ -1,120 +0,0 @@ -""" ONNX export script - -Export PyTorch models as ONNX graphs. - -This export script originally started as an adaptation of code snippets found at -https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html - -The default parameters work with PyTorch 1.6 and ONNX 1.7 and produce an optimal ONNX graph -for hosting in the ONNX runtime (see onnx_validate.py). To export an ONNX model compatible -with caffe2 (see caffe2_benchmark.py and caffe2_validate.py), the --keep-init and --aten-fallback -flags are currently required. - -Older versions of PyTorch/ONNX (tested PyTorch 1.4, ONNX 1.5) do not need extra flags for -caffe2 compatibility, but they produce a model that isn't as fast running on ONNX runtime. - -Most new release of PyTorch and ONNX cause some sort of breakage in the export / usage of ONNX models. -Please do your research and search ONNX and PyTorch issue tracker before asking me. Thanks. - -Copyright 2020 Ross Wightman -""" -import argparse -import torch -import numpy as np - -import onnx -import geffnet - -parser = argparse.ArgumentParser(description='PyTorch ImageNet Validation') -parser.add_argument('output', metavar='ONNX_FILE', - help='output model filename') -parser.add_argument('--model', '-m', metavar='MODEL', default='mobilenetv3_large_100', - help='model architecture (default: mobilenetv3_large_100)') -parser.add_argument('--opset', type=int, default=10, - help='ONNX opset to use (default: 10)') -parser.add_argument('--keep-init', action='store_true', default=False, - help='Keep initializers as input. Needed for Caffe2 compatible export in newer PyTorch/ONNX.') -parser.add_argument('--aten-fallback', action='store_true', default=False, - help='Fallback to ATEN ops. Helps fix AdaptiveAvgPool issue with Caffe2 in newer PyTorch/ONNX.') -parser.add_argument('--dynamic-size', action='store_true', default=False, - help='Export model width dynamic width/height. 
Not recommended for "tf" models with SAME padding.') -parser.add_argument('-b', '--batch-size', default=1, type=int, - metavar='N', help='mini-batch size (default: 1)') -parser.add_argument('--img-size', default=None, type=int, - metavar='N', help='Input image dimension, uses model default if empty') -parser.add_argument('--mean', type=float, nargs='+', default=None, metavar='MEAN', - help='Override mean pixel value of dataset') -parser.add_argument('--std', type=float, nargs='+', default=None, metavar='STD', - help='Override std deviation of of dataset') -parser.add_argument('--num-classes', type=int, default=1000, - help='Number classes in dataset') -parser.add_argument('--checkpoint', default='', type=str, metavar='PATH', - help='path to checkpoint (default: none)') - - -def main(): - args = parser.parse_args() - - args.pretrained = True - if args.checkpoint: - args.pretrained = False - - print("==> Creating PyTorch {} model".format(args.model)) - # NOTE exportable=True flag disables autofn/jit scripted activations and uses Conv2dSameExport layers - # for models using SAME padding - model = geffnet.create_model( - args.model, - num_classes=args.num_classes, - in_chans=3, - pretrained=args.pretrained, - checkpoint_path=args.checkpoint, - exportable=True) - - model.eval() - - example_input = torch.randn((args.batch_size, 3, args.img_size or 224, args.img_size or 224), requires_grad=True) - - # Run model once before export trace, sets padding for models with Conv2dSameExport. This means - # that the padding for models with Conv2dSameExport (most models with tf_ prefix) is fixed for - # the input img_size specified in this script. - # Opset >= 11 should allow for dynamic padding, however I cannot get it to work due to - # issues in the tracing of the dynamic padding or errors attempting to export the model after jit - # scripting it (an approach that should work). Perhaps in a future PyTorch or ONNX versions... - model(example_input) - - print("==> Exporting model to ONNX format at '{}'".format(args.output)) - input_names = ["input0"] - output_names = ["output0"] - dynamic_axes = {'input0': {0: 'batch'}, 'output0': {0: 'batch'}} - if args.dynamic_size: - dynamic_axes['input0'][2] = 'height' - dynamic_axes['input0'][3] = 'width' - if args.aten_fallback: - export_type = torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK - else: - export_type = torch.onnx.OperatorExportTypes.ONNX - - torch_out = torch.onnx._export( - model, example_input, args.output, export_params=True, verbose=True, input_names=input_names, - output_names=output_names, keep_initializers_as_inputs=args.keep_init, dynamic_axes=dynamic_axes, - opset_version=args.opset, operator_export_type=export_type) - - print("==> Loading and checking exported model from '{}'".format(args.output)) - onnx_model = onnx.load(args.output) - onnx.checker.check_model(onnx_model) # assuming throw on error - print("==> Passed") - - if args.keep_init and args.aten_fallback: - import caffe2.python.onnx.backend as onnx_caffe2 - # Caffe2 loading only works properly in newer PyTorch/ONNX combos when - # keep_initializers_as_inputs and aten_fallback are set to True. 
- print("==> Loading model into Caffe2 backend and comparing forward pass.".format(args.output)) - caffe2_backend = onnx_caffe2.prepare(onnx_model) - B = {onnx_model.graph.input[0].name: x.data.numpy()} - c2_out = caffe2_backend.run(B)[0] - np.testing.assert_almost_equal(torch_out.data.numpy(), c2_out, decimal=5) - print("==> Passed") - - -if __name__ == '__main__': - main() diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/configs/_base_/models/emanet_r50-d8.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/configs/_base_/models/emanet_r50-d8.py deleted file mode 100644 index 26adcd430926de0862204a71d345f2543167f27b..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/configs/_base_/models/emanet_r50-d8.py +++ /dev/null @@ -1,47 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 2, 4), - strides=(1, 2, 1, 1), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=dict( - type='EMAHead', - in_channels=2048, - in_index=3, - channels=256, - ema_channels=512, - num_bases=64, - num_stages=3, - momentum=0.1, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=1024, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/utils/config.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/utils/config.py deleted file mode 100644 index 84996564663dadf0e720de2a68ef8c53106ed666..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/utils/config.py +++ /dev/null @@ -1,437 +0,0 @@ -# MIT License - -# Copyright (c) 2022 Intelligent Systems Lab Org - -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: - -# The above copyright notice and this permission notice shall be included in all -# copies or substantial portions of the Software. - -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. - -# File author: Shariq Farooq Bhat - -import json -import os - -from .easydict import EasyDict as edict -from .arg_utils import infer_type - -import pathlib -import platform - -ROOT = pathlib.Path(__file__).parent.parent.resolve() - -HOME_DIR = os.path.expanduser("~") - -COMMON_CONFIG = { - "save_dir": os.path.expanduser("~/shortcuts/monodepth3_checkpoints"), - "project": "ZoeDepth", - "tags": '', - "notes": "", - "gpu": None, - "root": ".", - "uid": None, - "print_losses": False -} - -DATASETS_CONFIG = { - "kitti": { - "dataset": "kitti", - "min_depth": 0.001, - "max_depth": 80, - "data_path": os.path.join(HOME_DIR, "shortcuts/datasets/kitti/raw"), - "gt_path": os.path.join(HOME_DIR, "shortcuts/datasets/kitti/gts"), - "filenames_file": "./train_test_inputs/kitti_eigen_train_files_with_gt.txt", - "input_height": 352, - "input_width": 1216, # 704 - "data_path_eval": os.path.join(HOME_DIR, "shortcuts/datasets/kitti/raw"), - "gt_path_eval": os.path.join(HOME_DIR, "shortcuts/datasets/kitti/gts"), - "filenames_file_eval": "./train_test_inputs/kitti_eigen_test_files_with_gt.txt", - - "min_depth_eval": 1e-3, - "max_depth_eval": 80, - - "do_random_rotate": True, - "degree": 1.0, - "do_kb_crop": True, - "garg_crop": True, - "eigen_crop": False, - "use_right": False - }, - "kitti_test": { - "dataset": "kitti", - "min_depth": 0.001, - "max_depth": 80, - "data_path": os.path.join(HOME_DIR, "shortcuts/datasets/kitti/raw"), - "gt_path": os.path.join(HOME_DIR, "shortcuts/datasets/kitti/gts"), - "filenames_file": "./train_test_inputs/kitti_eigen_train_files_with_gt.txt", - "input_height": 352, - "input_width": 1216, - "data_path_eval": os.path.join(HOME_DIR, "shortcuts/datasets/kitti/raw"), - "gt_path_eval": os.path.join(HOME_DIR, "shortcuts/datasets/kitti/gts"), - "filenames_file_eval": "./train_test_inputs/kitti_eigen_test_files_with_gt.txt", - - "min_depth_eval": 1e-3, - "max_depth_eval": 80, - - "do_random_rotate": False, - "degree": 1.0, - "do_kb_crop": True, - "garg_crop": True, - "eigen_crop": False, - "use_right": False - }, - "nyu": { - "dataset": "nyu", - "avoid_boundary": False, - "min_depth": 1e-3, # originally 0.1 - "max_depth": 10, - "data_path": os.path.join(HOME_DIR, "shortcuts/datasets/nyu_depth_v2/sync/"), - "gt_path": os.path.join(HOME_DIR, "shortcuts/datasets/nyu_depth_v2/sync/"), - "filenames_file": "./train_test_inputs/nyudepthv2_train_files_with_gt.txt", - "input_height": 480, - "input_width": 640, - "data_path_eval": os.path.join(HOME_DIR, "shortcuts/datasets/nyu_depth_v2/official_splits/test/"), - "gt_path_eval": os.path.join(HOME_DIR, "shortcuts/datasets/nyu_depth_v2/official_splits/test/"), - "filenames_file_eval": "./train_test_inputs/nyudepthv2_test_files_with_gt.txt", - "min_depth_eval": 1e-3, - "max_depth_eval": 10, - "min_depth_diff": -10, - "max_depth_diff": 10, - - "do_random_rotate": True, - "degree": 1.0, - "do_kb_crop": False, - "garg_crop": False, - "eigen_crop": True - }, - "ibims": { - "dataset": "ibims", - "ibims_root": os.path.join(HOME_DIR, "shortcuts/datasets/ibims/ibims1_core_raw/"), - "eigen_crop": True, - "garg_crop": False, - "do_kb_crop": False, - "min_depth_eval": 0, - "max_depth_eval": 10, - "min_depth": 1e-3, - "max_depth": 10 - }, - "sunrgbd": { - "dataset": "sunrgbd", - 
"sunrgbd_root": os.path.join(HOME_DIR, "shortcuts/datasets/SUNRGBD/test/"), - "eigen_crop": True, - "garg_crop": False, - "do_kb_crop": False, - "min_depth_eval": 0, - "max_depth_eval": 8, - "min_depth": 1e-3, - "max_depth": 10 - }, - "diml_indoor": { - "dataset": "diml_indoor", - "diml_indoor_root": os.path.join(HOME_DIR, "shortcuts/datasets/diml_indoor_test/"), - "eigen_crop": True, - "garg_crop": False, - "do_kb_crop": False, - "min_depth_eval": 0, - "max_depth_eval": 10, - "min_depth": 1e-3, - "max_depth": 10 - }, - "diml_outdoor": { - "dataset": "diml_outdoor", - "diml_outdoor_root": os.path.join(HOME_DIR, "shortcuts/datasets/diml_outdoor_test/"), - "eigen_crop": False, - "garg_crop": True, - "do_kb_crop": False, - "min_depth_eval": 2, - "max_depth_eval": 80, - "min_depth": 1e-3, - "max_depth": 80 - }, - "diode_indoor": { - "dataset": "diode_indoor", - "diode_indoor_root": os.path.join(HOME_DIR, "shortcuts/datasets/diode_indoor/"), - "eigen_crop": True, - "garg_crop": False, - "do_kb_crop": False, - "min_depth_eval": 1e-3, - "max_depth_eval": 10, - "min_depth": 1e-3, - "max_depth": 10 - }, - "diode_outdoor": { - "dataset": "diode_outdoor", - "diode_outdoor_root": os.path.join(HOME_DIR, "shortcuts/datasets/diode_outdoor/"), - "eigen_crop": False, - "garg_crop": True, - "do_kb_crop": False, - "min_depth_eval": 1e-3, - "max_depth_eval": 80, - "min_depth": 1e-3, - "max_depth": 80 - }, - "hypersim_test": { - "dataset": "hypersim_test", - "hypersim_test_root": os.path.join(HOME_DIR, "shortcuts/datasets/hypersim_test/"), - "eigen_crop": True, - "garg_crop": False, - "do_kb_crop": False, - "min_depth_eval": 1e-3, - "max_depth_eval": 80, - "min_depth": 1e-3, - "max_depth": 10 - }, - "vkitti": { - "dataset": "vkitti", - "vkitti_root": os.path.join(HOME_DIR, "shortcuts/datasets/vkitti_test/"), - "eigen_crop": False, - "garg_crop": True, - "do_kb_crop": True, - "min_depth_eval": 1e-3, - "max_depth_eval": 80, - "min_depth": 1e-3, - "max_depth": 80 - }, - "vkitti2": { - "dataset": "vkitti2", - "vkitti2_root": os.path.join(HOME_DIR, "shortcuts/datasets/vkitti2/"), - "eigen_crop": False, - "garg_crop": True, - "do_kb_crop": True, - "min_depth_eval": 1e-3, - "max_depth_eval": 80, - "min_depth": 1e-3, - "max_depth": 80, - }, - "ddad": { - "dataset": "ddad", - "ddad_root": os.path.join(HOME_DIR, "shortcuts/datasets/ddad/ddad_val/"), - "eigen_crop": False, - "garg_crop": True, - "do_kb_crop": True, - "min_depth_eval": 1e-3, - "max_depth_eval": 80, - "min_depth": 1e-3, - "max_depth": 80, - }, -} - -ALL_INDOOR = ["nyu", "ibims", "sunrgbd", "diode_indoor", "hypersim_test"] -ALL_OUTDOOR = ["kitti", "diml_outdoor", "diode_outdoor", "vkitti2", "ddad"] -ALL_EVAL_DATASETS = ALL_INDOOR + ALL_OUTDOOR - -COMMON_TRAINING_CONFIG = { - "dataset": "nyu", - "distributed": True, - "workers": 16, - "clip_grad": 0.1, - "use_shared_dict": False, - "shared_dict": None, - "use_amp": False, - - "aug": True, - "random_crop": False, - "random_translate": False, - "translate_prob": 0.2, - "max_translation": 100, - - "validate_every": 0.25, - "log_images_every": 0.1, - "prefetch": False, -} - - -def flatten(config, except_keys=('bin_conf')): - def recurse(inp): - if isinstance(inp, dict): - for key, value in inp.items(): - if key in except_keys: - yield (key, value) - if isinstance(value, dict): - yield from recurse(value) - else: - yield (key, value) - - return dict(list(recurse(config))) - - -def split_combined_args(kwargs): - """Splits the arguments that are combined with '__' into multiple arguments. 
- Combined arguments should have equal number of keys and values. - Keys are separated by '__' and Values are separated with ';'. - For example, '__n_bins__lr=256;0.001' - - Args: - kwargs (dict): key-value pairs of arguments where key-value is optionally combined according to the above format. - - Returns: - dict: Parsed dict with the combined arguments split into individual key-value pairs. - """ - new_kwargs = dict(kwargs) - for key, value in kwargs.items(): - if key.startswith("__"): - keys = key.split("__")[1:] - values = value.split(";") - assert len(keys) == len( - values), f"Combined arguments should have equal number of keys and values. Keys are separated by '__' and Values are separated with ';'. For example, '__n_bins__lr=256;0.001. Given (keys,values) is ({keys}, {values})" - for k, v in zip(keys, values): - new_kwargs[k] = v - return new_kwargs - - -def parse_list(config, key, dtype=int): - """Parse a list of values for the key if the value is a string. The values are separated by a comma. - Modifies the config in place. - """ - if key in config: - if isinstance(config[key], str): - config[key] = list(map(dtype, config[key].split(','))) - assert isinstance(config[key], list) and all([isinstance(e, dtype) for e in config[key]] - ), f"{key} should be a list of values dtype {dtype}. Given {config[key]} of type {type(config[key])} with values of type {[type(e) for e in config[key]]}." - - -def get_model_config(model_name, model_version=None): - """Find and parse the .json config file for the model. - - Args: - model_name (str): name of the model. The config file should be named config_{model_name}[_{model_version}].json under the models/{model_name} directory. - model_version (str, optional): Specific config version. If specified config_{model_name}_{model_version}.json is searched for and used. Otherwise config_{model_name}.json is used. Defaults to None. - - Returns: - easydict: the config dictionary for the model. - """ - config_fname = f"config_{model_name}_{model_version}.json" if model_version is not None else f"config_{model_name}.json" - config_file = os.path.join(ROOT, "models", model_name, config_fname) - if not os.path.exists(config_file): - return None - - with open(config_file, "r") as f: - config = edict(json.load(f)) - - # handle dictionary inheritance - # only training config is supported for inheritance - if "inherit" in config.train and config.train.inherit is not None: - inherit_config = get_model_config(config.train["inherit"]).train - for key, value in inherit_config.items(): - if key not in config.train: - config.train[key] = value - return edict(config) - - -def update_model_config(config, mode, model_name, model_version=None, strict=False): - model_config = get_model_config(model_name, model_version) - if model_config is not None: - config = {**config, ** - flatten({**model_config.model, **model_config[mode]})} - elif strict: - raise ValueError(f"Config file for model {model_name} not found.") - return config - - -def check_choices(name, value, choices): - # return # No checks in dev branch - if value not in choices: - raise ValueError(f"{name} {value} not in supported choices {choices}") - - -KEYS_TYPE_BOOL = ["use_amp", "distributed", "use_shared_dict", "same_lr", "aug", "three_phase", - "prefetch", "cycle_momentum"] # Casting is not necessary as their int casted values in config are 0 or 1 - - -def get_config(model_name, mode='train', dataset=None, **overwrite_kwargs): - """Main entry point to get the config for the model. 
- - Args: - model_name (str): name of the desired model. - mode (str, optional): "train" or "infer". Defaults to 'train'. - dataset (str, optional): If specified, the corresponding dataset configuration is loaded as well. Defaults to None. - - Keyword Args: key-value pairs of arguments to overwrite the default config. - - The order of precedence for overwriting the config is (Higher precedence first): - # 1. overwrite_kwargs - # 2. "config_version": Config file version if specified in overwrite_kwargs. The corresponding config loaded is config_{model_name}_{config_version}.json - # 3. "version_name": Default Model version specific config specified in overwrite_kwargs. The corresponding config loaded is config_{model_name}_{version_name}.json - # 4. common_config: Default config for all models specified in COMMON_CONFIG - - Returns: - easydict: The config dictionary for the model. - """ - - - check_choices("Model", model_name, ["zoedepth", "zoedepth_nk"]) - check_choices("Mode", mode, ["train", "infer", "eval"]) - if mode == "train": - check_choices("Dataset", dataset, ["nyu", "kitti", "mix", None]) - - config = flatten({**COMMON_CONFIG, **COMMON_TRAINING_CONFIG}) - config = update_model_config(config, mode, model_name) - - # update with model version specific config - version_name = overwrite_kwargs.get("version_name", config["version_name"]) - config = update_model_config(config, mode, model_name, version_name) - - # update with config version if specified - config_version = overwrite_kwargs.get("config_version", None) - if config_version is not None: - print("Overwriting config with config_version", config_version) - config = update_model_config(config, mode, model_name, config_version) - - # update with overwrite_kwargs - # Combined args are useful for hyperparameter search - overwrite_kwargs = split_combined_args(overwrite_kwargs) - config = {**config, **overwrite_kwargs} - - # Casting to bool # TODO: Not necessary. Remove and test - for key in KEYS_TYPE_BOOL: - if key in config: - config[key] = bool(config[key]) - - # Model specific post processing of config - parse_list(config, "n_attractors") - - # adjust n_bins for each bin configuration if bin_conf is given and n_bins is passed in overwrite_kwargs - if 'bin_conf' in config and 'n_bins' in overwrite_kwargs: - bin_conf = config['bin_conf'] # list of dicts - n_bins = overwrite_kwargs['n_bins'] - new_bin_conf = [] - for conf in bin_conf: - conf['n_bins'] = n_bins - new_bin_conf.append(conf) - config['bin_conf'] = new_bin_conf - - if mode == "train": - orig_dataset = dataset - if dataset == "mix": - dataset = 'nyu' # Use nyu as default for mix. 
Dataset config is changed accordingly while loading the dataloader - if dataset is not None: - config['project'] = f"MonoDepth3-{orig_dataset}" # Set project for wandb - - if dataset is not None: - config['dataset'] = dataset - config = {**DATASETS_CONFIG[dataset], **config} - - - config['model'] = model_name - typed_config = {k: infer_type(v) for k, v in config.items()} - # add hostname to config - config['hostname'] = platform.node() - return edict(typed_config) - - -def change_dataset(config, new_dataset): - config.update(DATASETS_CONFIG[new_dataset]) - return config diff --git a/spaces/cymic/Talking_Head_Anime_3/app.py b/spaces/cymic/Talking_Head_Anime_3/app.py deleted file mode 100644 index a699bc5b3c2e987102ca93e0ee28d601e0a93d02..0000000000000000000000000000000000000000 --- a/spaces/cymic/Talking_Head_Anime_3/app.py +++ /dev/null @@ -1,7 +0,0 @@ -import gradio as gr - -def greet(name): - return "Hello " + name + "!!" - -iface = gr.Interface(fn=greet, inputs="text", outputs="text") -iface.launch() \ No newline at end of file diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/cycler.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/cycler.py deleted file mode 100644 index f86b68de64b8066b98d8fa2d92bf5983ea582237..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/cycler.py +++ /dev/null @@ -1,501 +0,0 @@ -""" -Cycler -====== - -Cycling through combinations of values, producing dictionaries. - -You can add cyclers:: - - from cycler import cycler - cc = (cycler(color=list('rgb')) + - cycler(linestyle=['-', '--', '-.'])) - for d in cc: - print(d) - -Results in:: - - {'color': 'r', 'linestyle': '-'} - {'color': 'g', 'linestyle': '--'} - {'color': 'b', 'linestyle': '-.'} - - -You can multiply cyclers:: - - from cycler import cycler - cc = (cycler(color=list('rgb')) * - cycler(linestyle=['-', '--', '-.'])) - for d in cc: - print(d) - -Results in:: - - {'color': 'r', 'linestyle': '-'} - {'color': 'r', 'linestyle': '--'} - {'color': 'r', 'linestyle': '-.'} - {'color': 'g', 'linestyle': '-'} - {'color': 'g', 'linestyle': '--'} - {'color': 'g', 'linestyle': '-.'} - {'color': 'b', 'linestyle': '-'} - {'color': 'b', 'linestyle': '--'} - {'color': 'b', 'linestyle': '-.'} -""" - - -import copy -from functools import reduce -from itertools import product, cycle -from operator import mul, add - -__version__ = '0.10.0' - - -def _process_keys(left, right): - """ - Helper function to compose cycler keys. - - Parameters - ---------- - left, right : iterable of dictionaries or None - The cyclers to be composed. - - Returns - ------- - keys : set - The keys in the composition of the two cyclers. - """ - l_peek = next(iter(left)) if left is not None else {} - r_peek = next(iter(right)) if right is not None else {} - l_key = set(l_peek.keys()) - r_key = set(r_peek.keys()) - if l_key & r_key: - raise ValueError("Can not compose overlapping cycles") - return l_key | r_key - - -def concat(left, right): - r""" - Concatenate `Cycler`\s, as if chained using `itertools.chain`. - - The keys must match exactly. - - Examples - -------- - >>> num = cycler('a', range(3)) - >>> let = cycler('a', 'abc') - >>> num.concat(let) - cycler('a', [0, 1, 2, 'a', 'b', 'c']) - - Returns - ------- - `Cycler` - The concatenated cycler. 
- """ - if left.keys != right.keys: - raise ValueError("Keys do not match:\n" - "\tIntersection: {both!r}\n" - "\tDisjoint: {just_one!r}".format( - both=left.keys & right.keys, - just_one=left.keys ^ right.keys)) - _l = left.by_key() - _r = right.by_key() - return reduce(add, (_cycler(k, _l[k] + _r[k]) for k in left.keys)) - - -class Cycler: - """ - Composable cycles. - - This class has compositions methods: - - ``+`` - for 'inner' products (zip) - - ``+=`` - in-place ``+`` - - ``*`` - for outer products (`itertools.product`) and integer multiplication - - ``*=`` - in-place ``*`` - - and supports basic slicing via ``[]``. - - Parameters - ---------- - left, right : Cycler or None - The 'left' and 'right' cyclers. - op : func or None - Function which composes the 'left' and 'right' cyclers. - """ - - def __call__(self): - return cycle(self) - - def __init__(self, left, right=None, op=None): - """ - Semi-private init. - - Do not use this directly, use `cycler` function instead. - """ - if isinstance(left, Cycler): - self._left = Cycler(left._left, left._right, left._op) - elif left is not None: - # Need to copy the dictionary or else that will be a residual - # mutable that could lead to strange errors - self._left = [copy.copy(v) for v in left] - else: - self._left = None - - if isinstance(right, Cycler): - self._right = Cycler(right._left, right._right, right._op) - elif right is not None: - # Need to copy the dictionary or else that will be a residual - # mutable that could lead to strange errors - self._right = [copy.copy(v) for v in right] - else: - self._right = None - - self._keys = _process_keys(self._left, self._right) - self._op = op - - def __contains__(self, k): - return k in self._keys - - @property - def keys(self): - """The keys this Cycler knows about.""" - return set(self._keys) - - def change_key(self, old, new): - """ - Change a key in this cycler to a new name. - Modification is performed in-place. - - Does nothing if the old key is the same as the new key. - Raises a ValueError if the new key is already a key. - Raises a KeyError if the old key isn't a key. - """ - if old == new: - return - if new in self._keys: - raise ValueError( - "Can't replace {old} with {new}, {new} is already a key" - .format(old=old, new=new) - ) - if old not in self._keys: - raise KeyError("Can't replace {old} with {new}, {old} is not a key" - .format(old=old, new=new)) - - self._keys.remove(old) - self._keys.add(new) - - if self._right is not None and old in self._right.keys: - self._right.change_key(old, new) - - # self._left should always be non-None - # if self._keys is non-empty. - elif isinstance(self._left, Cycler): - self._left.change_key(old, new) - else: - # It should be completely safe at this point to - # assume that the old key can be found in each - # iteration. - self._left = [{new: entry[old]} for entry in self._left] - - @classmethod - def _from_iter(cls, label, itr): - """ - Class method to create 'base' Cycler objects - that do not have a 'right' or 'op' and for which - the 'left' object is not another Cycler. - - Parameters - ---------- - label : str - The property key. - - itr : iterable - Finite length iterable of the property values. - - Returns - ------- - `Cycler` - New 'base' cycler. 
- """ - ret = cls(None) - ret._left = list({label: v} for v in itr) - ret._keys = {label} - return ret - - def __getitem__(self, key): - # TODO : maybe add numpy style fancy slicing - if isinstance(key, slice): - trans = self.by_key() - return reduce(add, (_cycler(k, v[key]) for k, v in trans.items())) - else: - raise ValueError("Can only use slices with Cycler.__getitem__") - - def __iter__(self): - if self._right is None: - for left in self._left: - yield dict(left) - else: - for a, b in self._op(self._left, self._right): - out = {} - out.update(a) - out.update(b) - yield out - - def __add__(self, other): - """ - Pair-wise combine two equal length cyclers (zip). - - Parameters - ---------- - other : Cycler - """ - if len(self) != len(other): - raise ValueError("Can only add equal length cycles, " - f"not {len(self)} and {len(other)}") - return Cycler(self, other, zip) - - def __mul__(self, other): - """ - Outer product of two cyclers (`itertools.product`) or integer - multiplication. - - Parameters - ---------- - other : Cycler or int - """ - if isinstance(other, Cycler): - return Cycler(self, other, product) - elif isinstance(other, int): - trans = self.by_key() - return reduce(add, (_cycler(k, v*other) for k, v in trans.items())) - else: - return NotImplemented - - def __rmul__(self, other): - return self * other - - def __len__(self): - op_dict = {zip: min, product: mul} - if self._right is None: - return len(self._left) - l_len = len(self._left) - r_len = len(self._right) - return op_dict[self._op](l_len, r_len) - - def __iadd__(self, other): - """ - In-place pair-wise combine two equal length cyclers (zip). - - Parameters - ---------- - other : Cycler - """ - if not isinstance(other, Cycler): - raise TypeError("Cannot += with a non-Cycler object") - # True shallow copy of self is fine since this is in-place - old_self = copy.copy(self) - self._keys = _process_keys(old_self, other) - self._left = old_self - self._op = zip - self._right = Cycler(other._left, other._right, other._op) - return self - - def __imul__(self, other): - """ - In-place outer product of two cyclers (`itertools.product`). - - Parameters - ---------- - other : Cycler - """ - if not isinstance(other, Cycler): - raise TypeError("Cannot *= with a non-Cycler object") - # True shallow copy of self is fine since this is in-place - old_self = copy.copy(self) - self._keys = _process_keys(old_self, other) - self._left = old_self - self._op = product - self._right = Cycler(other._left, other._right, other._op) - return self - - def __eq__(self, other): - if len(self) != len(other): - return False - if self.keys ^ other.keys: - return False - return all(a == b for a, b in zip(self, other)) - - def __ne__(self, other): - return not (self == other) - - __hash__ = None - - def __repr__(self): - op_map = {zip: '+', product: '*'} - if self._right is None: - lab = self.keys.pop() - itr = list(v[lab] for v in self) - return f"cycler({lab!r}, {itr!r})" - else: - op = op_map.get(self._op, '?') - msg = "({left!r} {op} {right!r})" - return msg.format(left=self._left, op=op, right=self._right) - - def _repr_html_(self): - # an table showing the value of each key through a full cycle - output = "" - sorted_keys = sorted(self.keys, key=repr) - for key in sorted_keys: - output += f"" - for d in iter(self): - output += "" - for k in sorted_keys: - output += f"" - output += "" - output += "
        {key!r}
        {d[k]!r}
        " - return output - - def by_key(self): - """ - Values by key. - - This returns the transposed values of the cycler. Iterating - over a `Cycler` yields dicts with a single value for each key, - this method returns a `dict` of `list` which are the values - for the given key. - - The returned value can be used to create an equivalent `Cycler` - using only `+`. - - Returns - ------- - transpose : dict - dict of lists of the values for each key. - """ - - # TODO : sort out if this is a bottle neck, if there is a better way - # and if we care. - - keys = self.keys - out = {k: list() for k in keys} - - for d in self: - for k in keys: - out[k].append(d[k]) - return out - - # for back compatibility - _transpose = by_key - - def simplify(self): - """ - Simplify the cycler into a sum (but no products) of cyclers. - - Returns - ------- - simple : Cycler - """ - # TODO: sort out if it is worth the effort to make sure this is - # balanced. Currently it is is - # (((a + b) + c) + d) vs - # ((a + b) + (c + d)) - # I would believe that there is some performance implications - trans = self.by_key() - return reduce(add, (_cycler(k, v) for k, v in trans.items())) - - concat = concat - - -def cycler(*args, **kwargs): - """ - Create a new `Cycler` object from a single positional argument, - a pair of positional arguments, or the combination of keyword arguments. - - cycler(arg) - cycler(label1=itr1[, label2=iter2[, ...]]) - cycler(label, itr) - - Form 1 simply copies a given `Cycler` object. - - Form 2 composes a `Cycler` as an inner product of the - pairs of keyword arguments. In other words, all of the - iterables are cycled simultaneously, as if through zip(). - - Form 3 creates a `Cycler` from a label and an iterable. - This is useful for when the label cannot be a keyword argument - (e.g., an integer or a name that has a space in it). - - Parameters - ---------- - arg : Cycler - Copy constructor for Cycler (does a shallow copy of iterables). - label : name - The property key. In the 2-arg form of the function, - the label can be any hashable object. In the keyword argument - form of the function, it must be a valid python identifier. - itr : iterable - Finite length iterable of the property values. - Can be a single-property `Cycler` that would - be like a key change, but as a shallow copy. - - Returns - ------- - cycler : Cycler - New `Cycler` for the given property - - """ - if args and kwargs: - raise TypeError("cyl() can only accept positional OR keyword " - "arguments -- not both.") - - if len(args) == 1: - if not isinstance(args[0], Cycler): - raise TypeError("If only one positional argument given, it must " - "be a Cycler instance.") - return Cycler(args[0]) - elif len(args) == 2: - return _cycler(*args) - elif len(args) > 2: - raise TypeError("Only a single Cycler can be accepted as the lone " - "positional argument. Use keyword arguments instead.") - - if kwargs: - return reduce(add, (_cycler(k, v) for k, v in kwargs.items())) - - raise TypeError("Must have at least a positional OR keyword arguments") - - -def _cycler(label, itr): - """ - Create a new `Cycler` object from a property name and iterable of values. - - Parameters - ---------- - label : hashable - The property key. - itr : iterable - Finite length iterable of the property values. 
- - Returns - ------- - cycler : Cycler - New `Cycler` for the given property - """ - if isinstance(itr, Cycler): - keys = itr.keys - if len(keys) != 1: - msg = "Can not create Cycler from a multi-property Cycler" - raise ValueError(msg) - - lab = keys.pop() - # Doesn't need to be a new list because - # _from_iter() will be creating that new list anyway. - itr = (v[lab] for v in itr) - - return Cycler._from_iter(label, itr) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-cd311153.css b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-cd311153.css deleted file mode 100644 index ddd19b13c94adc9cef083883af708a15f2eb65f0..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-cd311153.css +++ /dev/null @@ -1 +0,0 @@ -input.svelte-56zyyb{display:block;position:relative;background:var(--background-fill-primary);line-height:var(--line-sm)} diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-540ff1f4.js b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-540ff1f4.js deleted file mode 100644 index bfeb98d598471882741c6537d1a964b8524d9afa..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-540ff1f4.js +++ /dev/null @@ -1,6 +0,0 @@ -import{S as E,e as L,s as M,f as R,g as p,h as _,j as v,n as w,k as m,m as j,o as H,Y,r as C,u as b,v as B,w as d,t as h,x as T,P as A,I as V,Z as K,p as D,F as O,G as S,H as N,ai as W,y as X,z as x,C as ee,V as te,ae as le,Q as ne,R as ie}from"./index-39fce9e2.js";import{f as se,B as oe}from"./Button-79f6e3bf.js";import{C as re,a as ce}from"./Copy-77b3f70c.js";import{E as ae}from"./Empty-16d6169a.js";import{B as fe}from"./BlockLabel-b1428685.js";function ue(a){let e,t;return{c(){e=R("svg"),t=R("path"),p(t,"fill","currentColor"),p(t,"d","M5 3h2v2H5v5a2 2 0 0 1-2 2a2 2 0 0 1 2 2v5h2v2H5c-1.07-.27-2-.9-2-2v-4a2 2 0 0 0-2-2H0v-2h1a2 2 0 0 0 2-2V5a2 2 0 0 1 2-2m14 0a2 2 0 0 1 2 2v4a2 2 0 0 0 2 2h1v2h-1a2 2 0 0 0-2 2v4a2 2 0 0 1-2 2h-2v-2h2v-5a2 2 0 0 1 2-2a2 2 0 0 1-2-2V5h-2V3h2m-7 12a1 1 0 0 1 1 1a1 1 0 0 1-1 1a1 1 0 0 1-1-1a1 1 0 0 1 1-1m-4 0a1 1 0 0 1 1 1a1 1 0 0 1-1 1a1 1 0 0 1-1-1a1 1 0 0 1 1-1m8 0a1 1 0 0 1 1 1a1 1 0 0 1-1 1a1 1 0 0 1-1-1a1 1 0 0 1 1-1Z"),p(e,"xmlns","http://www.w3.org/2000/svg"),p(e,"xmlns:xlink","http://www.w3.org/1999/xlink"),p(e,"aria-hidden","true"),p(e,"role","img"),p(e,"class","iconify iconify--mdi"),p(e,"width","100%"),p(e,"height","100%"),p(e,"preserveAspectRatio","xMidYMid meet"),p(e,"viewBox","0 0 24 24")},m(l,i){_(l,e,i),v(e,t)},p:w,i:w,o:w,d(l){l&&m(e)}}}let U=class extends E{constructor(e){super(),L(this,e,null,ue,M,{})}};function Z(a,e,t){const l=a.slice();return l[5]=e[t],l[7]=t,l}function q(a,e,t){const l=a.slice();return l[5]=e[t],l[7]=t,l}function _e(a){let e,t;return{c(){e=j("div"),t=h(a[1]),p(e,"class","json-item svelte-1kspdo")},m(l,i){_(l,e,i),v(e,t)},p(l,i){i&2&&T(t,l[1])},i:w,o:w,d(l){l&&m(e)}}}function me(a){let e,t;return{c(){e=j("div"),t=h(a[1]),p(e,"class","json-item number svelte-1kspdo")},m(l,i){_(l,e,i),v(e,t)},p(l,i){i&2&&T(t,l[1])},i:w,o:w,d(l){l&&m(e)}}}function de(a){let 
e,t=a[1].toLocaleString()+"",l;return{c(){e=j("div"),l=h(t),p(e,"class","json-item bool svelte-1kspdo")},m(i,r){_(i,e,r),v(e,l)},p(i,r){r&2&&t!==(t=i[1].toLocaleString()+"")&&T(l,t)},i:w,o:w,d(i){i&&m(e)}}}function be(a){let e,t,l,i;return{c(){e=j("div"),t=h('"'),l=h(a[1]),i=h('"'),p(e,"class","json-item string svelte-1kspdo")},m(r,o){_(r,e,o),v(e,t),v(e,l),v(e,i)},p(r,o){o&2&&T(l,r[1])},i:w,o:w,d(r){r&&m(e)}}}function ke(a){let e;return{c(){e=j("div"),e.textContent="null",p(e,"class","json-item null svelte-1kspdo")},m(t,l){_(t,e,l)},p:w,i:w,o:w,d(t){t&&m(e)}}}function pe(a){let e,t,l,i;const r=[ge,ve],o=[];function f(n,s){return n[0]?0:1}return e=f(a),t=o[e]=r[e](a),{c(){t.c(),l=A()},m(n,s){o[e].m(n,s),_(n,l,s),i=!0},p(n,s){let c=e;e=f(n),e===c?o[e].p(n,s):(C(),b(o[c],1,1,()=>{o[c]=null}),B(),t=o[e],t?t.p(n,s):(t=o[e]=r[e](n),t.c()),d(t,1),t.m(l.parentNode,l))},i(n){i||(d(t),i=!0)},o(n){b(t),i=!1},d(n){n&&m(l),o[e].d(n)}}}function he(a){let e,t,l,i;const r=[je,we],o=[];function f(n,s){return n[0]?0:1}return e=f(a),t=o[e]=r[e](a),{c(){t.c(),l=A()},m(n,s){o[e].m(n,s),_(n,l,s),i=!0},p(n,s){let c=e;e=f(n),e===c?o[e].p(n,s):(C(),b(o[c],1,1,()=>{o[c]=null}),B(),t=o[e],t?t.p(n,s):(t=o[e]=r[e](n),t.c()),d(t,1),t.m(l.parentNode,l))},i(n){i||(d(t),i=!0)},o(n){b(t),i=!1},d(n){n&&m(l),o[e].d(n)}}}function ve(a){let e,t,l,i,r=V(Object.entries(a[1])),o=[];for(let n=0;nb(o[n],1,1,()=>{o[n]=null});return{c(){e=h(`{ - `),t=j("div");for(let n=0;nb(o[n],1,1,()=>{o[n]=null});return{c(){e=h(`[ - `),t=j("div");for(let n=0;n{n[y]=null}),B(),r=n[i],r?r.p(c,u):(r=n[i]=f[i](c),r.c()),d(r,1),r.m(l,null))},i(c){o||(d(r),o=!0)},o(c){b(r),o=!1},d(c){c&&(m(e),m(t),m(l)),n[i].d()}}}function Oe(a,e,t){let{value:l}=e,{depth:i}=e,{collapsed:r=i>4}=e;const o=()=>{t(0,r=!1)},f=()=>{t(0,r=!1)};return a.$$set=n=>{"value"in n&&t(1,l=n.value),"depth"in n&&t(2,i=n.depth),"collapsed"in n&&t(0,r=n.collapsed)},[r,l,i,o,f]}class I extends E{constructor(e){super(),L(this,e,Oe,ye,M,{value:1,depth:2,collapsed:0})}}function Se(a){let e,t;return e=new ae({props:{$$slots:{default:[Je]},$$scope:{ctx:a}}}),{c(){O(e.$$.fragment)},m(l,i){S(e,l,i),t=!0},p(l,i){const r={};i&32&&(r.$$scope={dirty:i,ctx:l}),e.$set(r)},i(l){t||(d(e.$$.fragment,l),t=!0)},o(l){b(e.$$.fragment,l),t=!1},d(l){N(e,l)}}}function Ne(a){let e,t,l,i,r,o,f,n,s;const c=[Be,Ce],u=[];function y(g,J){return g[1]?0:1}return t=y(a),l=u[t]=c[t](a),o=new I({props:{value:a[0],depth:0}}),{c(){e=j("button"),l.c(),i=H(),r=j("div"),O(o.$$.fragment),p(e,"class","svelte-1trjy9a"),p(r,"class","json-holder svelte-1trjy9a")},m(g,J){_(g,e,J),u[t].m(e,null),_(g,i,J),_(g,r,J),S(o,r,null),f=!0,n||(s=D(e,"click",a[2]),n=!0)},p(g,J){let k=t;t=y(g),t!==k&&(C(),b(u[k],1,1,()=>{u[k]=null}),B(),l=u[t],l||(l=u[t]=c[t](g),l.c()),d(l,1),l.m(e,null));const P={};J&1&&(P.value=g[0]),o.$set(P)},i(g){f||(d(l),d(o.$$.fragment,g),f=!0)},o(g){b(l),b(o.$$.fragment,g),f=!1},d(g){g&&(m(e),m(i),m(r)),u[t].d(),N(o),n=!1,s()}}}function Je(a){let e,t;return e=new U({}),{c(){O(e.$$.fragment)},m(l,i){S(e,l,i),t=!0},i(l){t||(d(e.$$.fragment,l),t=!0)},o(l){b(e.$$.fragment,l),t=!1},d(l){N(e,l)}}}function Ce(a){let e,t,l;return t=new re({}),{c(){e=j("span"),O(t.$$.fragment),p(e,"class","copy-text")},m(i,r){_(i,e,r),S(t,e,null),l=!0},i(i){l||(d(t.$$.fragment,i),l=!0)},o(i){b(t.$$.fragment,i),l=!1},d(i){i&&m(e),N(t)}}}function Be(a){let e,t,l,i;return t=new 
ce({}),{c(){e=j("span"),O(t.$$.fragment)},m(r,o){_(r,e,o),S(t,e,null),i=!0},i(r){i||(d(t.$$.fragment,r),r&&(l||X(()=>{l=x(e,se,{duration:300}),l.start()})),i=!0)},o(r){b(t.$$.fragment,r),i=!1},d(r){r&&m(e),N(t)}}}function He(a){let e,t,l,i,r;const o=[Ne,Se],f=[];function n(s,c){return c&1&&(e=null),e==null&&(e=!!(s[0]&&s[0]!=='""'&&!Te(s[0]))),e?0:1}return t=n(a,-1),l=f[t]=o[t](a),{c(){l.c(),i=A()},m(s,c){f[t].m(s,c),_(s,i,c),r=!0},p(s,[c]){let u=t;t=n(s,c),t===u?f[t].p(s,c):(C(),b(f[u],1,1,()=>{f[u]=null}),B(),l=f[t],l?l.p(s,c):(l=f[t]=o[t](s),l.c()),d(l,1),l.m(i.parentNode,i))},i(s){r||(d(l),r=!0)},o(s){b(l),r=!1},d(s){s&&m(i),f[t].d(s)}}}function Te(a){return a&&Object.keys(a).length===0&&Object.getPrototypeOf(a)===Object.prototype}function Ve(a,e,t){let{value:l={}}=e,i=!1,r;function o(){t(1,i=!0),r&&clearTimeout(r),r=setTimeout(()=>{t(1,i=!1)},1e3)}async function f(){"clipboard"in navigator&&(await navigator.clipboard.writeText(JSON.stringify(l,null,2)),o())}return W(()=>{r&&clearTimeout(r)}),a.$$set=n=>{"value"in n&&t(0,l=n.value)},[l,i,f]}class Ee extends E{constructor(e){super(),L(this,e,Ve,He,M,{value:0})}}function $(a){let e,t;return e=new fe({props:{Icon:U,show_label:a[6],label:a[5],float:!1,disable:a[7]===!1}}),{c(){O(e.$$.fragment)},m(l,i){S(e,l,i),t=!0},p(l,i){const r={};i&64&&(r.show_label=l[6]),i&32&&(r.label=l[5]),i&128&&(r.disable=l[7]===!1),e.$set(r)},i(l){t||(d(e.$$.fragment,l),t=!0)},o(l){b(e.$$.fragment,l),t=!1},d(l){N(e,l)}}}function Le(a){let e,t,l,i,r,o=a[5]&&$(a);const f=[a[4]];let n={};for(let s=0;s{o=null}),B());const u=c&16?ne(f,[ie(s[4])]):{};t.$set(u);const y={};c&8&&(y.value=s[3]),i.$set(y)},i(s){r||(d(o),d(t.$$.fragment,s),d(i.$$.fragment,s),r=!0)},o(s){b(o),b(t.$$.fragment,s),b(i.$$.fragment,s),r=!1},d(s){s&&(m(e),m(l)),o&&o.d(s),N(t,s),N(i,s)}}}function Me(a){let e,t;return e=new oe({props:{visible:a[2],test_id:"json",elem_id:a[0],elem_classes:a[1],container:a[7],scale:a[8],min_width:a[9],padding:!1,$$slots:{default:[Le]},$$scope:{ctx:a}}}),{c(){O(e.$$.fragment)},m(l,i){S(e,l,i),t=!0},p(l,[i]){const r={};i&4&&(r.visible=l[2]),i&1&&(r.elem_id=l[0]),i&2&&(r.elem_classes=l[1]),i&128&&(r.container=l[7]),i&256&&(r.scale=l[8]),i&512&&(r.min_width=l[9]),i&4344&&(r.$$scope={dirty:i,ctx:l}),e.$set(r)},i(l){t||(d(e.$$.fragment,l),t=!0)},o(l){b(e.$$.fragment,l),t=!1},d(l){N(e,l)}}}function Ae(a,e,t){let{elem_id:l=""}=e,{elem_classes:i=[]}=e,{visible:r=!0}=e,{value:o}=e,f,{loading_status:n}=e,{label:s}=e,{show_label:c}=e,{container:u=!0}=e,{scale:y=null}=e,{min_width:g=void 0}=e;const J=ee();return a.$$set=k=>{"elem_id"in k&&t(0,l=k.elem_id),"elem_classes"in k&&t(1,i=k.elem_classes),"visible"in k&&t(2,r=k.visible),"value"in k&&t(3,o=k.value),"loading_status"in k&&t(4,n=k.loading_status),"label"in k&&t(5,s=k.label),"show_label"in k&&t(6,c=k.show_label),"container"in k&&t(7,u=k.container),"scale"in k&&t(8,y=k.scale),"min_width"in k&&t(9,g=k.min_width)},a.$$.update=()=>{a.$$.dirty&1032&&o!==f&&(t(10,f=o),J("change"))},[l,i,r,o,n,s,c,u,y,g,f]}class De extends E{constructor(e){super(),L(this,e,Ae,Me,M,{elem_id:0,elem_classes:1,visible:2,value:3,loading_status:4,label:5,show_label:6,container:7,scale:8,min_width:9})}}const ze=De,Fe=["static"];export{ze as Component,Fe as modes}; -//# sourceMappingURL=index-540ff1f4.js.map diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py deleted 
file mode 100644 index 850a0a4670e258378cc896475d7b02578025866e..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py +++ /dev/null @@ -1,736 +0,0 @@ -import inspect -import warnings -from typing import Callable, List, Optional, Union - -import numpy as np -import torch -from packaging import version -from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer - -from ...configuration_utils import FrozenDict -from ...models import AutoencoderKL, UNet2DConditionModel -from ...schedulers import KarrasDiffusionSchedulers -from ...utils import deprecate, is_accelerate_available, logging, randn_tensor -from ..pipeline_utils import DiffusionPipeline -from . import StableDiffusionSafePipelineOutput -from .safety_checker import SafeStableDiffusionSafetyChecker - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -class StableDiffusionPipelineSafe(DiffusionPipeline): - r""" - Pipeline for text-to-image generation using Safe Latent Diffusion. - - The implementation is based on the [`StableDiffusionPipeline`] - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder. Stable Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offensive or harmful. - Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details. - feature_extractor ([`CLIPImageProcessor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. 
- """ - - _optional_components = ["safety_checker", "feature_extractor"] - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: KarrasDiffusionSchedulers, - safety_checker: SafeStableDiffusionSafetyChecker, - feature_extractor: CLIPImageProcessor, - requires_safety_checker: bool = True, - ): - super().__init__() - safety_concept: Optional[str] = ( - "an image showing hate, harassment, violence, suffering, humiliation, harm, suicide, sexual, nudity," - " bodily fluids, blood, obscene gestures, illegal activity, drug use, theft, vandalism, weapons, child" - " abuse, brutality, cruelty" - ) - - if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`" - f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure " - "to update the config accordingly as leaving `steps_offset` might led to incorrect results" - " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub," - " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`" - " file" - ) - deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["steps_offset"] = 1 - scheduler._internal_dict = FrozenDict(new_config) - - if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True: - deprecation_message = ( - f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`." - " `clip_sample` should be set to False in the configuration file. Please make sure to update the" - " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in" - " future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very" - " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file" - ) - deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(scheduler.config) - new_config["clip_sample"] = False - scheduler._internal_dict = FrozenDict(new_config) - - if safety_checker is None and requires_safety_checker: - logger.warning( - f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" - " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" - " results in services or applications open to the public. Both the diffusers team and Hugging Face" - " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" - " it only for use-cases that involve analyzing network behavior or auditing its results. For more" - " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." - ) - - if safety_checker is not None and feature_extractor is None: - raise ValueError( - "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" - " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." 
- ) - - is_unet_version_less_0_9_0 = hasattr(unet.config, "_diffusers_version") and version.parse( - version.parse(unet.config._diffusers_version).base_version - ) < version.parse("0.9.0.dev0") - is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64 - if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64: - deprecation_message = ( - "The configuration file of the unet has set the default `sample_size` to smaller than" - " 64 which seems highly unlikely .If you're checkpoint is a fine-tuned version of any of the" - " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-" - " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5" - " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the" - " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`" - " in the config might lead to incorrect results in future versions. If you have downloaded this" - " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for" - " the `unet/config.json` file" - ) - deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False) - new_config = dict(unet.config) - new_config["sample_size"] = 64 - unet._internal_dict = FrozenDict(new_config) - - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - self._safety_text_concept = safety_concept - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - self.register_to_config(requires_safety_checker=requires_safety_checker) - - @property - def safety_concept(self): - r""" - Getter method for the safety concept used with SLD - - Returns: - `str`: The text describing the safety concept - """ - return self._safety_text_concept - - @safety_concept.setter - def safety_concept(self, concept): - r""" - Setter method for the safety concept used with SLD - - Args: - concept (`str`): - The text of the new safety concept - """ - self._safety_text_concept = concept - - def enable_sequential_cpu_offload(self): - r""" - Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, - text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a - `torch.device('meta') and loaded to GPU only when their specific submodule has its `forward` method called. - """ - if is_accelerate_available(): - from accelerate import cpu_offload - else: - raise ImportError("Please install accelerate via `pip install accelerate`") - - device = torch.device("cuda") - - for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae, self.safety_checker]: - if cpu_offloaded_model is not None: - cpu_offload(cpu_offloaded_model, device) - - @property - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._execution_device - def _execution_device(self): - r""" - Returns the device on which the pipeline's models will be executed. After calling - `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module - hooks. 
- """ - if not hasattr(self.unet, "_hf_hook"): - return self.device - for module in self.unet.modules(): - if ( - hasattr(module, "_hf_hook") - and hasattr(module._hf_hook, "execution_device") - and module._hf_hook.execution_device is not None - ): - return torch.device(module._hf_hook.execution_device) - return self.device - - def _encode_prompt( - self, - prompt, - device, - num_images_per_prompt, - do_classifier_free_guidance, - negative_prompt, - enable_safety_guidance, - ): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `List[str]`): - prompt to be encoded - device: (`torch.device`): - torch device - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - """ - batch_size = len(prompt) if isinstance(prompt, list) else 1 - - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="max_length", return_tensors="pt").input_ids - - if not torch.equal(text_input_ids, untruncated_ids): - removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = text_inputs.attention_mask.to(device) - else: - attention_mask = None - - prompt_embeds = self.text_encoder( - text_input_ids.to(device), - attention_mask=attention_mask, - ) - prompt_embeds = prompt_embeds[0] - - # duplicate text embeddings for each generation per prompt, using mps friendly method - bs_embed, seq_len, _ = prompt_embeds.shape - prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1) - prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." 
- ) - else: - uncond_tokens = negative_prompt - - max_length = text_input_ids.shape[-1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = uncond_input.attention_mask.to(device) - else: - attention_mask = None - - negative_prompt_embeds = self.text_encoder( - uncond_input.input_ids.to(device), - attention_mask=attention_mask, - ) - negative_prompt_embeds = negative_prompt_embeds[0] - - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = negative_prompt_embeds.shape[1] - negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1) - negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1) - - # Encode the safety concept text - if enable_safety_guidance: - safety_concept_input = self.tokenizer( - [self._safety_text_concept], - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - safety_embeddings = self.text_encoder(safety_concept_input.input_ids.to(self.device))[0] - - # duplicate safety embeddings for each generation per prompt, using mps friendly method - seq_len = safety_embeddings.shape[1] - safety_embeddings = safety_embeddings.repeat(batch_size, num_images_per_prompt, 1) - safety_embeddings = safety_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance + sld, we need to do three forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing three forward passes - prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds, safety_embeddings]) - - else: - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds]) - - return prompt_embeds - - def run_safety_checker(self, image, device, dtype, enable_safety_guidance): - if self.safety_checker is not None: - images = image.copy() - safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device) - image, has_nsfw_concept = self.safety_checker( - images=image, clip_input=safety_checker_input.pixel_values.to(dtype) - ) - flagged_images = np.zeros((2, *image.shape[1:])) - if any(has_nsfw_concept): - logger.warning( - "Potential NSFW content was detected in one or more images. A black image will be returned" - " instead." - f"{'You may look at this images in the `unsafe_images` variable of the output at your own discretion.' 
if enable_safety_guidance else 'Try again with a different prompt and/or seed.'}" - ) - for idx, has_nsfw_concept in enumerate(has_nsfw_concept): - if has_nsfw_concept: - flagged_images[idx] = images[idx] - image[idx] = np.zeros(image[idx].shape) # black image - else: - has_nsfw_concept = None - flagged_images = None - return image, has_nsfw_concept, flagged_images - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents - def decode_latents(self, latents): - latents = 1 / self.vae.config.scaling_factor * latents - image = self.vae.decode(latents).sample - image = (image / 2 + 0.5).clamp(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - return image - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. - # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.check_inputs - def check_inputs( - self, - prompt, - height, - width, - callback_steps, - negative_prompt=None, - prompt_embeds=None, - negative_prompt_embeds=None, - ): - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - if prompt is not None and prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to" - " only forward one of the two." - ) - elif prompt is None and prompt_embeds is None: - raise ValueError( - "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined." - ) - elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if negative_prompt is not None and negative_prompt_embeds is not None: - raise ValueError( - f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:" - f" {negative_prompt_embeds}. Please make sure to only forward one of the two." 
- ) - - if prompt_embeds is not None and negative_prompt_embeds is not None: - if prompt_embeds.shape != negative_prompt_embeds.shape: - raise ValueError( - "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but" - f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`" - f" {negative_prompt_embeds.shape}." - ) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents - def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None): - shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor) - if isinstance(generator, list) and len(generator) != batch_size: - raise ValueError( - f"You have passed a list of generators of length {len(generator)}, but requested an effective batch" - f" size of {batch_size}. Make sure the batch size matches the length of the generators." - ) - - if latents is None: - latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - else: - latents = latents.to(device) - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - return latents - - def perform_safety_guidance( - self, - enable_safety_guidance, - safety_momentum, - noise_guidance, - noise_pred_out, - i, - sld_guidance_scale, - sld_warmup_steps, - sld_threshold, - sld_momentum_scale, - sld_mom_beta, - ): - # Perform SLD guidance - if enable_safety_guidance: - if safety_momentum is None: - safety_momentum = torch.zeros_like(noise_guidance) - noise_pred_text, noise_pred_uncond = noise_pred_out[0], noise_pred_out[1] - noise_pred_safety_concept = noise_pred_out[2] - - # Equation 6 - scale = torch.clamp(torch.abs((noise_pred_text - noise_pred_safety_concept)) * sld_guidance_scale, max=1.0) - - # Equation 6 - safety_concept_scale = torch.where( - (noise_pred_text - noise_pred_safety_concept) >= sld_threshold, torch.zeros_like(scale), scale - ) - - # Equation 4 - noise_guidance_safety = torch.mul((noise_pred_safety_concept - noise_pred_uncond), safety_concept_scale) - - # Equation 7 - noise_guidance_safety = noise_guidance_safety + sld_momentum_scale * safety_momentum - - # Equation 8 - safety_momentum = sld_mom_beta * safety_momentum + (1 - sld_mom_beta) * noise_guidance_safety - - if i >= sld_warmup_steps: # Warmup - # Equation 3 - noise_guidance = noise_guidance - noise_guidance_safety - return noise_guidance, safety_momentum - - @torch.no_grad() - def __call__( - self, - prompt: Union[str, List[str]], - height: Optional[int] = None, - width: Optional[int] = None, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - negative_prompt: Optional[Union[str, List[str]]] = None, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - latents: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - sld_guidance_scale: Optional[float] = 1000, - sld_warmup_steps: Optional[int] = 10, - sld_threshold: Optional[float] = 0.01, - sld_momentum_scale: Optional[float] = 0.3, - sld_mom_beta: Optional[float] = 0.4, - ): - r""" - Function invoked when calling the pipeline for generation. 
- - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator`, *optional*): - One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) - to make generation deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will ge generated by sampling using the supplied random `generator`. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - sld_guidance_scale (`float`, *optional*, defaults to 1000): - Safe latent guidance as defined in [Safe Latent Diffusion](https://arxiv.org/abs/2211.05105). - `sld_guidance_scale` is defined as sS of Eq. 6. If set to be less than 1, safety guidance will be - disabled. - sld_warmup_steps (`int`, *optional*, defaults to 10): - Number of warmup steps for safety guidance. SLD will only be applied for diffusion steps greater than - `sld_warmup_steps`. `sld_warmup_steps` is defined as `delta` of [Safe Latent - Diffusion](https://arxiv.org/abs/2211.05105). 
- sld_threshold (`float`, *optional*, defaults to 0.01): - Threshold that separates the hyperplane between appropriate and inappropriate images. `sld_threshold` - is defined as `lamda` of Eq. 5 in [Safe Latent Diffusion](https://arxiv.org/abs/2211.05105). - sld_momentum_scale (`float`, *optional*, defaults to 0.3): - Scale of the SLD momentum to be added to the safety guidance at each diffusion step. If set to 0.0 - momentum will be disabled. Momentum is already built up during warmup, i.e. for diffusion steps smaller - than `sld_warmup_steps`. `sld_momentum_scale` is defined as `sm` of Eq. 7 in [Safe Latent - Diffusion](https://arxiv.org/abs/2211.05105). - sld_mom_beta (`float`, *optional*, defaults to 0.4): - Defines how safety guidance momentum builds up. `sld_mom_beta` indicates how much of the previous - momentum will be kept. Momentum is already built up during warmup, i.e. for diffusion steps smaller - than `sld_warmup_steps`. `sld_mom_beta` is defined as `beta m` of Eq. 8 in [Safe Latent - Diffusion](https://arxiv.org/abs/2211.05105). - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - # 0. Default height and width to unet - height = height or self.unet.config.sample_size * self.vae_scale_factor - width = width or self.unet.config.sample_size * self.vae_scale_factor - - # 1. Check inputs. Raise error if not correct - self.check_inputs(prompt, height, width, callback_steps) - - # 2. Define call parameters - batch_size = 1 if isinstance(prompt, str) else len(prompt) - device = self._execution_device - - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - enable_safety_guidance = sld_guidance_scale > 1.0 and do_classifier_free_guidance - if not enable_safety_guidance: - warnings.warn("Safety checker disabled!") - - # 3. Encode input prompt - prompt_embeds = self._encode_prompt( - prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt, enable_safety_guidance - ) - - # 4. Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - # 5. Prepare latent variables - num_channels_latents = self.unet.in_channels - latents = self.prepare_latents( - batch_size * num_images_per_prompt, - num_channels_latents, - height, - width, - prompt_embeds.dtype, - device, - generator, - latents, - ) - - # 6. Prepare extra step kwargs. 
- extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - safety_momentum = None - - num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - # expand the latents if we are doing classifier free guidance - latent_model_input = ( - torch.cat([latents] * (3 if enable_safety_guidance else 2)) - if do_classifier_free_guidance - else latents - ) - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=prompt_embeds).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_out = noise_pred.chunk((3 if enable_safety_guidance else 2)) - noise_pred_uncond, noise_pred_text = noise_pred_out[0], noise_pred_out[1] - - # default classifier free guidance - noise_guidance = noise_pred_text - noise_pred_uncond - - # Perform SLD guidance - if enable_safety_guidance: - if safety_momentum is None: - safety_momentum = torch.zeros_like(noise_guidance) - noise_pred_safety_concept = noise_pred_out[2] - - # Equation 6 - scale = torch.clamp( - torch.abs((noise_pred_text - noise_pred_safety_concept)) * sld_guidance_scale, max=1.0 - ) - - # Equation 6 - safety_concept_scale = torch.where( - (noise_pred_text - noise_pred_safety_concept) >= sld_threshold, - torch.zeros_like(scale), - scale, - ) - - # Equation 4 - noise_guidance_safety = torch.mul( - (noise_pred_safety_concept - noise_pred_uncond), safety_concept_scale - ) - - # Equation 7 - noise_guidance_safety = noise_guidance_safety + sld_momentum_scale * safety_momentum - - # Equation 8 - safety_momentum = sld_mom_beta * safety_momentum + (1 - sld_mom_beta) * noise_guidance_safety - - if i >= sld_warmup_steps: # Warmup - # Equation 3 - noise_guidance = noise_guidance - noise_guidance_safety - - noise_pred = noise_pred_uncond + guidance_scale * noise_guidance - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # 8. Post-processing - image = self.decode_latents(latents) - - # 9. Run safety checker - image, has_nsfw_concept, flagged_images = self.run_safety_checker( - image, device, prompt_embeds.dtype, enable_safety_guidance - ) - - # 10. 
Convert to PIL - if output_type == "pil": - image = self.numpy_to_pil(image) - if flagged_images is not None: - flagged_images = self.numpy_to_pil(flagged_images) - - if not return_dict: - return ( - image, - has_nsfw_concept, - self._safety_text_concept if enable_safety_guidance else None, - flagged_images, - ) - - return StableDiffusionSafePipelineOutput( - images=image, - nsfw_content_detected=has_nsfw_concept, - applied_safety_concept=self._safety_text_concept if enable_safety_guidance else None, - unsafe_images=flagged_images, - ) diff --git a/spaces/declare-lab/tango/diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion_flax_inpaint.py b/spaces/declare-lab/tango/diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion_flax_inpaint.py deleted file mode 100644 index 432619a79ddd32d288893e3021a14ab6893b370a..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion_flax_inpaint.py +++ /dev/null @@ -1,82 +0,0 @@ -# coding=utf-8 -# Copyright 2023 HuggingFace Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import gc -import unittest - -from diffusers import FlaxStableDiffusionInpaintPipeline -from diffusers.utils import is_flax_available, load_image, slow -from diffusers.utils.testing_utils import require_flax - - -if is_flax_available(): - import jax - import jax.numpy as jnp - from flax.jax_utils import replicate - from flax.training.common_utils import shard - - -@slow -@require_flax -class FlaxStableDiffusionInpaintPipelineIntegrationTests(unittest.TestCase): - def tearDown(self): - # clean up the VRAM after each test - super().tearDown() - gc.collect() - - def test_stable_diffusion_inpaint_pipeline(self): - init_image = load_image( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" - "/sd2-inpaint/init_image.png" - ) - mask_image = load_image( - "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-inpaint/mask.png" - ) - - model_id = "xvjiarui/stable-diffusion-2-inpainting" - pipeline, params = FlaxStableDiffusionInpaintPipeline.from_pretrained(model_id, safety_checker=None) - - prompt = "Face of a yellow cat, high resolution, sitting on a park bench" - - prng_seed = jax.random.PRNGKey(0) - num_inference_steps = 50 - - num_samples = jax.device_count() - prompt = num_samples * [prompt] - init_image = num_samples * [init_image] - mask_image = num_samples * [mask_image] - prompt_ids, processed_masked_images, processed_masks = pipeline.prepare_inputs(prompt, init_image, mask_image) - - # shard inputs and rng - params = replicate(params) - prng_seed = jax.random.split(prng_seed, jax.device_count()) - prompt_ids = shard(prompt_ids) - processed_masked_images = shard(processed_masked_images) - processed_masks = shard(processed_masks) - - output = pipeline( - prompt_ids, processed_masks, processed_masked_images, params, prng_seed, num_inference_steps, jit=True - ) - - images = output.images.reshape(num_samples, 512, 
512, 3) - - image_slice = images[0, 253:256, 253:256, -1] - - output_slice = jnp.asarray(jax.device_get(image_slice.flatten())) - expected_slice = jnp.array( - [0.3611307, 0.37649736, 0.3757408, 0.38213953, 0.39295167, 0.3841631, 0.41554978, 0.4137475, 0.4217084] - ) - print(f"output_slice: {output_slice}") - assert jnp.abs(output_slice - expected_slice).max() < 1e-2 diff --git a/spaces/deepwisdom/MetaGPT/metagpt/utils/special_tokens.py b/spaces/deepwisdom/MetaGPT/metagpt/utils/special_tokens.py deleted file mode 100644 index 2adb93c7781f00e1952e720a66a33dd353a9cc57..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/metagpt/utils/special_tokens.py +++ /dev/null @@ -1,4 +0,0 @@ -# token to separate different code messages in a WriteCode Message content -MSG_SEP = "#*000*#" -# token to seperate file name and the actual code text in a code message -FILENAME_CODE_SEP = "#*001*#" diff --git a/spaces/deepwisdom/MetaGPT/tests/metagpt/tools/test_prompt_generator.py b/spaces/deepwisdom/MetaGPT/tests/metagpt/tools/test_prompt_generator.py deleted file mode 100644 index d2e870c6d710bce2096f470026b25a3510b2a5b6..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/tests/metagpt/tools/test_prompt_generator.py +++ /dev/null @@ -1,58 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/2 17:46 -@Author : alexanderwu -@File : test_prompt_generator.py -""" - -import pytest - -from metagpt.logs import logger -from metagpt.tools.prompt_writer import ( - BEAGECTemplate, - EnronTemplate, - GPTPromptGenerator, - WikiHowTemplate, -) - - -@pytest.mark.usefixtures("llm_api") -def test_gpt_prompt_generator(llm_api): - generator = GPTPromptGenerator() - example = "商品名称:WonderLab 新肌果味代餐奶昔 小胖瓶 胶原蛋白升级版 饱腹代餐粉6瓶 75g/瓶(6瓶/盒) 店铺名称:金力宁食品专营店 " \ - "品牌:WonderLab 保质期:1年 产地:中国 净含量:450g" - - results = llm_api.ask_batch(generator.gen(example)) - logger.info(results) - assert len(results) > 0 - - -@pytest.mark.usefixtures("llm_api") -def test_wikihow_template(llm_api): - template = WikiHowTemplate() - question = "learn Python" - step = 5 - - results = template.gen(question, step) - assert len(results) > 0 - assert any("Give me 5 steps to learn Python." in r for r in results) - - -@pytest.mark.usefixtures("llm_api") -def test_enron_template(llm_api): - template = EnronTemplate() - subj = "Meeting Agenda" - - results = template.gen(subj) - assert len(results) > 0 - assert any("Write an email with the subject \"Meeting Agenda\"." in r for r in results) - - -def test_beagec_template(): - template = BEAGECTemplate() - - results = template.gen() - assert len(results) > 0 - assert any("Edit and revise this document to improve its grammar, vocabulary, spelling, and style." 
- in r for r in results) diff --git a/spaces/dhansmair/flamingo-mini-cap/app.py b/spaces/dhansmair/flamingo-mini-cap/app.py deleted file mode 100644 index 4981768c4155837b84e0ca9f6b2f5a9b44786c4e..0000000000000000000000000000000000000000 --- a/spaces/dhansmair/flamingo-mini-cap/app.py +++ /dev/null @@ -1,50 +0,0 @@ -import os -import gradio as gr -import torch -import PIL - -from flamingo_mini import FlamingoConfig, FlamingoModel, FlamingoProcessor - - - -EXAMPLES_DIR = 'examples' -DEFAULT_PROMPT = "" - -device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - -model = FlamingoModel.from_pretrained('dhansmair/flamingo-mini') -model.to(device) -model.eval() - -processor = FlamingoProcessor(model.config) - -# setup some example images -examples = [] -if os.path.isdir(EXAMPLES_DIR): - for file in os.listdir(EXAMPLES_DIR): - path = EXAMPLES_DIR + "/" + file - examples.append([path, DEFAULT_PROMPT]) - - -def predict_caption(image, prompt): - assert isinstance(prompt, str) - - caption = model.generate_captions( - processor, - images=image, - prompt=prompt - ) - - if isinstance(caption, list): - caption = caption[0] - - return caption - - -iface = gr.Interface(fn=predict_caption, - inputs=[gr.Image(type="pil"), gr.Textbox(value=DEFAULT_PROMPT, label="Prompt")], - examples=examples, - outputs="text") - -iface.launch() - diff --git a/spaces/diacanFperku/AutoGPT/Coolorus 2.5.14.md b/spaces/diacanFperku/AutoGPT/Coolorus 2.5.14.md deleted file mode 100644 index fb16899a0b95130477d020ac0dbdb70a1c627ffb..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Coolorus 2.5.14.md +++ /dev/null @@ -1,10 +0,0 @@ -
        -

d. Coolorus is a great tool to have, especially if you want to work with cool colors in your designs. *Coolorus is a good app to use if you are a professional designer wanting to really work with some hot colors. *It's not the best tool to use if you are a new designer, but it's a great tool for intermediate-level designers. *It's just not very robust.

        -

e. Coolorus is very convenient and is good for amateur designers and professionals. *It is a very useful extension for designers who work with color more than they use the basic Photoshop color wheel. *It can be easily used in conjunction with other extensions.

        -

        Coolorus 2.5.14


        Download File 🗹 https://gohhs.com/2uFTQN



        -

I think the best route you could take with Coolorus is to develop it further to be unique and relevant, then pitch it to Adobe and hope they are interested. *Here's one idea on how to make it relevant: implement a color-scheme AI that suggests additional colors or schemes based upon previous schemes (kind of like Genius for iTunes). *That would make the process of picking colors more exploratory, and a designer may find inspiration with a computer-generated color scheme (I know I have). Hope this helps.
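To make that idea concrete, here is a minimal, hypothetical sketch of the rule-based starting point such a suggestion feature could grow from; it is not part of Coolorus, the function name and hue offsets are invented, and a real "Genius for colors" would learn from a designer's previous schemes rather than apply fixed rotations:

```python
# Toy illustration only (not Coolorus code): suggest companion colors for a
# seed color by rotating its hue in HSV space: an analogous pair plus the complement.
import colorsys

def suggest_scheme(rgb, offsets_deg=(30, -30, 180)):
    """Return suggested companion colors for an RGB triple in the 0-255 range."""
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    suggestions = []
    for off in offsets_deg:
        new_h = (h + off / 360.0) % 1.0          # rotate the hue around the wheel
        nr, ng, nb = colorsys.hsv_to_rgb(new_h, s, v)
        suggestions.append(tuple(round(c * 255) for c in (nr, ng, nb)))
    return suggestions

print(suggest_scheme((200, 60, 40)))  # two analogous colors and a complement
```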

        -

Coolorus PS is an extension for Adobe Photoshop and Flash which provides a versatile color wheel. Coolorus PS will enable you to pick different colors for any graphic in Adobe applications. This application is best suited for those professionals who want a better workflow and need to complete their projects in a minimum of clicks.

        -

1. Coolorus isn't bug-free. No software is safe from exploits, because exploits are fundamentally a creative endeavor with an infinite number of possible solutions. There is no way that Coolorus could defend against an infinite number of undiscovered potential exploits. I'm sure if you hired a hacker to overflow your stack using a color scheme, they could get Coolorus to do all kinds of malicious things.
2. "But we are only human and you might just see a bug in Coolorus, so please e-mail us and we'll fix it in no time." - So, Coolorus is bug-free unless someone finds a bug? Marketing like this really frustrates me because it seems like you are manipulating uninformed people with half-truths. Straight talk will get you a lot further with intelligent consumers.
3. Also, I just tried detaching the color picker in Photoshop, and Photoshop didn't crash, so maybe that bug got fixed in CS5/5.5. I don't have time to check the other bugs you mention on the website.

        -

        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/FoneLab FoneTrans For IOS 9.0.8 Patch.md b/spaces/diacanFperku/AutoGPT/FoneLab FoneTrans For IOS 9.0.8 Patch.md deleted file mode 100644 index 2dfd6db264d6a22710c065809e42f2435fb9c16d..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/FoneLab FoneTrans For IOS 9.0.8 Patch.md +++ /dev/null @@ -1,26 +0,0 @@ - -

        How to Transfer Data from iPhone to PC with FoneLab FoneTrans for iOS 9.0.8 Patch

        -

If you are looking for a reliable and easy way to transfer data from your iPhone to your PC, you may want to try FoneLab FoneTrans for iOS 9.0.8 Patch. This is a powerful and versatile program that can help you manage and transfer various types of data, such as photos, videos, music, contacts, messages, notes, and more.

        -

        FoneLab FoneTrans for iOS 9.0.8 Patch is the latest version of the software that comes with some new features and improvements. For example, it supports the latest iOS 15 and iPhone 13 models, it has a faster and more stable data transfer speed, and it has a more user-friendly interface.

        -

        FoneLab FoneTrans for iOS 9.0.8 Patch


        Download File >>> https://gohhs.com/2uFTug



        -

        In this article, we will show you how to use FoneLab FoneTrans for iOS 9.0.8 Patch to transfer data from your iPhone to your PC in a few simple steps.

        -

        Step 1: Download and Install FoneLab FoneTrans for iOS 9.0.8 Patch

        -

        The first thing you need to do is to download and install FoneLab FoneTrans for iOS 9.0.8 Patch on your PC. You can get it from the official website or from the link below:

        -Download FoneLab FoneTrans for iOS 9.0.8 Patch -

        After downloading the software, run the setup file and follow the instructions to install it on your PC.

        -

        Step 2: Connect Your iPhone to Your PC

        -

        Next, you need to connect your iPhone to your PC using a USB cable. Make sure you have unlocked your iPhone and trusted your PC on your device. Once connected, the software will automatically detect your iPhone and show its basic information on the main interface.

        -

        Step 3: Select the Data You Want to Transfer

        -

        Now, you can select the data you want to transfer from your iPhone to your PC. On the left panel of the software, you will see different categories of data, such as Photos, Music, Videos, Contacts, Messages, etc. You can click on each category to preview the data in detail on the right panel.

        -

        To select the data you want to transfer, you can check the boxes next to them or select all by checking the box at the top. You can also use the search bar or the filter options to find the specific data you need.

        -

        Step 4: Transfer Data from iPhone to PC

        -

        Once you have selected the data you want to transfer, you can click on the "Export to PC" button at the bottom right corner of the software. Then, you can choose a destination folder on your PC where you want to save the transferred data.

        -

        The software will start transferring the data from your iPhone to your PC immediately. You can see the progress bar and the time remaining on the screen. Please do not disconnect your iPhone or close the software during the process.

        -

        -

        When the transfer is completed, you will see a pop-up window that shows you how many files have been transferred successfully and how many have failed. You can click on "OK" to finish the process.

        -

        Conclusion

        -

        That's how you can use FoneLab FoneTrans for iOS 9.0.8 Patch to transfer data from your iPhone to your PC easily and quickly. As you can see, this software is very simple and convenient to use, and it can handle various types of data without any hassle.

        -

        If you want to try this software for yourself, you can download it from the link below and enjoy a free trial for 30 days:

        -Download FoneLab FoneTrans for iOS 9.0.8 Patch

        d5da3c52bf
        -
        -
        \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Free Almena Method Typing Torrent BEST.md b/spaces/diacanFperku/AutoGPT/Free Almena Method Typing Torrent BEST.md deleted file mode 100644 index 0d4a2c9b3c312733c2b6ddd99424b0c161a8cad9..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Free Almena Method Typing Torrent BEST.md +++ /dev/null @@ -1,30 +0,0 @@ -

        Free Almena Method Typing Torrent


Download File >>> https://gohhs.com/2uFU2u



        -
LEARN IT TODAY, REMEMBER IT FOREVER!

The Almena Method™ IS BASED ON:
 - An understanding of the English language, to be able to read, write, speak and understand the English language
 - The 80wpm keyboarding standard, to be able to type and communicate using this standard in order to make life easier for yourself
 - The ability to memorize the lessons and learn them thoroughly and independently, to ensure their continued retention and application.
 - Finally, the application of the lessons, to ensure their continued impact on the student.

LEARN TO TOUCH TYPE UP TO 80wpm IN JUST A FEW HOURS! LEARN IT TODAY, REMEMBER IT FOREVER!

The Almena Method™ is based on the success of the Almena Touch-typing method, which has been used successfully in schools and colleges in various countries for over 30 years and which is now more widely used than ever before.

Over the years the Almena Method™ has been the chosen method of many schools and colleges and has gained its reputation as a beneficial, simple, no-nonsense, efficient way to teach touch-typing in a form that not only stimulates the child's own interest but also gives him/her the confidence to be able to use the skill on an everyday basis.

The Almena Method™ includes the following qualities:
 - A thorough and complete method that teaches all aspects of keyboarding, to be able to use the keyboard in an efficient manner.
 - A method that uses the learning standard commonly known as the '80 wpm' keyboard standard, which ensures that students at all levels of learning receive an effective method that is not only demanding but also gives the student the knowledge of the standard keyboarding techniques required to develop the skills needed to use the keyboard to communicate efficiently on a daily basis.
 - A method that stimulates the child's interest in and understanding of the keyboard and provides positive reinforcement and motivating guidance to the learner throughout the teaching and learning of the lessons.
 - A method that teaches all skills on a one-on-one basis and then gradually builds and repeats all skills to ensure the success of the learner and to ensure that the skills will stay in the memory and be kept fresh and in use for the long term.
 - A method that 4fefd39f24
        -
        -
        -

        diff --git a/spaces/diacanFperku/AutoGPT/Instalaciones Electricas Residenciales Javier Oropeza Pdf 53.md b/spaces/diacanFperku/AutoGPT/Instalaciones Electricas Residenciales Javier Oropeza Pdf 53.md deleted file mode 100644 index 85c93115c10563ba69d1ce4cfba14ff973df9a3c..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Instalaciones Electricas Residenciales Javier Oropeza Pdf 53.md +++ /dev/null @@ -1,15 +0,0 @@ -

        instalaciones electricas residenciales javier oropeza pdf 53


DOWNLOAD >>> https://gohhs.com/2uFSYM



        -
-Model Situatie De Lucrari Constructii Pdf 79 Free 2021.07.11 00:15 ...
-Instalaciones Electricas Residenciales Javier Oropeza Pdf 53 2021.07.09 02:14 ...
-Residenciales Javier Oropeza Pdf 53 2021.07.09 02:01 ...
-Publicado En Espanol De ZUWO: A Cristina González Pdf 64 2021.07.09 18:06 ...
-Construction Books | Newspaper, magazine, and brochures ...
-Free Download Books - Real Estate Database from...
-Residenciales Javier Oropeza Pdf 53 | 2021.07.09 02:01:53.
-Residentiales...
-Constructii Electricas Pdf 79 Free; 2021.10.08 19:56:18.
-Constructii Electricas...
-Codigo de Autor de Pdf Cristina González Pdf 59 ...
-2021.10.08 18:36:48. 8a78ff9644
        -
        -
        -

        diff --git a/spaces/diacanFperku/AutoGPT/LEGO MARVEL Super Heroes-FLT [P2PDL] Fitgirl Repack.md b/spaces/diacanFperku/AutoGPT/LEGO MARVEL Super Heroes-FLT [P2PDL] Fitgirl Repack.md deleted file mode 100644 index 3ffd4f1503a45bcff13156f63d6635aaff167ca4..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/LEGO MARVEL Super Heroes-FLT [P2PDL] Fitgirl Repack.md +++ /dev/null @@ -1,6 +0,0 @@ -

        LEGO MARVEL Super Heroes-FLT [P2PDL] Fitgirl Repack


        DOWNLOADhttps://gohhs.com/2uFVrK



        -
        -Join your favorite Super Heroes and Super Villains from different eras and realities as they go head-to-head with the time-traveling…. LEGO® ... 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/diacanFperku/AutoGPT/Password Mathtype 6.9 Cracked Txt File.md b/spaces/diacanFperku/AutoGPT/Password Mathtype 6.9 Cracked Txt File.md deleted file mode 100644 index f3301a97152d07042c3fae4608902a89d580e44e..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Password Mathtype 6.9 Cracked Txt File.md +++ /dev/null @@ -1,157 +0,0 @@ -
        -

        Password Mathtype 6.9 Cracked Txt File: What Is It and How to Use It?

        - -

        If you are looking for a way to create and edit mathematical equations and symbols in your documents, presentations, or web pages, you might have heard of MathType 6.9. MathType 6.9 is a powerful and easy-to-use software that allows you to create and edit equations using a graphical interface or a keyboard shortcut. You can also integrate MathType 6.9 with various applications, such as Microsoft Word, PowerPoint, Excel, Google Docs, Moodle, WordPress, and more.

        -

        Password Mathtype 6.9 Cracked Txt File


        Download 🆓 https://gohhs.com/2uFVIO



        - -

However, MathType 6.9 is not free software. You need to buy a license to use it without any limitations or watermarks. The license costs $97 for a single user or $57 per year for a subscription. If you don't want to pay for the license, you might be tempted to look for a cracked version of MathType 6.9 on the internet.

        - -

        One of the most popular ways to crack MathType 6.9 is to use a password mathtype 6.9 cracked txt file. A password mathtype 6.9 cracked txt file is a text file that contains a serial key or a registration code that can activate MathType 6.9 without paying for it. You can download this file from various torrent sites or file-sharing platforms.

        - -

        How to Download Password Mathtype 6.9 Cracked Txt File?

        - -

        To download password mathtype 6.9 cracked txt file, you need to have a torrent client installed on your computer, such as BitTorrent or uTorrent. Then, you need to find a torrent site that offers this file, such as LimeTorrents or TechMazze. You can use a search engine like Google or Bing to find these sites.

        - -

        Once you find a torrent site that has password mathtype 6.9 cracked txt file, you need to click on the download link or magnet link to start the download process. You might need to wait for some time until enough seeders (people who have the complete file) and leechers (people who are downloading the file) are available.

        - -

After the download is complete, you need to unzip or extract the file using software such as WinRAR or 7-Zip. Then, you need to open the file and copy the serial key or registration code that is inside.

        - -

        How to Use Password Mathtype 6.9 Cracked Txt File?

        - -

        To use password mathtype 6.9 cracked txt file, you need to have MathType 6.9 installed on your computer. You can download MathType 6.9 from the official website of Design Science or from other sources on the internet.

        - -

        After installing MathType 6.9, you need to launch it and go to the Help menu and click on Unlock/Register MathType... A dialog box will appear where you need to enter your name and organization and paste the serial key or registration code that you copied from password mathtype 6.9 cracked txt file.

        - -

        Then, you need to click on OK and restart MathType 6.9. You should see that MathType 6.9 is now activated and you can use it without any limitations or watermarks.

        -

        - -

        What are the Risks of Using Password Mathtype 6.9 Cracked Txt File?

        - -

        While using password mathtype 6.9 cracked txt file might seem like an easy and convenient way to get MathType 6.9 for free, it also comes with some risks and disadvantages that you should be aware of:

        - -
          -
        • It is illegal: Using password mathtype 6.9 cracked txt file is a violation of the intellectual property rights of Design Science, the developer of MathType 6.9. You could face legal consequences if you are caught using it.
        • -
        • It is unethical: Using password mathtype 6.9 cracked txt file is unfair to Design Science, who spent time and money developing MathType 6.9 and providing support and updates for it.
        • -
        • It is unsafe: Using password mathtype 6.9 cracked txt file could expose your computer to viruses, malware, spyware, or other harmful programs that could damage your system or steal your personal information.
        • -
        • It is unreliable: Using password mathtype 6.9 cracked txt file could cause MathType 6.9 to malfunction or crash unexpectedly.
        • -
        • It is outdated: Using password mathtype 6.9 cracked txt file could prevent you from getting the latest updates and features of MathType 6.9.
        • -
        - -

        What are the Alternatives to Password Mathtype 6.9 Cracked Txt File?

        - -

        If you want to use MathType 6.9 without paying for it, but also without using password mathtype 6.9 cracked txt file, there are some alternatives that you can try:

        - -
          -
        • Use the trial version: You can download and use MathType 6.9 for free for 30 days from the official website of Design Science.
        • -
• Use the free version: When the 30-day trial expires, MathType 6.9 keeps working in a reduced "MathType Lite" mode, which you can continue to use for free.
        • -
        • Use an online equation editor: You can use an online equation editor such as Mathcha.io or LaTeX Equation Editor that allows you to create and edit equations in your browser.
        • -
• Use open-source software: You can use open-source software such as LibreOffice Math or TeXstudio that allows you to create and edit equations on your computer.
        • -
• Buy the license: You can buy a MathType 6.9 license directly from Design Science (now part of Wiris) at a reasonable price.
        • -
        - -


        How to Integrate MathType 6.9 with Other Applications?

        - -

        One of the advantages of MathType 6.9 is that it can integrate with various applications that you use for creating and editing documents, presentations, or web pages. This way, you can insert and edit equations directly in your application without switching to MathType 6.9.

        - -

        Some of the applications that MathType 6.9 can integrate with are:

        - -
          -
        • Microsoft Word: You can use MathType 6.9 to create and edit equations in your Word documents. You can also use Word's built-in equation editor and convert your equations to MathType 6.9 format.
        • -
        • Microsoft PowerPoint: You can use MathType 6.9 to create and edit equations in your PowerPoint slides. You can also use PowerPoint's built-in equation editor and convert your equations to MathType 6.9 format.
        • -
        • Microsoft Excel: You can use MathType 6.9 to create and edit equations in your Excel worksheets. You can also use Excel's built-in equation editor and convert your equations to MathType 6.9 format.
        • -
        • Google Docs: You can use MathType 6.9 to create and edit equations in your Google Docs documents. You can also use Google Docs' built-in equation editor and convert your equations to MathType 6.9 format.
        • -
        • Moodle: You can use MathType 6.9 to create and edit equations in your Moodle courses. You can also use Moodle's built-in equation editor and convert your equations to MathType 6.9 format.
        • -
        • WordPress: You can use MathType 6.9 to create and edit equations in your WordPress posts or pages. You can also use WordPress' built-in equation editor and convert your equations to MathType 6.9 format.
        • -
        - -

        To integrate MathType 6.9 with these applications, you need to install the MathType Add-in or Plugin for each application from the official website of Design Science or from other sources on the internet.

        - -

        How to Update MathType 6.9?

        - -

        If you want to get the latest features and bug fixes of MathType 6.9, you need to update it regularly.

        - -

        If you have a licensed version of MathType 6.9, you can update it automatically or manually from the Help menu in MathType 6.9.

        - -

        If you have a cracked version of MathType 6.9 using password mathtype 6.9 cracked txt file, you might not be able to update it automatically or manually from the Help menu in MathType 6.9.

        - -

        In that case, you need to download the latest version of password mathtype 6.9 cracked txt file from a torrent site or file-sharing platform and use it to activate the updated version of MathType 6.9.

        - -

        However, this might not work if Design Science has changed the activation mechanism or if the updated version of password mathtype 6.9 cracked txt file is not available yet.

        - -


        How to Create and Edit Equations with MathType 6.9?

        - -

        Once you have activated MathType 6.9 using password mathtype 6.9 cracked txt file, you can start creating and editing equations with it.

        - -

        There are two ways to create and edit equations with MathType 6.9:

        - -
          -
        • Using the graphical interface: You can use the graphical interface of MathType 6.9 to create and edit equations using your mouse and keyboard. You can choose from various templates, symbols, fonts, colors, and styles to create your equations. You can also use the toolbar buttons and menu commands to insert and edit your equations.
        • -
        • Using the keyboard shortcut: You can use the keyboard shortcut of MathType 6.9 to create and edit equations using your keyboard only. You can type in a special syntax called TeX or LaTeX to create your equations. You can also use the keyboard commands to insert and edit your equations.
        • -
        - -

        You can switch between the graphical interface and the keyboard shortcut by clicking on the Toggle TeX button in MathType 6.9.
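To give a concrete idea of what this looks like, here is a small illustrative snippet of the kind of TeX/LaTeX markup that such an input mode typically accepts (this example is not taken from the MathType documentation, so treat the exact commands as an assumption to verify):

```latex
% Quadratic formula, written the way TeX-style input modes usually accept it
x = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a}

% A summation with explicit limits
\sum_{k=1}^{n} k = \frac{n(n+1)}{2}
```

Typing markup like this in TeX mode and toggling back to the graphical view should render it as a built-up equation.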

        - -

        You can also copy and paste your equations from other sources or applications into MathType 6.9 or vice versa.

        - -

        How to Insert Equations into Other Applications with MathType 6.9?

        - -

        If you have integrated MathType 6.9 with other applications that you use for creating and editing documents, presentations, or web pages, you can insert your equations into these applications directly from MathType 6.9.

        - -

        To insert an equation into another application with MathType 6.9, you need to:

        - -
          -
1. Create or edit your equation in MathType 6.9: You can use the graphical interface or the keyboard shortcut of MathType 6.9 to create or edit your equation.
2. Copy your equation from MathType 6.9: You can use the Copy button in MathType 6.9 or press Ctrl+C on your keyboard to copy your equation.
3. Paste your equation into another application: You can use the Paste button in another application or press Ctrl+V on your keyboard to paste your equation.
        6. -
        - -

        You can also drag and drop your equation from MathType 6.9 into another application.

        - -

        You can also edit your equation in another application by double-clicking on it or right-clicking on it and choosing Edit Equation.

        - -


        Conclusion

        - -

        Password mathtype 6.9 cracked txt file is a text file that contains a serial key or a registration code that can activate MathType 6.9 without paying for it.

        - -

        You can download this file from various torrent sites or file-sharing platforms and use it to unlock MathType 6.9 on your computer.

        - -

        However, using password mathtype 6.9 cracked txt file also comes with some risks and disadvantages that could harm your computer or get you in trouble with the law.

        - -

If you want to use MathType 6.9 without paying for it but also without using password mathtype 6.9 cracked txt file, there are some alternatives that you can try, such as using the trial version, the free version, an online equation editor, open-source software, or buying the license.

        - -

        We hope this article has helped you understand more about password mathtype 6.9 cracked txt file and how to use it safely and ethically.

        3cee63e6c2
        -
        -
        \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/SRS Audio Sandbox 1.10.2.0 Patch[h33t][eSpNs].zip.md b/spaces/diacanFperku/AutoGPT/SRS Audio Sandbox 1.10.2.0 Patch[h33t][eSpNs].zip.md deleted file mode 100644 index 2c0c7494e4b00260356c95b188cf00cee3d4d081..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/SRS Audio Sandbox 1.10.2.0 Patch[h33t][eSpNs].zip.md +++ /dev/null @@ -1,106 +0,0 @@ - -

        SRS Audio Sandbox 1.10.2.0 Patch[h33t][eSpNs].zip: A Review

        -

        If you are looking for a way to enhance the sound quality of your PC, you might want to check out SRS Audio Sandbox 1.10.2.0 Patch[h33t][eSpNs].zip. This is a software that can improve the audio output of any speaker or headphone, and create a surround sound effect for a more immersive experience.

        -

        SRS Audio Sandbox 1.10.2.0 Patch[h33t][eSpNs].zip


        Download Ziphttps://gohhs.com/2uFUCn



        -

        In this article, we will review SRS Audio Sandbox 1.10.2.0 Patch[h33t][eSpNs].zip, and tell you how to download, install and use it.

        - -

        What is SRS Audio Sandbox 1.10.2.0 Patch[h33t][eSpNs].zip?

        -

        SRS Audio Sandbox 1.10.2.0 Patch[h33t][eSpNs].zip is a compressed file that contains the SRS Audio Sandbox software and a patch to activate it.

        -

        SRS Audio Sandbox is a product developed by SRS Labs, Inc., a company that specializes in audio enhancement technologies. SRS Audio Sandbox is designed to work with any media player, browser or application that uses audio on your PC.

        -

        SRS Audio Sandbox can adjust the sound settings according to your preferences, and optimize the audio quality for different types of speakers or headphones. It can also create a virtual surround sound effect that simulates a 3D audio environment.

        -

        -

        Some of the features of SRS Audio Sandbox are:

        -
          -
        • Customizable sound enhancement presets for music, movies, games and voice.
        • -
        • Advanced controls for bass, treble, space and focus.
        • -
        • Speaker output optimization for laptop, desktop or external speakers.
        • -
        • Headphone output optimization for earbuds, over-ear or on-ear headphones.
        • -
        • TruSurround HD technology that creates a realistic surround sound effect from two speakers or headphones.
        • -
        • TruVolume technology that maintains a consistent volume level across different sources and content.
        • -
        - -

        How to download SRS Audio Sandbox 1.10.2.0 Patch[h33t][eSpNs].zip?

        -

        To download SRS Audio Sandbox 1.10.2.0 Patch[h33t][eSpNs].zip, you need to find a reliable source that offers this file. One of the options is to use a torrent site that hosts this file.

        -

        A torrent site is a website that allows users to share files using a peer-to-peer network protocol called BitTorrent. BitTorrent is a method of distributing large amounts of data over the internet without relying on a central server.

        -

        To use BitTorrent, you need to have a torrent client software installed on your PC, such as uTorrent, BitTorrent or Vuze. A torrent client software is an application that can download and upload files using the BitTorrent protocol.

        -

        Once you have a torrent client software installed, you can search for SRS Audio Sandbox 1.10.2.0 Patch[h33t][eSpNs].zip on a torrent site, such as The Pirate Bay, Kickass Torrents or 1337x. You can use the search bar or browse the categories to find the file you want.

        -

        When you find the file you want, you can click on the download link or the magnet link to start downloading it using your torrent client software. A magnet link is a type of link that contains information about the file, such as its name, size and hash value, without requiring an intermediate server.

        -

        The download speed and time may vary depending on the number of seeders and leechers available for the file. Seeders are users who have downloaded the file and are sharing it with others. Leechers are users who are downloading the file but are not sharing it with others.

        - -

        How to install and use SRS Audio Sandbox 1.10.2.0 Patch[h33t][eSpNs].zip?

        -

        After downloading SRS Audio Sandbox 1.10.2.0 Patch[h33t][eSpNs].zip, you need to extract it using a software that can handle ZIP files, such as WinRAR, WinZip or 7-Zip.

        -

        To extract SRS Audio Sandbox 1.10.2.0 Patch[h33t][eSpNs].zip, you need to right-click on the file and select "Extract Here" or "Extract to SRS_Audio_SandBox_1\.10\.2\.0/" from the menu.

        -

        This will create a folder named "SRS_Audio_SandBox_1\.10\.2\.0" that contains two files: "SAS.exe" and "Patch.exe".

        -

        To install SRS Audio Sandbox 1\.10\.2\.0, you need to double-click on "SAS.exe" and follow the instructions on the screen.

        -

        To activate SRS Audio Sandbox 1\.10\.2\.0, you need to double-click on "Patch.exe" and click on "Patch" button.

        -

        This will patch the SRS Audio Sandbox software and make it fully functional.

        -

        To use SRS Audio Sandbox 1\.10\.2\.0, you need to launch it from your desktop shortcut or start menu.

        -

        You will see a user interface that allows you to adjust the sound settings according to your preferences.

        -

        You can choose from different presets for music, movies, games and voice, or customize your own settings using the advanced controls.

        -

        You can also select the type of speaker or headphone output you are using, and enable or disable the TruSurround HD and TruVolume technologies.

        - -

        Conclusion

        -

        SRS Audio Sandbox 1\.10\.2\.0 Patch[h33t][eSpNs].zip is a software that can enhance the sound quality of your PC and create a surround sound effect for any speaker or headphone.

        -

        To download SRS Audio Sandbox 1\.10\.2\.0 Patch[h33t][eSpNs].zip, you need to use a torrent site and a torrent client software.

        -

        To install and use SRS Audio Sandbox 1\.10\.2\.0 Patch[h33t][eSpNs].zip, you need to extract it using a ZIP software and run the SAS.exe and Patch.exe files.

        -

        We hope this article has been helpful for you in reviewing SRS Audio Sandbox 1\.10\.2\.0 Patch[h33t][eSpNs].zip.

        -

        What are the benefits of using SRS Audio Sandbox 1.10.2.0 Patch[h33t][eSpNs].zip?

        -

        Using SRS Audio Sandbox 1.10.2.0 Patch[h33t][eSpNs].zip can bring you many benefits for your PC sound quality and enjoyment.

        -

        Some of the benefits are:

        -
          -
        • You can experience a richer and more natural sound from any speaker or headphone.
        • -
        • You can customize the sound settings to suit your personal taste and mood.
        • -
        • You can enjoy a more immersive and realistic surround sound effect from any stereo source.
        • -
        • You can avoid volume fluctuations and distortions across different sources and content.
        • -
        • You can enhance the clarity and intelligibility of voice and dialogues.
        • -
        - -

        What are the drawbacks of using SRS Audio Sandbox 1.10.2.0 Patch[h33t][eSpNs].zip?

        -

        While SRS Audio Sandbox 1.10.2.0 Patch[h33t][eSpNs].zip is a great software for improving your PC sound quality, it also has some drawbacks that you should be aware of.

        -

        Some of the drawbacks are:

        -
          -
        • You need to download and install a torrent client software and a ZIP software to get the file.
        • -
        • You need to use a patch to activate the software, which may not be legal or safe.
        • -
        • You may encounter compatibility issues with some media players, browsers or applications that use audio on your PC.
        • -
        • You may experience performance issues or crashes if your PC does not meet the minimum system requirements for the software.
        • -
        - -

        Is SRS Audio Sandbox 1.10.2.0 Patch[h33t][eSpNs].zip worth trying?

        -

        SRS Audio Sandbox 1.10.2.0 Patch[h33t][eSpNs].zip is a software that can significantly improve your PC sound quality and create a surround sound effect for any speaker or headphone.

        -

        If you are looking for a way to enhance your PC audio experience, you may want to give it a try.

        -

        However, you should also consider the drawbacks of using this software, such as the need to download and install additional software, the use of a patch, and the potential compatibility and performance issues.

        -

        You should also be careful about the source of the file, and make sure it is safe and virus-free.

        -

        Alternatively, you may want to look for other options that are more legal, safe and reliable, such as official audio enhancement software from reputable developers or manufacturers.

        -

        How to uninstall SRS Audio Sandbox 1.10.2.0 Patch[h33t][eSpNs].zip?

        -

        If you want to uninstall SRS Audio Sandbox 1.10.2.0 Patch[h33t][eSpNs].zip from your PC, you need to follow these steps:

        -
          -
1. Close any media player, browser or application that uses audio on your PC.
2. Go to the Control Panel and select "Programs and Features".
3. Find and select "SRS Audio Sandbox" from the list of installed programs.
4. Click on "Uninstall" and follow the instructions on the screen.
5. Delete the folder "SRS_Audio_SandBox_1.10.2.0" from your PC.
        10. -
        -

        This will remove SRS Audio Sandbox 1.10.2.0 Patch[h33t][eSpNs].zip from your PC completely.

        - -

        What are some alternatives to SRS Audio Sandbox 1.10.2.0 Patch[h33t][eSpNs].zip?

        -

        If you are looking for some alternatives to SRS Audio Sandbox 1.10.2.0 Patch[h33t][eSpNs].zip, you may want to consider these options:

        -
          -
        • Dolby Atmos for PC: This is a software that can create a surround sound effect from any stereo source using Dolby's advanced audio processing technology. It can also optimize the sound quality for different types of speakers or headphones.
        • -
        • Boom 3D for Windows: This is a software that can enhance the sound quality of your PC and create a 3D audio effect using a patented algorithm. It can also adjust the sound settings according to your preferences and mood.
        • -
        • FxSound Enhancer: This is a software that can improve the sound quality of your PC and create a surround sound effect using a dynamic gain boosting technology. It can also customize the sound settings for different genres of music and content.
        • -
        - -


        Conclusion

        -

SRS Audio Sandbox 1.10.2.0 Patch[h33t][eSpNs].zip is software that can enhance the sound quality of your PC and create a surround sound effect for any speaker or headphone.

        -

        It has many features and benefits, but also some drawbacks and risks.

        -

        You need to download and install additional software, use a patch, and deal with potential compatibility and performance issues.

        -

        You may also want to look for other options that are more legal, safe and reliable, such as official audio enhancement software from reputable developers or manufacturers.

        -

We hope this article has been helpful for you in reviewing SRS Audio Sandbox 1.10.2.0 Patch[h33t][eSpNs].zip.

        3cee63e6c2
        -
        -
        \ No newline at end of file diff --git a/spaces/digitalxingtong/Xingtong-Read-Dongmuchang-Bert-VITS2/transcribe_genshin.py b/spaces/digitalxingtong/Xingtong-Read-Dongmuchang-Bert-VITS2/transcribe_genshin.py deleted file mode 100644 index acc98814af6189d129ab85946525bec55419a33f..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Xingtong-Read-Dongmuchang-Bert-VITS2/transcribe_genshin.py +++ /dev/null @@ -1,78 +0,0 @@ -# coding=gbk -import os -import argparse -import librosa -import numpy as np -from multiprocessing import Pool, cpu_count - -import soundfile -from scipy.io import wavfile -from tqdm import tqdm - -global speaker_annos -speaker_annos = [] - -def process(item): - spkdir, wav_name, args = item - speaker = spkdir.replace("\\", "/").split("/")[-1] - wav_path = os.path.join(args.in_dir, speaker, wav_name) - if os.path.exists(wav_path) and '.wav' in wav_path: - os.makedirs(os.path.join(args.out_dir, speaker), exist_ok=True) - wav, sr = librosa.load(wav_path, sr=args.sr) - soundfile.write( - os.path.join(args.out_dir, speaker, wav_name), - wav, - sr - ) - -def process_text(item): - spkdir, wav_name, args = item - speaker = spkdir.replace("\\", "/").split("/")[-1] - wav_path = os.path.join(args.in_dir, speaker, wav_name) - global speaker_annos - tr_name = wav_name.replace('.wav', '') - with open(args.out_dir+'/'+speaker+'/'+tr_name+'.lab', "r", encoding="utf-8") as file: - text = file.read() - text = text.replace("{NICKNAME}",'') - text = text.replace("{M#}{F#}",'') - text = text.replace("{M#}{F#}",'') - substring = "{M#}{F#}" - if substring in text: - if tr_name.endswith("a"): - text = text.replace("{M#}{F#}",'') - if tr_name.endswith("b"): - text = text.replace("{M#}{F#}",'') - text = text.replace("#",'') - text = "ZH|" + text + "\n" # - speaker_annos.append(args.out_dir+'/'+speaker+'/'+wav_name+ "|" + speaker + "|" + text) - - - -if __name__ == "__main__": - parent_dir = "./genshin_dataset/" - speaker_names = list(os.walk(parent_dir))[0][1] - parser = argparse.ArgumentParser() - parser.add_argument("--sr", type=int, default=44100, help="sampling rate") - parser.add_argument("--in_dir", type=str, default="./genshin_dataset", help="path to source dir") - parser.add_argument("--out_dir", type=str, default="./genshin_dataset", help="path to target dir") - args = parser.parse_args() - # processs = 8 - processs = cpu_count()-2 if cpu_count() >4 else 1 - pool = Pool(processes=processs) - - for speaker in os.listdir(args.in_dir): - spk_dir = os.path.join(args.in_dir, speaker) - if os.path.isdir(spk_dir): - print(spk_dir) - for _ in tqdm(pool.imap_unordered(process, [(spk_dir, i, args) for i in os.listdir(spk_dir) if i.endswith("wav")])): - pass - for i in os.listdir(spk_dir): - if i.endswith("wav"): - pro=(spk_dir, i, args) - process_text(pro) - if len(speaker_annos) == 0: - print("transcribe error!!!") - with open("./filelists/short_character_anno.list", 'w', encoding='utf-8') as f: - for line in speaker_annos: - f.write(line) - print("transcript file finished.") diff --git a/spaces/dineshreddy/WALT/mmdet/models/detectors/vfnet.py b/spaces/dineshreddy/WALT/mmdet/models/detectors/vfnet.py deleted file mode 100644 index e23f89674c919921219ffd3486587a2d3c318fbd..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/models/detectors/vfnet.py +++ /dev/null @@ -1,18 +0,0 @@ -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class VFNet(SingleStageDetector): - """Implementation of 
`VarifocalNet - (VFNet).`_""" - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(VFNet, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained) diff --git a/spaces/dolceschokolade/chatbot-mini/types/error.ts b/spaces/dolceschokolade/chatbot-mini/types/error.ts deleted file mode 100644 index 2a4c971d891ecd79e286fc49f9916b5ec4150e23..0000000000000000000000000000000000000000 --- a/spaces/dolceschokolade/chatbot-mini/types/error.ts +++ /dev/null @@ -1,5 +0,0 @@ -export interface ErrorMessage { - code: String | null; - title: String; - messageLines: String[]; -} diff --git a/spaces/doluvor/faster-whisper-webui/src/whisper/whisperContainer.py b/spaces/doluvor/faster-whisper-webui/src/whisper/whisperContainer.py deleted file mode 100644 index 7826d28aa3e6b345febdbd1e6297b4bba9e7fbdc..0000000000000000000000000000000000000000 --- a/spaces/doluvor/faster-whisper-webui/src/whisper/whisperContainer.py +++ /dev/null @@ -1,216 +0,0 @@ -# External programs -import abc -import os -import sys -from typing import List -from urllib.parse import urlparse -import torch -import urllib3 -from src.hooks.progressListener import ProgressListener - -import whisper -from whisper import Whisper - -from src.config import ModelConfig, VadInitialPromptMode -from src.hooks.whisperProgressHook import create_progress_listener_handle - -from src.modelCache import GLOBAL_MODEL_CACHE, ModelCache -from src.prompts.abstractPromptStrategy import AbstractPromptStrategy -from src.utils import download_file -from src.whisper.abstractWhisperContainer import AbstractWhisperCallback, AbstractWhisperContainer - -class WhisperContainer(AbstractWhisperContainer): - def __init__(self, model_name: str, device: str = None, compute_type: str = "float16", - download_root: str = None, - cache: ModelCache = None, models: List[ModelConfig] = []): - if device is None: - device = "cuda" if torch.cuda.is_available() else "cpu" - super().__init__(model_name, device, compute_type, download_root, cache, models) - - def ensure_downloaded(self): - """ - Ensure that the model is downloaded. This is useful if you want to ensure that the model is downloaded before - passing the container to a subprocess. - """ - # Warning: Using private API here - try: - root_dir = self.download_root - model_config = self._get_model_config() - - if root_dir is None: - root_dir = os.path.join(os.path.expanduser("~"), ".cache", "whisper") - - if self.model_name in whisper._MODELS: - whisper._download(whisper._MODELS[self.model_name], root_dir, False) - else: - # If the model is not in the official list, see if it needs to be downloaded - model_config.download_url(root_dir) - return True - - except Exception as e: - # Given that the API is private, it could change at any time. We don't want to crash the program - print("Error pre-downloading model: " + str(e)) - return False - - def _get_model_config(self) -> ModelConfig: - """ - Get the model configuration for the model. 
- """ - for model in self.models: - if model.name == self.model_name: - return model - return None - - def _create_model(self): - print("Loading whisper model " + self.model_name) - model_config = self._get_model_config() - - # Note that the model will not be downloaded in the case of an official Whisper model - model_path = self._get_model_path(model_config, self.download_root) - - return whisper.load_model(model_path, device=self.device, download_root=self.download_root) - - def create_callback(self, language: str = None, task: str = None, - prompt_strategy: AbstractPromptStrategy = None, - **decodeOptions: dict) -> AbstractWhisperCallback: - """ - Create a WhisperCallback object that can be used to transcript audio files. - - Parameters - ---------- - language: str - The target language of the transcription. If not specified, the language will be inferred from the audio content. - task: str - The task - either translate or transcribe. - prompt_strategy: AbstractPromptStrategy - The prompt strategy to use. If not specified, the prompt from Whisper will be used. - decodeOptions: dict - Additional options to pass to the decoder. Must be pickleable. - - Returns - ------- - A WhisperCallback object. - """ - return WhisperCallback(self, language=language, task=task, prompt_strategy=prompt_strategy, **decodeOptions) - - def _get_model_path(self, model_config: ModelConfig, root_dir: str = None): - from src.conversion.hf_converter import convert_hf_whisper - """ - Download the model. - - Parameters - ---------- - model_config: ModelConfig - The model configuration. - """ - # See if path is already set - if model_config.path is not None: - return model_config.path - - if root_dir is None: - root_dir = os.path.join(os.path.expanduser("~"), ".cache", "whisper") - - model_type = model_config.type.lower() if model_config.type is not None else "whisper" - - if model_type in ["huggingface", "hf"]: - model_config.path = model_config.url - destination_target = os.path.join(root_dir, model_config.name + ".pt") - - # Convert from HuggingFace format to Whisper format - if os.path.exists(destination_target): - print(f"File {destination_target} already exists, skipping conversion") - else: - print("Saving HuggingFace model in Whisper format to " + destination_target) - convert_hf_whisper(model_config.url, destination_target) - - model_config.path = destination_target - - elif model_type in ["whisper", "w"]: - model_config.path = model_config.url - - # See if URL is just a file - if model_config.url in whisper._MODELS: - # No need to download anything - Whisper will handle it - model_config.path = model_config.url - elif model_config.url.startswith("file://"): - # Get file path - model_config.path = urlparse(model_config.url).path - # See if it is an URL - elif model_config.url.startswith("http://") or model_config.url.startswith("https://"): - # Extension (or file name) - extension = os.path.splitext(model_config.url)[-1] - download_target = os.path.join(root_dir, model_config.name + extension) - - if os.path.exists(download_target) and not os.path.isfile(download_target): - raise RuntimeError(f"{download_target} exists and is not a regular file") - - if not os.path.isfile(download_target): - download_file(model_config.url, download_target) - else: - print(f"File {download_target} already exists, skipping download") - - model_config.path = download_target - # Must be a local file - else: - model_config.path = model_config.url - - else: - raise ValueError(f"Unknown model type {model_type}") - - return 
model_config.path - -class WhisperCallback(AbstractWhisperCallback): - def __init__(self, model_container: WhisperContainer, language: str = None, task: str = None, - prompt_strategy: AbstractPromptStrategy = None, - **decodeOptions: dict): - self.model_container = model_container - self.language = language - self.task = task - self.prompt_strategy = prompt_strategy - - self.decodeOptions = decodeOptions - - def invoke(self, audio, segment_index: int, prompt: str, detected_language: str, progress_listener: ProgressListener = None): - """ - Peform the transcription of the given audio file or data. - - Parameters - ---------- - audio: Union[str, np.ndarray, torch.Tensor] - The audio file to transcribe, or the audio data as a numpy array or torch tensor. - segment_index: int - The target language of the transcription. If not specified, the language will be inferred from the audio content. - task: str - The task - either translate or transcribe. - progress_listener: ProgressListener - A callback to receive progress updates. - """ - model = self.model_container.get_model() - - if progress_listener is not None: - with create_progress_listener_handle(progress_listener): - return self._transcribe(model, audio, segment_index, prompt, detected_language) - else: - return self._transcribe(model, audio, segment_index, prompt, detected_language) - - def _transcribe(self, model: Whisper, audio, segment_index: int, prompt: str, detected_language: str): - decodeOptions = self.decodeOptions.copy() - - # Add fp16 - if self.model_container.compute_type in ["fp16", "float16"]: - decodeOptions["fp16"] = True - - initial_prompt = self.prompt_strategy.get_segment_prompt(segment_index, prompt, detected_language) \ - if self.prompt_strategy else prompt - - result = model.transcribe(audio, \ - language=self.language if self.language else detected_language, task=self.task, \ - initial_prompt=initial_prompt, \ - **decodeOptions - ) - - # If we have a prompt strategy, we need to increment the current prompt - if self.prompt_strategy: - self.prompt_strategy.on_segment_finished(segment_index, prompt, detected_language, result) - - return result \ No newline at end of file diff --git a/spaces/dreamreyansan/hakurei-waifu-diffusion/app.py b/spaces/dreamreyansan/hakurei-waifu-diffusion/app.py deleted file mode 100644 index ccef706bf3035fe470bf6a4f5bd701b18bf59133..0000000000000000000000000000000000000000 --- a/spaces/dreamreyansan/hakurei-waifu-diffusion/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/hakurei/waifu-diffusion").launch() \ No newline at end of file diff --git a/spaces/duycse1603/math2tex/ScanSSD/IOU_lib/__init__.py b/spaces/duycse1603/math2tex/ScanSSD/IOU_lib/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/elplaguister/Yuuka_TTS/src/mel_processing.py b/spaces/elplaguister/Yuuka_TTS/src/mel_processing.py deleted file mode 100644 index 3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000 --- a/spaces/elplaguister/Yuuka_TTS/src/mel_processing.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: 
compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/en-gin-eer/StableDiffusion-BaseModel-Lora-Graph/README.md b/spaces/en-gin-eer/StableDiffusion-BaseModel-Lora-Graph/README.md deleted file mode 100644 index a7693cf36532fdf4ec002f1c300d19e8c774086f..0000000000000000000000000000000000000000 --- a/spaces/en-gin-eer/StableDiffusion-BaseModel-Lora-Graph/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: StableDiffusion Lora Graph -emoji: 👁 -colorFrom: indigo -colorTo: yellow -sdk: streamlit -sdk_version: 1.25.0 -app_file: app.py -pinned: false 
---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/eskayML/mask_segmentation/README.md b/spaces/eskayML/mask_segmentation/README.md deleted file mode 100644 index 74f39815cc2790abc6f0adc9e359727103efed5b..0000000000000000000000000000000000000000 --- a/spaces/eskayML/mask_segmentation/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Mask Segmentation -emoji: 📈 -colorFrom: green -colorTo: yellow -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/eson/tokenizer-arena/vocab/glm_chinese/utils.py b/spaces/eson/tokenizer-arena/vocab/glm_chinese/utils.py deleted file mode 100644 index 5b6d9e51188eea82d96a1edc60e97fbb895e6f46..0000000000000000000000000000000000000000 --- a/spaces/eson/tokenizer-arena/vocab/glm_chinese/utils.py +++ /dev/null @@ -1,8 +0,0 @@ -import torch - -def print_rank_0(message): - if torch.distributed.is_initialized(): - if torch.distributed.get_rank() == 0: - print(message, flush=True) - else: - print(message, flush=True) \ No newline at end of file diff --git a/spaces/ethan-ai/VideoRetalking/README.md b/spaces/ethan-ai/VideoRetalking/README.md deleted file mode 100644 index 2b015638f6323a0fa790287de1d6dc556109bd5d..0000000000000000000000000000000000000000 --- a/spaces/ethan-ai/VideoRetalking/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: VideoRetalking -emoji: ⚡ -colorFrom: green -colorTo: green -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/falterWliame/Face_Mask_Detection/God Of War 3 Pkg ((TOP)).md b/spaces/falterWliame/Face_Mask_Detection/God Of War 3 Pkg ((TOP)).md deleted file mode 100644 index 7f7362ee3bf1c65e96e13db20f7bc9ddc1648db7..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/God Of War 3 Pkg ((TOP)).md +++ /dev/null @@ -1,112 +0,0 @@ - -

        God of War 3 PKG: A Complete Guide for PS3 and PS4 Gamers

        - -

God of War 3 PKG is the God of War 3 game packaged in the PKG file format, which lets you install and play it on your PS3 and PS4 consoles without using a disc. God of War 3 is an epic action-adventure game that was released in 2010 by Sony Computer Entertainment and developed by SCE Santa Monica Studio. It is the fifth installment in the God of War series and the sequel to God of War 2. The game follows the story of Kratos, a former god of war who seeks revenge against his father Zeus and the other Olympian gods. The game features amazing battles, graphics, and gameplay.

        - -

        In this article, we will explain everything you need to know about God of War 3 PKG, including how to download it, how to install it, and how to play it on your PS3 and PS4 consoles. We will also answer some common questions about the game and its PKG format. By the end of this article, you will be able to enjoy God of War 3 PKG on your consoles with ease.

        -

        god of war 3 pkg


        Download Zip >>>>> https://urlca.com/2uDdCy



        - -

        What is God of War 3?

        - -

        God of War 3 is an action-adventure hack and slash game that follows the story of Kratos, a former god of war who seeks revenge against his father Zeus and the other Olympian gods. The game is set in ancient Greece and features epic battles, stunning graphics, and a variety of weapons and abilities.

        - -

The game is divided into several chapters, each with its own objectives and challenges. The player controls Kratos from a fixed-camera perspective through combat, platforming, and puzzle sequences. The enemies are a variety of Greek mythological creatures, such as centaurs, harpies, cyclopes, satyrs, minotaurs, sirens, cerberuses, and gorgons. The player can also fight against gods and titans, such as Poseidon, Hades, Helios, Hermes, Hercules, Cronos, Gaia, and Zeus.

        - -

        The player can use different weapons and items to attack and defend against enemies. The main weapon is the Blades of Exile, a pair of chained blades that can be swung in various directions. The player can also use secondary weapons, such as the Claws of Hades, the Nemean Cestus, the Nemesis Whip, and the Blade of Olympus. The player can also use items, such as the Bow of Apollo, the Head of Helios, the Boots of Hermes, and the Icarus Wings.

        - -

        The player can also use magic and rage abilities to enhance their combat skills. The magic abilities are based on the elements of fire, ice, lightning, and soul. The rage abilities are based on the power of the gods or titans that Kratos has defeated or allied with. The player can also perform quick-time events to finish off enemies or interact with objects.

        - -

        The game also features puzzles that require logic and timing to solve. Some puzzles involve moving objects or activating mechanisms to progress through the game. Some puzzles involve using musical notes or symbols to unlock doors or secrets. Some puzzles involve using environmental elements or enemy abilities to overcome obstacles.

        - -

        What is God of War 3 PKG?

        - -

PKG is a file format that is used to install games and applications on PS3 and PS4 consoles. PKG files can be downloaded from various sources online and transferred to the console via USB or FTP. Note that installing a game this way requires a jailbroken or exploited console; a stock, unmodified console will not accept this kind of PKG file (see the FAQ below).
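As a rough illustration of how a PKG file differs from an ordinary archive, the sketch below reads the first four bytes of a file and compares them against the magic values commonly reported in community documentation for PS3 and PS4 packages. This is not part of any official tool, and the magic constants are assumptions to verify rather than authoritative values:

```python
# Illustrative sketch: guess whether a file looks like a PS3 or PS4 package
# by its leading "magic" bytes. The constants are the values commonly reported
# in community documentation and are assumptions, not official data.
import sys

PS3_MAGIC = b"\x7fPKG"  # assumed PS3 package header magic
PS4_MAGIC = b"\x7fCNT"  # assumed PS4 package header magic

def identify_pkg(path: str) -> str:
    with open(path, "rb") as f:
        magic = f.read(4)  # only the header is inspected; nothing is installed
    if magic == PS3_MAGIC:
        return "looks like a PS3 PKG"
    if magic == PS4_MAGIC:
        return "looks like a PS4 PKG"
    return "not a recognized PKG header"

if __name__ == "__main__":
    print(identify_pkg(sys.argv[1]))
```

Running it as `python identify_pkg.py somefile.pkg` simply prints which header, if any, the file carries.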

        -

        -

        How to Install God of War 3 PKG on PS3?

        - -

        If you want to install God of War 3 PKG on your PS3 console, you need to follow these simple steps:

        - -
          -
1. Extract the zip file that contains the God of War 3 PKG file using WinRAR or any other extraction software.
2. Copy the God of War 3 PKG file to a USB flash drive or an external hard drive formatted in FAT32.
3. Plug the USB flash drive or external hard drive into your PS3 console.
4. Turn on your PS3 console and make sure it has been jailbroken or exploited with HEN, CFW, or HFW.
5. Go to Package Manager > Install Package Files > Standard Package Location.
6. Select the God of War 3 PKG file from the list and press X to install it.
7. Wait for the installation to complete. This may take some time depending on the size of the PKG file.
8. Once the installation is done, you will see a new icon for God of War 3 on your PS3 home screen.
9. Select the icon and press X to launch the game.
        18. -
        - -
        How to Install God of War 3 PKG on PS4?
        - -

        If you want to install God of War 3 PKG on your PS4 console, you need to follow these simple steps:

        - -
          -
1. Extract the zip file that contains the God of War 3 PKG file using WinRAR or any other extraction software.
2. Rename the God of War 3 PKG file to match your PS4 firmware version. For example, if your PS4 firmware version is 5.05, rename the file to EP9000-CUSA01715_00-0000GODOFWAR2018-A0505-V0100.pkg.
3. Copy the renamed God of War 3 PKG file to a USB flash drive or an external hard drive formatted in exFAT.
4. Plug the USB flash drive or external hard drive into your PS4 console.
5. Turn on your PS4 console and make sure it has been jailbroken with firmware version 5.05 or lower.
6. Go to Settings > Debug Settings > Game > Package Installer.
7. Select the renamed God of War 3 PKG file from the list and press X to install it.
8. Wait for the installation to complete. This may take some time depending on the size of the PKG file.
9. Once the installation is done, you will see a new icon for God of War 3 on your PS4 home screen.
10. Select the icon and press X to launch the game.
        20. -
        - -
        Frequently Asked Questions about God of War 3 PKG
        - -

        Here are some frequently asked questions and answers about God of War 3 PKG:

        - -
          -
        • Q: Do I need a jailbroken or exploited console to play God of War 3 PKG?
        • -
        • A: Yes, you need a jailbroken or exploited console to play God of War 3 PKG. You cannot play it on a regular console without any modifications.
        • -
        • Q: Do I need an internet connection to play God of War 3 PKG?
        • -
        • A: No, you do not need an internet connection to play God of War 3 PKG. You can play it offline without any problems.
        • -
        • Q: Do I need any updates or DLCs to play God of War 3 PKG?
        • -
        • A: No, you do not need any updates or DLCs to play God of War 3 PKG. You can play it with the base version. However, if you want to add more content or features, you can download and install updates and DLCs from various sources online.
        • -

        -

        How to Play God of War 3 PKG on PS3 and PS4?

        - -

        Once you have installed God of War 3 PKG on your PS3 or PS4 console, you can start playing it with the following steps:

        - -
          -
1. Turn on your console and go to the home screen.
2. Find the icon for God of War 3 and select it with the X button.
3. The game will load and show you the main menu.
4. Select the option that says "New Game" or "Continue" depending on whether you want to start a new game or resume a previous one.
5. Select the difficulty level that suits your preference and skill level. You can choose from Easy, Normal, Hard, or Chaos.
6. The game will start and show you the opening cinematic.
7. Enjoy the game and follow the on-screen instructions and prompts.
        14. -
        What are the Features and Benefits of God of War 3 PKG?

        God of War 3 PKG has many features and benefits that make it a great choice for PS3 and PS4 gamers. Here are some of them:

        • It is free and easy to download and install. You do not need to pay anything or use a disc to play God of War 3 PKG. You can download it from various sources online and install it on your console with our guide. You can also update it and add DLCs if you want.
        • It is compatible with both PS3 and PS4 consoles. You can play God of War 3 PKG on any PS3 or PS4 console that has been jailbroken or exploited with HEN, CFW, or HFW. You do not need to worry about the firmware version or the model of your console.
        • It has amazing graphics and gameplay. God of War 3 PKG has stunning graphics that showcase the beauty and brutality of ancient Greece, along with smooth and responsive gameplay that lets you unleash your rage and power against your enemies. It also offers a variety of weapons, items, magic, rage, and quick-time events that make the combat more fun and exciting.
        • It has an epic story and characters. God of War 3 PKG has a captivating story that follows the journey of Kratos as he seeks revenge against his father Zeus and the other Olympian gods. The game features memorable characters with their own personalities, motivations, and roles in the story, and many references to Greek mythology that add depth and richness to the game world.
        What are the Tips and Tricks for God of War 3 PKG?

        If you want to improve your skills and performance in God of War 3 PKG, you can follow these tips and tricks:

        • Use different weapons and items for different situations. Each weapon and item has its own advantages and disadvantages in combat. For example, the Blades of Exile are good for dealing damage to multiple enemies at once, while the Nemean Cestus are good for breaking enemy armor and shields. You should also use items like the Bow of Apollo or the Head of Helios to attack enemies from a distance or to reveal hidden secrets.
        • Use magic and rage wisely. Magic and rage abilities are powerful tools that can help you turn the tide of battle. However, they also consume your magic and rage meters, which can only be replenished by collecting blue and red orbs from enemies or chests. Therefore, you should use them sparingly and only when necessary. You should also try to upgrade your magic and rage abilities as soon as possible to make them more effective.
        • Dodge and block enemy attacks. Dodging and blocking are essential skills that can help you avoid taking damage from enemy attacks. You can dodge by pressing the right analog stick in any direction, which will make Kratos roll away from danger. You can block by holding down the L1 button, which will make Kratos raise his blades in defense. You can also parry by pressing L1 just before an enemy attack hits you, which will make Kratos deflect the attack and create an opening for a counterattack.
        • Explore every area and collect every item. God of War 3 PKG has many hidden areas.

        Conclusion

          God of War 3 PKG is a great way to enjoy one of the best action-adventure games ever made on your PS3 and PS4 consoles. You can download and install it easily with our guide and play it without using a disc. You can also experience amazing graphics, gameplay, and story with this game. We hope this article helped you learn everything you need to know about God of War 3 PKG. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading!

          \ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/Hex-Rays Ida Pro Advanced Edition V6.1.1 PreCracked Keygen ((NEW)).md b/spaces/falterWliame/Face_Mask_Detection/Hex-Rays Ida Pro Advanced Edition V6.1.1 PreCracked Keygen ((NEW)).md deleted file mode 100644 index 2bda902075ce33385b3f411403239a42eba65ccb..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Hex-Rays Ida Pro Advanced Edition V6.1.1 PreCracked Keygen ((NEW)).md +++ /dev/null @@ -1,104 +0,0 @@ -
          -

          Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen: A Comprehensive Guide

          -

          Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen is a cracked version of one of the most powerful and popular reverse engineering tools on the market. It allows you to analyze, debug and modify binary files such as executables, libraries, drivers and firmware. In this article, we will give you an overview of what Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen is, what it can do for you, and how to get it for free.

          -




          -

          What is Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen?

          -

          Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen is a modified version of the original Hex-Rays Ida Pro Advanced Edition v6.1.1, which is a premium reverse engineering tool that costs $1129. The modified version removes the copy protection and allows you to use the tool without paying or registering. The modification was done by a keygen, which is a program that generates valid serial numbers or license keys for software products. The keygen was released by an unknown source.

          -

          What are the features of Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen?

          -

          Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen has the same features as the original Hex-Rays Ida Pro Advanced Edition v6.1.1, which include:

          -
            -
          • A multi-processor disassembler that supports various architectures such as x86, x64, ARM, MIPS, PowerPC and more.
          • A graphical debugger that supports various platforms such as Windows, Linux, Mac OS X and more.
          • A hex editor that allows you to view and edit binary files in hexadecimal or ASCII format.
          • A decompiler that converts binary code into pseudo-code that is easier to understand and modify.
          • A plugin system that allows you to extend the functionality of the tool with custom scripts or modules.
          • A customizable and user-friendly user interface.
          • Comprehensive and helpful documentation.

          How to get Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen for free?

          -

          If you want to get Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen for free, you will need to download it from a website that hosts the file. You will also need a program such as WinRAR or 7-Zip to extract the file. Here are the steps to follow:

          -
            -
          1. Go to a website that has Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen available for download. Some examples are TheOverWeb.com, OpenSea.io and News7Haridwar.com.
          2. -
          3. Search for Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen using the search bar or browse the categories.
          4. -
          5. Select the file that has the most downloads and ratings.
          6. -
          7. Click on the download button or link to start downloading the file.
          8. -
          9. Wait for the download to finish. The file size is about 100 MB.
          10. -
          11. Extract the file using a program such as WinRAR or 7-Zip.
          12. -
          13. Follow the instructions in the readme.txt file to install and activate the tool.
          14. -
          15. Enjoy using Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen for free!
          16. -
          -

          Conclusion

          -

          Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen is a powerful and versatile reverse engineering tool that can help you with analyzing, debugging and modifying binary files. However, it is also a cracked version of a premium product that may have some drawbacks and risks associated with it. Therefore, we do not recommend or endorse using Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen. If you like the tool and want to support the developers, you should buy the original version from their official website.

          -

          -

          How to use Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen with your files?

          -

          Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen can be used to analyze, debug and modify any binary file that you have on your computer or that you download from the internet. To use Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen with your files, you need to follow these steps:

          -
            -
          1. Make sure you have installed and activated Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen on your computer.
          2. -
          3. Launch Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen and select File > Open or drag and drop the file that you want to process into the tool.
          4. -
          5. Wait for the tool to load and analyze the file. You will see a disassembly window that shows the binary code in assembly language, a hex view window that shows the binary code in hexadecimal format, and a decompiler window that shows the binary code in pseudo-code format.
          6. -
          7. Use the tool's features to explore, debug and modify the file as you wish. You can use the toolbar, the menu, the keyboard shortcuts or the right-click menu to access the various features of the tool.
          8. -
          9. Save your changes to the file by selecting File > Save or Save As.
          10. -
          11. Enjoy using Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen with your files!
          12. -
          -

          How to learn more about Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen?

          -

          If you want to learn more about Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen and how to use it effectively, you can use the following resources:

          -
            -
          • The documentation that comes with the tool, which can be accessed by selecting Help > Contents or pressing F1.
          • -
          • The official website of Hex-Rays, which contains information about the tool, its features, its updates and its support.
          • -
          • The official blog of Hex-Rays, which contains news, tips, tricks and tutorials about the tool and reverse engineering in general.
          • -
          • The official forum of Hex-Rays, which contains discussions, questions and answers about the tool and reverse engineering in general.
          • -
          • The official YouTube channel of Hex-Rays, which contains videos and demos about the tool and reverse engineering in general.
          • -
          • The unofficial wiki of Hex-Rays, which contains user-generated content about the tool and reverse engineering in general.
          • -
          -

          How to troubleshoot Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen?

          -

          Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen is a cracked version of a premium product that may not work properly or may cause compatibility issues with your system or other software. If you encounter any problems while using Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen, you can try the following solutions:

          -
            -
          • Make sure you have downloaded and installed Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen from a reliable source and that the file is not corrupted or infected.
          • -
          • Make sure you have followed the instructions in the readme.txt file to install and activate Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen correctly.
          • -
          • Make sure you have the latest version of Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen and that it is compatible with your operating system and the rest of your software.
          • -
          • Make sure you have enough disk space, memory and CPU power to run Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen smoothly.
          • -
          • Make sure you have closed or disabled any other programs or processes that may interfere with Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen.
          • -
          • Make sure you have updated your drivers, libraries and frameworks that may affect Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen.
          • -
          • Make sure you have backed up your files and settings before using Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen and that you have a restore point or a recovery option in case something goes wrong.
          • -
          -

          How to uninstall Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen?

          -

          If you want to uninstall Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen from your computer, you can follow these steps:

          -
            -
          1. Close Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen and any other programs that may use it.
          2. -
          3. Go to the folder where you have installed Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen and delete it.
          4. -
          5. Go to the folder where you have extracted the keygen and delete it.
          6. -
          7. Go to the folder where you have saved your files and settings related to Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen and delete them.
          8. -
          9. Go to the Start menu and select Control Panel > Programs > Programs and Features.
          10. -
          11. Find Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen in the list of installed programs and click on Uninstall.
          12. -
          13. Follow the instructions to complete the uninstallation process.
          14. -
          15. Restart your computer to remove any traces of Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen.
          16. -
          -

          How to compare Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen with other reverse engineering tools?

          -

          Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen is one of the most powerful and popular reverse engineering tools on the market, but it is not the only one. There are other tools that can perform similar or different tasks related to reverse engineering and cracking software. Some of the most common ones are:

          -
            -
          • Ghidra: A free and open source reverse engineering tool developed by the National Security Agency (NSA). It has a graphical user interface, a disassembler, a decompiler, a debugger and a plugin system.
          • OllyDbg: A free and widely used debugger for Windows applications. It has a graphical user interface, a disassembler, a memory viewer, a code analyzer and a plugin system.
          • Radare2: A free and open source reverse engineering framework that can be used as a command-line tool or through a graphical user interface. It has a disassembler, a debugger, a hex editor, a decompiler and a plugin system.
          • Hopper: A commercial reverse engineering tool for Mac OS X and Linux applications. It has a graphical user interface, a disassembler, a decompiler, a hex editor and a plugin system.

          To compare Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen with other reverse engineering tools, you can use the following criteria:

          -
            -
          • The price: Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen is free, but it is illegal and unethical to use it. The original version costs $1129. Ghidra and Radare2 are free and open source. OllyDbg is free but closed source. Hopper costs $99.
          • -
          • The features: Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen has over 100 features that cover all aspects of reverse engineering and cracking software. Ghidra, Radare2 and Hopper have similar features but may lack some of the advanced or unique ones that Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen has. OllyDbg has fewer features but is more focused on debugging.
          • -
          • The compatibility: Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen supports various architectures and platforms such as x86, x64, ARM, MIPS, PowerPC, Windows, Linux, Mac OS X and more. Ghidra and Radare2 also support various architectures and platforms but may have some limitations or bugs. OllyDbg only supports x86 and Windows applications. Hopper only supports Mac OS X and Linux applications.
          • -
          • The usability: Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen has a customizable and user-friendly user interface that can be used by beginners and experts alike. Ghidra and Hopper also have graphical user interfaces that are easy to use but may not be as customizable as Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen's one. Radare2 has a command-line interface that can be more powerful but also more difficult to use for some users. OllyDbg has a graphical user interface that is simple but also outdated.
          • -
          -

          How to stay safe while using Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen?

          -

          Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen is a cracked version of a premium product that may expose you to viruses, malware and legal issues while using it. Therefore, you should be careful and take some precautions while using Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen:

          -
            -
          • Do not download Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen from untrusted sources or websites that may contain malicious files or links.
          • -
          • Do not open or run Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen without scanning it with an antivirus or anti-malware program first.
          • -
          • Do not use Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen on your main computer or on a computer that contains sensitive or personal data.
          • -
          • Do not use Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen on a computer that is connected to the internet or to a network that may be monitored or traced.
          • -
          • Do not use Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen to crack or modify software that belongs to someone else or that may violate their rights or laws.
          • -
          • Do not share Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen with anyone else or upload it to any online platform that may expose you or others to risks.
          • -
          -

          Conclusion

          -

          Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen is a powerful and versatile reverse engineering tool that can help you with analyzing, debugging and modifying binary files. However, it is also a cracked version of a premium product that may have some drawbacks and risks associated with it. Therefore, we do not recommend or endorse using Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen. If you like the tool and want to support the developers, you should buy the original version from their official website. Alternatively, you can also listen to Hex-Rays Ida Pro Advanced Edition v6.1.1 PreCracked Keygen on SoundCloud or buy and sell it as an NFT on OpenSea if you are interested in these platforms.

          \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/AutoCAD 2016 Crack Download - 64 Bit Edition - No Survey Required.md b/spaces/fatiXbelha/sd/AutoCAD 2016 Crack Download - 64 Bit Edition - No Survey Required.md deleted file mode 100644 index 1dd346bc4276cbc6e04ae1f322e73624d166e932..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/AutoCAD 2016 Crack Download - 64 Bit Edition - No Survey Required.md +++ /dev/null @@ -1,163 +0,0 @@ -
          -

          Download Crack AutoCAD 2016 64 Bit Free: Is It Worth It?

          -

          AutoCAD is one of the most popular and powerful CAD software in the world, used by millions of professionals and students for designing, drafting, modeling, and engineering. The latest version of AutoCAD, released in 2015, is AutoCAD 2016, which offers many new features and enhancements to improve productivity and creativity. However, AutoCAD 2016 is not cheap, and many users may be tempted to download a crack version of it for free. But is it worth it? In this article, we will explore what AutoCAD 2016 is, what a crack is, how it works, what are the risks and disadvantages of using a crack, and what are the benefits of using genuine AutoCAD 2016. By the end of this article, you will be able to make an informed decision on whether to download crack AutoCAD 2016 64 bit free or not.

          -




          -

          What is AutoCAD 2016 and what are its features?

          -

          AutoCAD 2016 is the latest version of the industry-leading CAD software developed by Autodesk. It is used for creating precise 2D and 3D drawings using specialized toolsets and over 750,000 intelligent objects. AutoCAD 2016 can be used for various purposes, such as architecture, engineering, construction, manufacturing, design, animation, and more. It can also integrate with other Autodesk products, such as Revit, Inventor, Maya, and Fusion 360.

          -

          AutoCAD 2016 overview

          -

          AutoCAD 2016 has a sleek and versatile user interface that allows users to customize their workspace according to their preferences. It also has a new graphics engine that improves the display quality and performance of drawings. AutoCAD 2016 supports both Windows and Mac operating systems, as well as cloud and mobile platforms. Users can access their files from anywhere using Autodesk A360 cloud service or Autodesk AutoCAD mobile app.

          -

          AutoCAD 2016 new features

          -

          AutoCAD 2016 introduces many new features and improvements that enhance the functionality and usability of the software. Some of the most notable ones are:

          -


          -
            -
          • Smart Dimensioning: This feature allows users to create accurate dimensions automatically based on the type of object they select. Users can also preview the dimensions before placing them and adjust them easily.
          • Enhanced PDFs: This feature allows users to create high-quality PDF files from their drawings that include hyperlinks, bookmarks, searchable text, and layers. Users can also import PDF files into AutoCAD as underlays or reference them in their drawings.
          • Revision Clouds: This feature allows users to create revision clouds more easily and edit them more flexibly. Users can also change the shape, size, and style of revision clouds.
          • Xref Enhancements: This feature allows users to manage external references more efficiently and control their properties more precisely. Users can also override the properties of xrefs by layer.
          • System Variable Monitor: This feature allows users to monitor the changes in system variables that affect their drawing environment. Users can also restore the system variables to their preferred values.
          • Mtext Enhancements: This feature allows users to create multiline text more easily and format it more quickly. Users can also wrap text around objects or along paths.
          • Reality Computing: This feature allows users to work with point clouds more effectively and integrate them with their drawings. Users can also attach point clouds from reality capture software, such as Autodesk ReCap.

          AutoCAD 2016 system requirements

          -

          To run AutoCAD 2016 smoothly and efficiently, users need to have a computer that meets the minimum or recommended system requirements. The following table shows the system requirements for AutoCAD 2016 for Windows and Mac operating systems.

          Operating System: Windows 10/8.1/7 (64-bit)
          • Minimum requirements: Intel Pentium 4 or AMD Athlon 64 processor; 2 GB RAM; 6 GB free disk space; 1360 x 768 display resolution with True Color; Windows display adapter capable of DirectX 9 or higher; .NET Framework 4.6
          • Recommended requirements: Intel Core i5 or higher processor; 8 GB RAM or more; 6 GB free disk space with at least 4 GB for installation; 1920 x 1080 display resolution with True Color; Windows display adapter capable of DirectX 11 or higher; .NET Framework 4.6 or later

          Operating System: Mac OS X 10.10/10.11/10.12 (64-bit)
          • Minimum requirements: Apple Mac Pro, MacBook Pro, iMac, or Mac mini with Intel Core i5 or higher processor; 3 GB RAM; 3 GB free disk space; 1280 x 800 display resolution with True Color (2880 x 1800 with Retina Display recommended); Apple Mouse, Apple Magic Mouse, Magic Trackpad, MacBook Pro trackpad, or Microsoft-compliant mouse
          • Recommended requirements: Apple Mac Pro, MacBook Pro, iMac, or Mac mini with Intel Core i7 or higher processor; 8 GB RAM or more; 3 GB free disk space with at least 2 GB for installation; 2880 x 1800 display resolution with Retina Display; Apple Mouse, Apple Magic Mouse, Magic Trackpad, MacBook Pro trackpad, or Microsoft-compliant mouse
          -

          What is a crack and how does it work?

          -

          A crack is a software program that modifies or bypasses the security features of another software program, such as a license key, activation code, or digital signature. A crack is usually created by hackers or crackers who want to use the software without paying for it or following its terms and conditions. A crack can also be used to remove unwanted features or restrictions from a software program, such as ads, trial periods, or watermarks.

          -

          Definition and types of cracks

          -

          A crack can be defined as a type of patch that alters the original code of a software program to make it work differently. There are different types of cracks, such as:

          -
            -
          • Keygen: This is a crack that generates a valid license key or serial number for a software program. A keygen can be used to activate the software without contacting the vendor or using an online verification system.
          • -
          • Patch: This is a crack that modifies the executable file of a software program to change its behavior. A patch can be used to remove the license verification process, disable the online update feature, or unlock some hidden features.
          • -
          • Loader: This is a crack that replaces the original executable file of a software program with a modified one. A loader can be used to run the software without installing it, bypassing the installation process and the license agreement.
          • -
          • DLL: This is a crack that replaces one or more dynamic link library files of a software program with modified ones. A DLL can be used to alter the functionality of the software or fix some bugs or errors.
          • -
          • NFO: This is a crack that provides information about the software program and how to use the crack. An NFO can be viewed using a text editor or a special viewer program.
          • -
          • RAR: This is a crack that compresses one or more files of a software program into a single archive file. A RAR can be used to reduce the size of the software and make it easier to download and share.
          • -
          • Torrent: This is a crack that distributes one or more files of a software program through a peer-to-peer network. A torrent can be used to download the software faster and from multiple sources.
          • -
          -

          How to use a crack to activate AutoCAD 2016

          -

          To use a crack to activate AutoCAD 2016, users need to follow these steps:

          -
            -
          1. Download the crack file from a reliable source, such as a torrent site or a file-sharing platform. Make sure the crack file is compatible with the version and architecture of AutoCAD 2016 (64 bit).
          2. -
          3. Extract the crack file using a compression tool, such as WinRAR or 7-Zip. Read the NFO file for instructions and information about the crack.
          4. -
          5. Disable the antivirus software and the firewall on the computer, as they may interfere with the crack or detect it as malware.
          6. -
          7. Install AutoCAD 2016 on the computer, using the trial version or a fake license key. Do not run the software after installation.
          8. -
          9. Copy and paste the crack file into the installation folder of AutoCAD 2016, usually located at C:\Program Files\Autodesk\AutoCAD 2016. Replace the original file if prompted.
          10. -
          11. Run the crack file as administrator and follow the instructions on the screen. The crack file may generate a new license key, patch the executable file, or load a modified version of AutoCAD 2016.
          12. -
          13. Restart the computer and run AutoCAD 2016. The software should be activated and ready to use.
          14. -
          -

          Risks and disadvantages of using a crack

          -

          While using a crack may seem like an easy and cheap way to get AutoCAD 2016, it also comes with many risks and disadvantages that users should be aware of. Some of them are:

          -
            -
          • Legal issues: Using a crack is illegal and violates the intellectual property rights of Autodesk. Users who use a crack may face legal actions, such as fines, lawsuits, or criminal charges, from Autodesk or other authorities.
          • -
          • Malware infection: Using a crack may expose the computer to malware, such as viruses, trojans, worms, spyware, or ransomware. These malware can damage the computer, steal personal information, encrypt files, or demand money.
          • -
          • Poor performance: Using a crack may affect the performance and stability of AutoCAD 2016 and the computer. The crack may cause errors, crashes, freezes, or glitches in the software or the system.
          • -
          • Limited functionality: Using a crack may limit the functionality and features of AutoCAD 2016. The crack may disable some features, such as online updates, cloud services, collaboration tools, or customization options.
          • -
          • No support: Using a crack may prevent users from getting support and assistance from Autodesk or other sources. Users who use a crack may not be able to access online help, forums, tutorials, or customer service.
          • -
          -

          What are the benefits of using genuine AutoCAD 2016?

          -

          On the other hand, using genuine AutoCAD 2016 has many benefits that outweigh the costs and inconveniences of buying it. Some of these benefits are:

          -
            -
          • Better performance and stability: Using genuine AutoCAD 2016 ensures that users get the best performance and stability from the software and the computer. Users can enjoy faster and smoother operations without any errors or interruptions.
          • -
          • More security and protection: Using genuine AutoCAD 2016 protects users from malware and other threats that may harm their computer or data. Users can also benefit from Autodesk's security features, such as encryption, authentication, or backup.
          • -
          • More flexibility and affordability: Using genuine AutoCAD 2016 gives users more flexibility and affordability in choosing how to use and pay for the software. Users can choose from different subscription plans, such as monthly, yearly, or multi-year plans, that suit their needs and budget. Users can also switch between different versions or products of Autodesk without losing their data or settings.
          • -
          • More support and updates: Using genuine AutoCAD 2016 enables users to get support and updates from Autodesk and other sources. Users can access online help, forums, tutorials, customer service, and feedback channels. Users can also receive regular updates that improve the functionality and usability of AutoCAD 2016.
          • -
          -

          Conclusion

          -

          In conclusion, downloading crack AutoCAD 2016 64 bit free may seem like a tempting option for users who want to save money or avoid hassle. However, it also comes with many risks and disadvantages that outweigh the benefits. Users who use a crack may face legal issues, malware infection, poor performance, limited functionality, and no support. On the other hand, using genuine AutoCAD 2016 has many benefits that justify the investment. Users who use genuine AutoCAD 2016 can enjoy better performance and stability, more security and protection, more flexibility and affordability, and more support and updates. Therefore, we recommend users to buy genuine AutoCAD 2016 from Autodesk or its authorized resellers and avoid downloading crack AutoCAD 2016 64 bit free.

          -

          FAQs

          -

          Here are some frequently asked questions about downloading crack AutoCAD 2016 64 bit free:

          -
            -
          1. Q: How can I tell if my AutoCAD 2016 is genuine or cracked?
          2. -
          3. A: One way to tell if your AutoCAD 2016 is genuine or cracked is to check the license information in the software. To do this, go to Help > About AutoCAD > Product Information > Manage License. If your AutoCAD 2016 is genuine, you will see your subscription details and expiration date. If your AutoCAD 2016 is cracked, you will see a message saying "Your license is invalid" or "Your license has expired". Another way to tell if your AutoCAD 2016 is genuine or cracked is to check the file properties of the executable file. To do this, right-click on the file and select Properties > Details. If your AutoCAD 2016 is genuine, you will see the file version, product name, product version, and company name as Autodesk. If your AutoCAD 2016 is cracked, you will see different or missing information.
          4. -
          5. Q: How can I get genuine AutoCAD 2016 for free or at a lower price?
          6. -
          7. A: There are some ways to get genuine AutoCAD 2016 for free or at a lower price legally and ethically. Some of them are:
          8. -
              -
            • Educational license: If you are a student or an educator, you can get a free educational license of AutoCAD 2016 from Autodesk for up to three years. To do this, you need to register with your academic email address and verify your eligibility.
            • -
            • Trial version: If you want to try out AutoCAD 2016 before buying it, you can download a free trial version of it from Autodesk for up to 30 days. To do this, you need to create an Autodesk account and provide some basic information.
            • -
            • Promotional offer: If you want to buy AutoCAD 2016 at a lower price, you can look for promotional offers from Autodesk or its authorized resellers. These offers may include discounts, coupons, bundles, or trade-ins.
            • -
            -
          9. Q: How can I update my AutoCAD 2016 to the latest version?
          10. -
          11. A: If you have a genuine AutoCAD 2016 subscription, you can update your software to the latest version automatically or manually. To update your software automatically, go to Application Menu > Options > System > General Options and check the box for "Automatically check for updates". To update your software manually, go to Application Menu > Check for Updates and follow the instructions on the screen.
          12. -
          13. Q: How can I contact Autodesk for support or feedback?
          14. -
          15. A: If you have a genuine AutoCAD 2016 subscription, you can contact Autodesk for support or feedback through various channels. Some of them are:
          16. -
              -
            • Online help: You can access online help by pressing F1 on your keyboard or clicking on the question mark icon on the top right corner of the software. You can also visit the Autodesk Knowledge Network website for more resources and articles.
            • -
            • Forums: You can join the Autodesk Community forums and interact with other users and experts. You can ask questions, share tips, give feedback, or report issues.
            • -
            • Customer service: You can contact Autodesk customer service by phone, email, chat, or web form. You can also request a callback or schedule an appointment.
            • -
            -
          17. Q: How can I uninstall AutoCAD 2016 from my computer?
          18. -
          19. A: To uninstall AutoCAD 2016 from your computer, follow these steps:
          20. -
              -
            1. Close any running instances of AutoCAD 2016 or other Autodesk products.
            2. -
            3. Go to Control Panel > Programs > Programs and Features and select AutoCAD 2016 from the list of installed programs.
            4. -
            5. Click on Uninstall/Change and follow the instructions on the screen.
            6. -
            7. Restart your computer and delete any remaining files or folders related to AutoCAD 2016.
            8. -
            -
          -

          I hope this article has helped you understand the pros and cons of downloading crack AutoCAD 2016 64 bit free and the benefits of using genuine AutoCAD 2016. If you have any questions or comments, please feel free to leave them below. Thank you for reading and happy designing!

          \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Crafting and Building APK The Best Building Game of 2023.md b/spaces/fatiXbelha/sd/Crafting and Building APK The Best Building Game of 2023.md deleted file mode 100644 index ab2c6bfadf7361144375484c7eef9768192df0b3..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Crafting and Building APK The Best Building Game of 2023.md +++ /dev/null @@ -1,97 +0,0 @@ - -

          Crafting and Building APK Latest Version: A Review

          -

          If you are looking for a fun and creative game that lets you build anything you want, then you might want to check out Crafting and Building, a free building game for Android devices. In this article, we will review the latest version of the game, its features, how to download and install it, and its pros and cons.

          -

          crafting and building apk latest version


          Download Ziphttps://urllie.com/2uNGrl



          -

          What is Crafting and Building?

          -

          Crafting and Building is a new free building game that was released in 2020 by GeneRe, a developer based in Turkey. The game is inspired by popular sandbox games like Minecraft, Terraria, and Roblox, but it has its own unique style and gameplay.

          -

          A free building game for the whole family

          -

          Crafting and Building is a game that can be enjoyed by anyone, regardless of age or gender. The game is suitable for kids, boys, girls, and adults who love to create and explore. The game has a simple and intuitive interface that makes it easy to learn and play. You can start building your own house, castle, or mine in minutes, or you can decorate your house with furniture, paintings, and plants. You can also craft various items, such as tools, weapons, armor, food, and potions.

          -

          A sandbox game with exploration, multiplayer, and customization

          -

          Crafting and Building is not just a building game, but also a sandbox game that offers endless possibilities. You can explore a vast world that is randomly generated every time you start a new game. You can discover different biomes, such as forests, deserts, mountains, oceans, caves, and dungeons. You can also encounter various animals, such as dogs, cats, horses, cows, sheep, pigs, chickens, fish, and even dragons. You can tame some of them as pets or ride them as mounts.

          -

          -

          Another feature of Crafting and Building is the multiplayer mode, which allows you to play with your friends online. You can join or create a server where you can chat, cooperate, or compete with other players. You can visit their worlds or invite them to yours. You can also play mini-games together, such as parkour, racing, or PvP battles.

          -

          Finally, Crafting and Building lets you customize your character's appearance. You can choose from different skins or create your own using the skin editor. You can also change your clothes, hair style, eyes color, and accessories.

          -

          What's new in the latest version?

          -

          The latest version of Crafting and Building is 2.5.19.85 , which was updated on April 19th , 2023 . The update brings some new features and improvements , as well as bug fixes and performance enhancements . Here are some of the highlights:

          -

          New features and improvements

          -
            -

            Bug fixes and performance enhancements

            -

            The latest version of Crafting and Building also fixes some bugs and improves the performance of the game. Some of the bug fixes include:

            -
            • Fixed the issue of the game crashing on some devices
            • Fixed the issue of the game not loading properly on some devices
            • Fixed the issue of the game freezing or lagging on some devices
            • Fixed the issue of the game not saving the progress on some devices
            • Fixed the issue of the game not connecting to the server on some devices
            -

            Some of the performance enhancements include:

            -
            • Optimized the game's memory usage and battery consumption
            • Optimized the game's loading speed and network stability
            • Optimized the game's graphics quality and resolution
            • Optimized the game's user interface and controls

            How to download and install Crafting and Building APK?

            -

            If you want to try out the latest version of Crafting and Building, you will need to download and install the APK file on your Android device. An APK file is a package that contains all the files and data needed to run an app. However, you will not find the APK file on the Google Play Store, as the game is not officially available there. Instead, you will need to download it from a trusted source, such as APKCombo or APKPure. Here are the steps to download and install Crafting and Building APK:

            -

            Download from a trusted source

            -

            The first step is to find a reliable website that offers the Crafting and Building APK file. You can use your browser to search for it, or you can use the links provided below. Make sure that the website is secure and has positive reviews from other users. Avoid downloading from unknown or suspicious sources, as they may contain malware or viruses that can harm your device.

            -

            Once you find a trustworthy website, look for the download button or link for the Crafting and Building APK file. It should be the latest version, which is 2.5.19.85 as of this writing. Tap on the download button or link, and wait for the file to be downloaded to your device. You may need to grant some permissions or accept some terms and conditions before the download starts.

            -

            Enable unknown sources on your device

            -

            The next step is to enable unknown sources on your device, which will allow you to install apps from sources other than the Google Play Store. To do this, go to your device's settings, and look for the security or privacy option. Tap on it, and find the option that says unknown sources or allow installation of apps from unknown sources. Toggle it on, and confirm your choice if prompted.

            -

            Note that this option may vary depending on your device model and Android version. You may also need to enable it for each app that you want to install from an unknown source.

            -

            Install the APK file and enjoy

            -

            The final step is to install the Crafting and Building APK file that you downloaded earlier. To do this, go to your device's file manager, and locate the file in your downloads folder. Tap on it, and follow the instructions on the screen to complete the installation process. You may need to grant some permissions or accept some terms and conditions before the installation starts.

            -

            Once the installation is done, you can launch the game from your app drawer or home screen. You can now enjoy playing Crafting and Building with its latest features and improvements.

            Pros and cons of Crafting and Building APK

            -

            Crafting and Building APK is a great game for building enthusiasts, but it also has some drawbacks that you should be aware of. Here are some of the pros and cons of the game:

            -

            Pros

            -
              • Fun and creative gameplay: Crafting and Building APK offers fun and creative gameplay that lets you build anything you want, from houses and castles to mines and dungeons. You can also craft various items, such as tools, weapons, armor, food, and potions. You can explore a vast world that is randomly generated every time you start a new game, discover different biomes, such as forests, deserts, mountains, oceans, caves, and dungeons, and encounter various animals, such as dogs, cats, horses, cows, sheep, pigs, chickens, fish, and even dragons. You can tame some of them as pets or ride them as mounts.
              • Cool graphics and sound effects: Crafting and Building APK has a realistic and colorful 3D environment that will immerse you in the world of crafting and building. The game also has cool sound effects and music that will enhance your gameplay experience, along with wonderful HD graphics and animations that will make you feel like you are in a real world.
              • Free and offline mode available: Crafting and Building APK is a free game that does not require any registration or subscription, so you can download and play it without spending any money. The game also has an offline mode that allows you to play without an internet connection, anytime and anywhere you want.
            -

            Cons

            -
              • Frequent ads and pop-ups: Crafting and Building APK is a free game, but it also comes with frequent ads and pop-ups that may interrupt your gameplay. The ads may appear on the screen or in the notification bar, and the pop-ups may ask you to rate the game, watch a video, or download another app, which can be annoying or distracting for some players.
              • No cross-platform play or cloud save: Crafting and Building APK is only available for Android devices. You cannot play the game on other platforms, such as iOS or Windows, and you cannot sync your progress across different devices or accounts. You will lose your progress if you uninstall the game or change your device.
              • Some features require in-app purchases: For example, you need to buy coins and gems to unlock some skins, clothes, accessories, or items, and to remove ads or pop-ups. The in-app purchases may be expensive or unnecessary for some players.

            Conclusion

            -

            Crafting and Building APK is a great game for building enthusiasts who want to unleash their creativity and imagination. The game offers a fun and creative gameplay that lets you build anything you want, from houses and castles to mines and dungeons. You can also craft various items, such as tools, weapons, armor, food, and potions. You can explore a vast world that is randomly generated every time you start a new game. You can discover different biomes, such as forests, deserts, mountains, oceans, caves, and dungeons. You can also encounter various animals, such as dogs, cats, horses, cows, sheep, pigs, chickens, fish, and even dragons. You can tame some of them as pets or ride them as mounts.

            -

            The game also has a multiplayer mode that allows you to play with your friends online. You can join or create a server where you can chat, cooperate, or compete with other players. You can also play mini-games together, such as parkour, racing, or PvP battles. The game also lets you customize your character's appearance. You can choose from different skins or create your own using the skin editor. You can also change your clothes, hair style, eyes color, and accessories.

            -

            However, the game also has some drawbacks that you should be aware of. The game has frequent ads and pop-ups that may interrupt your gameplay. The game is only available for Android devices and does not support cross-platform play or cloud save. The game also requires in-app purchases to unlock some features or remove ads.

            -

            Overall, Crafting and Building APK is a recommended download for Android users who love building games. The game is free and has an offline mode, so you can enjoy it anytime and anywhere you want. It has wonderful HD graphics and animations that make the world feel real, a simple and intuitive interface that makes it easy to learn and play, and fun, creative gameplay that offers endless possibilities.

            -

            FAQs

            -

            Is Crafting and Building safe to download?

            -

            Crafting and Building is safe to download if you download it from a trusted source, such as APKCombo or APKPure. These websites scan the APK files for viruses and malware before uploading them. However, you should always be careful when downloading apps from unknown sources, as they may contain harmful or unwanted content. You should also check the permissions and reviews of the app before installing it.

            -

            Is Crafting and Building compatible with my device?

            -

            Crafting and Building is compatible with most Android devices that run on Android 4.1 or higher. However, some devices may not support the game due to hardware limitations or software issues. You can check the compatibility of the game with your device by visiting the website of the APK file or by reading the description of the app.

            -

            How can I play with my friends online?

            -

            To play with your friends online, you need to have an internet connection and join or create a server. You can join an existing server by entering its name or IP address, or you can create your own server by tapping on the create button. You can also invite your friends to your server by sharing its name or IP address. Once you are in a server, you can chat, cooperate, or compete with other players. You can also play mini-games together, such as parkour, racing, or PvP battles.

            -

            How can I change my character's skin?

            -

            To change your character's skin, you need to tap on the menu button on the top left corner of the screen, and then tap on the skin button. You can choose from different skins that are available in the game, or you can create your own skin using the skin editor. You can also change your clothes, hair style, eye color, and accessories. To apply your changes, you need to tap on the save button.

            -

            How can I get more coins and gems?

            -

            To get more coins and gems, you need to watch ads or make in-app purchases. Coins and gems are used to unlock some features or items in the game, such as skins, clothes, accessories, or items. You can watch ads by tapping on the watch button on the top right corner of the screen. You can make in-app purchases by tapping on the shop button on the top right corner of the screen.

            -
            -
            \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download 247 MP3 - The Most Reliable and Secure Music Downloader.md b/spaces/fatiXbelha/sd/Download 247 MP3 - The Most Reliable and Secure Music Downloader.md deleted file mode 100644 index 3fed5a8624c1d4a402526153ce80ccfadb4ecf4f..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download 247 MP3 - The Most Reliable and Secure Music Downloader.md +++ /dev/null @@ -1,166 +0,0 @@ -
            -

            Download 247 MP3: How to Enjoy Free Music Anytime, Anywhere

            -

            Music is one of the most popular forms of entertainment and expression in the world. Whether you want to relax, energize, or inspire yourself, music can help you achieve your mood and goals. But how can you enjoy music without spending a lot of money or time? The answer is simple: download MP3 files online.

            -

            MP3 is a digital audio format that compresses sound data without losing much quality. It is widely used and supported by various devices and platforms, such as computers, smartphones, tablets, music players, and more. By downloading MP3 files online, you can access millions of songs for free or at a low cost, and listen to them offline whenever and wherever you want.

            -

            download 247 mp3


            Download File ✫✫✫ https://urllie.com/2uNIcd



            -

            But where can you find and download MP3 files online? There are many websites and apps that offer this service, but not all of them are reliable, safe, or easy to use. In this article, we will introduce you to one of the best websites for downloading MP3 files online: Download 247 MP3. We will also show you some other ways to download MP3 files online, and give you some tips and tricks for downloading and managing your MP3 files.

            -

            What is Download 247 MP3?

            -

            A brief introduction to the website and its features

            -

            Download 247 MP3 is a website that allows you to download MP3 files from YouTube videos for free. You can use it to convert any YouTube video into an MP3 file with high quality up to 320 kbps. You can also choose from different formats, such as M4A, AAC, FLAC, OGG, WMA, WAV, and more.

            -

            Download 247 MP3 has many features that make it stand out from other similar websites. Some of these features are:

            -
              -
            • It is easy to use: just one click and the video is ready for download.
            • -
            • It is fast and responsive, it can process your request in seconds.
            • -
            • It is unlimited and free, you can download as many videos as you want without any restrictions or fees.
            • -
            • It is compatible with all devices and browsers, you can use it on any device that has an internet connection and a web browser.
            • -
            • It is safe and secure, it does not require any registration or installation, and it does not store or share your personal information or data.
            • -
            -

            How to use Download 247 MP3 to convert YouTube videos

            -

            Using Download 247 MP3 to convert YouTube videos into MP3 files is very simple and straightforward. Here are the steps that you need to follow:

            -

            -
              -
            1. Go to the YouTube website or app and find the video that you want to download as an MP3 file.
            2. -
            3. Copy the URL (web address) of the video from the address bar or the share button.
            4. -
            5. Go to the Download 247 MP3 website (https://www.download247mp3.com/) and paste the URL into the search box.
            6. -
            7. Click on the "Convert" button and wait for a few seconds while the website converts the video into an MP3 file.
            8. -
            9. Choose the quality and format that you want for your MP3 file from the list of options.
            10. -
            11. Click on the "Download" button and save the MP3 file to your device or cloud storage.
            12. -
            -

            Congratulations, you have successfully downloaded an MP3 file from a YouTube video using Download 247 MP3. You can now enjoy listening to your favorite music offline anytime, anywhere.

            -

            Benefits of using Download 247 MP3

            -

            There are many benefits of using Download 247 MP3 to download MP3 files from YouTube videos. Some of these benefits are:

            -
              -
            • You can access a huge library of music from YouTube, which has millions of songs and videos from various genres, artists, and languages.
            • -
            • You can save money and time by downloading MP3 files for free instead of buying or streaming them online.
            • -
            • You can listen to your MP3 files offline without any internet connection or data usage.
            • -
            • You can customize your MP3 files according to your preferences, such as quality, format, name, and tags.
            • -
            • You can share your MP3 files with your friends and family via email, social media, or Bluetooth.
            • -
            -

            Other ways to download MP3 files online

            -

            Download 247 MP3 is not the only way to download MP3 files online. There are other websites and apps that offer similar or different services for downloading MP3 files online. Here are some of the most popular and reliable ones:

            -

            Best MP3 Converter: A fast and easy YouTube to MP3 converter

            -

            Features and advantages of Best MP3 Converter

            -

            Best MP3 Converter is another website that allows you to download MP3 files from YouTube videos for free. It has some features and advantages that make it different from Download 247 MP3. Some of these features and advantages are:

            -
              -
            • It is faster and easier than Download 247 MP3; it can convert and download a video in less than a minute.
            • -
            • It supports more video sources than Download 247 MP3, such as Facebook, Instagram, Vimeo, Dailymotion, and more.
            • -
            • It has more options for quality and format than Download 247 MP3, such as 64 kbps, 128 kbps, 192 kbps, 256 kbps, and 320 kbps for quality, and MP4, AVI, MKV, MOV, WMV, FLV, WEBM, and more for format.
            • -
            -

            How to use Best MP3 Converter to download MP3 files

            -

            Using Best MP3 Converter to download MP3 files from YouTube videos is also very simple and straightforward. Here are the steps that you need to follow:

            -
              -
            1. Go to the YouTube website or app and find the video that you want to download as an MP3 file.
            2. -
            3. Copy the URL (web address) of the video from the address bar or the share button.
            4. -
            5. Go to the Best MP3 Converter website (https://www.bestmpconverter.com/) and paste the URL into the search box.
            6. -
            7. Click on the "Convert" button and wait for a few seconds while the website converts the video into an MP3 file.
            8. -
            9. Choose the quality and format that you want for your MP3 file from the list of options.
            10. -
            11. Click on the "Download" button and save the MP3 file to your device or cloud storage.
            12. -
            -

            That's it, you have successfully downloaded an MP3 file from a YouTube video using Best MP3 Converter. You can now enjoy listening to your favorite music offline anytime, anywhere.

            -

            Shazam: A music discovery and recognition app

            -

            Features and advantages of Shazam

            -

            Shazam is not a website, but an app that allows you to discover and recognize music that is playing around you. It can also help you download MP3 files from the songs that you identify. It has some features and advantages that make it different from Download 247 MP3 and Best MP3 Converter. Some of these features and advantages are:

            -
              -
            • It is more than just a YouTube to MP3 converter; it is a music discovery and recognition app that can help you find new songs and artists that match your taste.
            • -
            • It can identify any song that is playing around you in seconds, even if it is in a noisy environment or has no lyrics.
            • -
            • It can show you the lyrics, artist, album, genre, and other information about the song that you identify.
            • -
            • It can connect you to various streaming services, such as Spotify, Apple Music, YouTube Music, Deezer, and more, where you can listen to or download the song that you identify.
            • -
            • It can create personalized playlists for you based on your Shazams and preferences.
            • -
            -

            How to use Shazam to identify and download songs

            -

            Using Shazam to identify and download songs is also very simple and straightforward. Here are the steps that you need to follow:

            -
              -
            1. Download and install the Shazam app on your device from the App Store or Google Play Store.
            2. -
            3. Open the app and tap on the Shazam button when you hear a song that you want to identify.
            4. -
            5. Wait for a few seconds while the app listens to and recognizes the song.
            6. -
            7. See the song information on your screen, such as title, artist, album, genre, lyrics, and more.
            8. -
            9. Tap on the streaming service icon that you prefer, such as Spotify, Apple Music, YouTube Music, Deezer, and more.
            10. -
            11. Listen to or download the song from the streaming service of your choice.
            12. -
            -

            Voila, you have successfully identified and downloaded a song using Shazam. You can now enjoy listening to your favorite music offline anytime, anywhere.

            -

            Tips and tricks for downloading and managing MP3 files

            -

            How to choose the best quality and format for your MP3 files

            -

            When downloading MP3 files online, you may encounter different options for quality and format. Quality refers to the bit rate or the amount of data that is encoded in each second of audio. Format refers to the type of file extension or container that holds the audio data. Both quality and format affect the size and sound of your MP3 files. Here are some tips on how to choose the best quality and format for your MP3 files:

            -
              -
            • The higher the quality, the better the sound, but also the larger the size. For example, a 320 kbps MP3 file will sound better than a 128 kbps MP3 file, but it will also take up more space on your device or cloud storage. Therefore, you should balance quality against size according to your needs and preferences (a rough size estimate is sketched after this list).
            • -
            • The most common format for MP3 files is .mp3, which is supported by most devices and platforms. However, there are other formats that may offer better sound quality or compatibility with certain devices or platforms. For example, .m4a is compatible with Apple devices and iTunes, .flac offers lossless compression without sacrificing sound quality, and .ogg is compatible with Linux systems and Firefox browsers. Therefore, you should choose the format that suits your device or platform best.
            • -
            -
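            As a rough illustration of the quality-versus-size trade-off above, the short Python sketch below estimates how large an MP3 of a given length becomes at different bit rates, and optionally reads the real bit rate of a file you already downloaded. This is only a minimal example: it assumes Python with the third-party mutagen library installed, and the file name "my_song.mp3" is just a placeholder, not something provided by any of the websites in this article.

```python
# Rough size estimate: bit rate (kbps) x duration (s) / 8 gives kilobytes.
# The inspection part is optional and requires: pip install mutagen
from mutagen.mp3 import MP3


def estimate_size_mb(bitrate_kbps: int, duration_seconds: int) -> float:
    """Approximate MP3 file size in megabytes for a given bit rate and length."""
    kilobits = bitrate_kbps * duration_seconds
    return kilobits / 8 / 1024  # kilobits -> kilobytes -> megabytes


# A typical 4-minute (240-second) song at common bit rates:
for kbps in (128, 192, 256, 320):
    print(f"{kbps} kbps is roughly {estimate_size_mb(kbps, 240):.1f} MB")

# Inspect the real bit rate and length of a file you already downloaded.
# ("my_song.mp3" is only a placeholder path.)
audio = MP3("my_song.mp3")
print(f"bit rate: {audio.info.bitrate // 1000} kbps, length: {audio.info.length:.0f} s")
```

            As the numbers show, a four-minute track is roughly 4 MB at 128 kbps but close to 10 MB at 320 kbps, which is why the quality setting matters most on devices with limited storage.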

            How to organize and edit your MP3 files

            -

            After downloading MP3 files online, you may want to organize and edit them according to your preferences. Organizing your MP3 files means sorting them into folders or playlists based on criteria such as genre, artist, album, mood, or occasion. Editing your MP3 files means changing their name, tags, metadata, or artwork, such as adding, removing, or modifying them. Organizing and editing your MP3 files can help you manage them more easily and efficiently, and enhance your listening experience. Here are some tips on how to organize and edit your MP3 files:

            -
              -
            • Use a file manager app or software to create folders or playlists for your MP3 files based on criteria such as genre, artist, album, mood, or occasion. For example, you can create a folder or playlist for rock music, another one for pop music, another one for workout music, and so on. You can also create subfolders or subplaylists within each folder or playlist for more specific categories. For example, you can create a subfolder or subplaylist for rock music from the 80s, another one for rock music from the 90s, and so on.
            • -
            • Use an MP3 tag editor app or software to change the name, tags, metadata, or artwork of your MP3 files. For example, you can use Mp3tag (https://www.mp3tag.de/en/) to edit the title, artist, album, genre, year, track number, comment, lyrics, cover art, and more of your MP3 files. You can also use Audacity (https://www.audacityteam.org/) to edit the audio quality, volume, speed, pitch, fade in/out, and more of your MP3 files. If you are comfortable with a little scripting, the sketch after this list shows the same idea applied to a whole folder at once.
            • -
            -
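            For readers who prefer to organize a whole folder in one go, here is a minimal, illustrative Python sketch that reads each file's genre tag, fills in a missing title, and moves the file into a matching sub-folder. It assumes Python with the mutagen library installed; the "Downloads/MP3" folder and the tag values are placeholders you would replace with your own.

```python
# Sort MP3 files into per-genre folders and fill in a missing title tag.
# Requires: pip install mutagen   (the folder path below is a placeholder)
from pathlib import Path
import shutil

from mutagen.easyid3 import EasyID3

music_dir = Path("Downloads/MP3")

for mp3_path in sorted(music_dir.glob("*.mp3")):
    try:
        tags = EasyID3(str(mp3_path))
    except Exception:
        # The file has no ID3 tag header yet; skip it rather than guess.
        continue

    # Read the genre tag, falling back to "Unknown" when it is missing.
    genre = tags.get("genre", ["Unknown"])[0]

    # Example of editing a tag: use the file name as the title if none is set.
    if not tags.get("title"):
        tags["title"] = [mp3_path.stem]
        tags.save()

    # Move the file into a folder named after its genre.
    target_dir = music_dir / genre
    target_dir.mkdir(exist_ok=True)
    shutil.move(str(mp3_path), str(target_dir / mp3_path.name))
```

            Keeping the folder structure tied to the tags this way means a music player that groups by genre and a plain file manager will show the same organization.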

            How to transfer and play your MP3 files on different devices

            -

            Once you have downloaded, organized, and edited your MP3 files, you may want to transfer and play them on different devices. Transferring your MP3 files means moving them from one device to another, such as from your computer to your smartphone, or from your smartphone to your music player. Playing your MP3 files means listening to them on different devices using different apps or software. Transferring and playing your MP3 files on different devices can help you enjoy your music more conveniently and flexibly. Here are some tips on how to transfer and play your MP3 files on different devices:

            -
              -
            • Use a USB cable or a wireless connection to transfer your MP3 files from one device to another. For example, you can use a USB cable to connect your computer and your smartphone, and then copy and paste your MP3 files from one device to another. You can also use a wireless connection such as Bluetooth or Wi-Fi to send and receive your MP3 files from one device to another.
            • -
            • Use a cloud storage service or an online music platform to store and stream your MP3 files from different devices. For example, you can use Dropbox (https://www.dropbox.com/) to upload your MP3 files to the cloud and access them from any device that has an internet connection and a web browser. You can also use SoundCloud (https://soundcloud.com/) to upload your MP3 files to the online music platform and stream them from any device that has an internet connection and the SoundCloud app.
            • -
            • Use a compatible app or software to play your MP3 files on different devices. For example, you can use VLC (https://www.vlc.org/) to play your MP3 files on any device that supports the VLC app or software, such as Windows, Mac, Linux, Android, iOS, and more. You can also use Spotify (https://www.spotify.com/) to play your MP3 files on any device that supports the Spotify app or software, such as Windows, Mac, Linux, Android, iOS, and more.
            • -
            -

            Conclusion

            -

            Downloading MP3 files online is a great way to enjoy free music anytime, anywhere. In this article, we have introduced you to one of the best websites for downloading MP3 files online: Download 247 MP3. We have also shown you some other ways to download MP3 files online, and given you some tips and tricks for downloading and managing your MP3 files. We hope that this article has been helpful and informative for you. Now, you can start downloading your favorite songs and listen to them offline whenever and wherever you want.

            -

            FAQs

            -

            Here are some frequently asked questions and answers about downloading MP3 files online:

            -
              -
            1. Is downloading MP3 files online legal?
            2. -

              Downloading MP3 files online is legal as long as you have the permission or the license from the original owner or the source of the music. However, downloading MP3 files online without the permission or the license from the original owner or the source of the music may be illegal and may violate the copyright laws in your country or region. Therefore, you should always check the terms and conditions of the website or the app that you use to download MP3 files online before doing so.

              -
            3. Is downloading MP3 files online safe?
            4. -

              Downloading MP3 files online is safe as long as you use a reliable, secure, and reputable website or app that does not contain any malware, viruses, spyware, or other harmful elements. However, downloading MP3 files online from an unreliable, insecure, or unknown website or app may be unsafe and may harm your device or data. Therefore, you should always scan the website or the app that you use to download MP3 files online with an antivirus or anti-malware software before doing so.

              -
            5. What are the best websites or apps for downloading MP3 files online?
            6. -

              The best websites or apps for downloading MP3 files online depend on your needs and preferences. However, some of the most popular and reliable ones are Download 247 MP3, Best MP3 Converter, Shazam, Dropbox, SoundCloud, VLC, and Spotify. You can use any of these websites or apps to download MP3 files online according to your criteria such as quality, format, speed, ease of use, compatibility, and more.

              -
            7. How can I download MP3 files online faster?
            8. -

              You can download MP3 files online faster by using a high-speed internet connection and a fast and responsive website or app. You can also download MP3 files online faster by choosing a lower quality or format for your MP3 files. However, this may affect the sound quality of your MP3 files. Therefore, you should balance between speed and quality according to your needs and preferences.

              -
            9. How can I download MP3 files online for free?
            10. -

              You can download MP3 files online for free by using a website or an app that does not charge any fees or require any registration or installation. For example, you can use Download 247 MP3 or Best MP3 Converter to download MP3 files from YouTube videos for free. You can also use Shazam to identify and download songs from various streaming services for free. However, you may need to pay for some premium features or subscriptions on some websites or apps.

              -

            -
            -
            \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Blackthorne The 16-bit Action Game that Started Blizzards Legacy.md b/spaces/fatiXbelha/sd/Download Blackthorne The 16-bit Action Game that Started Blizzards Legacy.md deleted file mode 100644 index fb022c332ae3c3c1d8db2cab29c0161ee6b17986..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Blackthorne The 16-bit Action Game that Started Blizzards Legacy.md +++ /dev/null @@ -1,81 +0,0 @@ - -

            Download Blackthorne: A Classic Platform Game by Blizzard

            -

            If you are a fan of platform games, you might have heard of Blackthorne, a cinematic platform game developed by Blizzard Entertainment in 1994. It is one of the earliest games by the famous studio, and it has a cult following among retro gamers. In this article, we will tell you what Blackthorne is, how to download it for free, and why you should play it.

            -

            download blackthorne


            Download File ✦✦✦ https://urllie.com/2uNAUb



            -

            What is Blackthorne?

            -

            Blackthorne is a platform game that combines action, puzzle, and stealth elements. It was released for the Super NES and MS-DOS in 1994, and later for the Sega 32X, Classic Mac OS, Game Boy Advance, Microsoft Windows, Xbox One, PlayStation 4, and Nintendo Switch. The cover art for the SNES version was drawn by Jim Lee, a famous comic book artist.

            -

            The story and setting of Blackthorne

            -

            Blackthorne is set on the planet Tuul, which has existed for centuries without human knowledge. Tuul's people have been ruled by a single shaman who had all knowledge. Years before the game begins, the shaman dies and splits his power into two stones, light and dark. He gives one to each of his sons, who form two kingdoms: Androth and Ka'dra'suul. The people of Androth respect their stone, while the people of Ka'dra'suul reject theirs and become corrupted by it. A ka'dra named Sarlac seizes power and invades Androth.

            -

            The king of Androth, Vlaros, sends his son Kyle to Earth with the lightstone to save his life. Kyle grows up to be a military captain and mercenary. Twenty years later, he is contacted by Galadril, an Androthi magician, who tells him to return to Tuul and save his people. Kyle sets out to kill Sarlac and reclaim his throne.

            -

            The gameplay and features of Blackthorne

            -

            Blackthorne is a side-scrolling platform game that requires the player to control Kyle through various levels, avoiding traps, solving puzzles, and fighting enemies. Kyle can run, jump, climb, duck, and shoot with his shotgun. He can also hide in the shadows to avoid detection or ambush foes. He can interact with friendly characters who may give him items or information.

            -

            -

            The game has four different areas: Androth, Ka'dra'suul's mines, a desert wasteland, and Sarlac's stronghold. Each area has different enemies, obstacles, and secrets. The game also has different difficulty levels that affect the number of lives, enemies, and items available.

            -

            The legacy and reception of Blackthorne

            -

            Blackthorne was well received by critics and players when it was released. It was praised for its graphics, sound, animation, atmosphere, and gameplay. It was compared to other cinematic platform games like Prince of Persia and Flashback. It won several awards, such as Best Action/Adventure Game from Electronic Gaming Monthly and Best Platform Game from Computer Gaming World.

            -

            Blackthorne is considered one of the classic games by Blizzard Entertainment, along with Warcraft, Diablo, StarCraft, and Overwatch. It showcases some of the early talents of the studio, such as Frank Pearce Jr., Patrick Wyatt, Michael Morhaime (PC), Samwise Didier (SNES), Micky Neilson (story), Glenn Stafford (music), etc.

            Conclusion

            -

            Blackthorne is a classic platform game by Blizzard Entertainment that you can download for free and enjoy. It is a cinematic platform game that has a dark atmosphere, a challenging and varied gameplay, and a beautiful art and music. It is one of the earliest games by Blizzard, and it shows their talent and creativity. If you are looking for a platform game that will keep you entertained and engaged, you should give Blackthorne a try.

            -

            FAQs

            -

            Here are some of the frequently asked questions about Blackthorne:

            -
              -
            • Q: How long is Blackthorne?
            • -
            • A: Blackthorne has 17 levels in total, and it takes about 4 to 6 hours to complete the game, depending on your skill and difficulty level.
            • -
            • Q: Is Blackthorne related to Warcraft?
            • -
            • A: No, Blackthorne is not related to Warcraft, although they are both made by Blizzard Entertainment. However, there are some references and easter eggs to Blackthorne in Warcraft games, such as Kyle's appearance as an NPC in World of Warcraft.
            • -
            • Q: Is Blackthorne available for mobile devices?
            • -
            • A: No, Blackthorne is not available for mobile devices, although there are some unofficial ports and emulators that might work. However, we do not recommend downloading or using them, as they might be illegal or unsafe.
            • -
            • Q: Is there a sequel or remake of Blackthorne?
            • -
            • A: No, there is no official sequel or remake of Blackthorne, although there are some fan-made projects and mods that attempt to recreate or improve the game. However, we do not endorse or support them, as they might infringe Blizzard's intellectual property rights.
            • -
            • Q: Where can I find more information about Blackthorne?
            • -
            A: You can find more information about Blackthorne on the official Blizzard website, the Wikipedia page, or the fan wiki. You can also watch some gameplay videos or reviews on YouTube or Twitch.
            • -

            -
            -
            \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download M Word The Best Word Processing Software for Your Needs.md b/spaces/fatiXbelha/sd/Download M Word The Best Word Processing Software for Your Needs.md deleted file mode 100644 index 78f5946ea9df807e4fd584f6f6a0546d871dd420..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download M Word The Best Word Processing Software for Your Needs.md +++ /dev/null @@ -1,202 +0,0 @@ - -

            How to Download Microsoft Word

            -

            Microsoft Word is one of the most popular and powerful word processing software in the world. It allows you to create, edit, and share documents with ease and professionalism. Whether you need to write a letter, a resume, a report, or a book, Microsoft Word can help you with its rich features and templates.

            -

            download m word


            Download File ✺✺✺ https://urllie.com/2uNAp9



            -

            But how can you download Microsoft Word on your device? In this article, we will show you three ways to download Microsoft Word: for free, with a subscription, or as a standalone product. Follow these steps and start using Microsoft Word today.

            -

            What is Microsoft Word?

            -

            Microsoft Word is part of the Microsoft Office suite of productivity software. It was first released in 1983 and has since evolved into a versatile and user-friendly application. With Microsoft Word, you can:

            -
              -
            • Create documents from scratch or use pre-made templates
            • -
            • Format text, images, tables, charts, and other elements
            • -
            • Check spelling, grammar, and readability with Microsoft Editor
            • -
            • Collaborate with others in real time with comments and co-authoring
            • -
            • Save your documents to OneDrive and access them from any device
            • -
            • Export your documents as PDFs, web pages, or other formats
            • -
            -

            Microsoft Word is available for Windows, Mac, iOS, Android, and web browsers. You can download it in different ways depending on your needs and preferences.

            -

            How to Download Microsoft Word for Free

            -

            If you want to use Microsoft Word for free, you can download the web version from the internet. The web version lets you use most of the basic features of Microsoft Word online. However, it does not have some of the advanced features and options that the desktop version has. Here's how to download Microsoft Word for free:

            -

            Sign in to your Microsoft account

            -

            To use Microsoft Word online, you need to have a Microsoft account. A Microsoft account is a free account that gives you access to various online services from Microsoft. If you already have a Microsoft account, you can sign in with your email address and password. If you don't have one, you can create one for free by following these steps:

            -


            -
              -
            1. Go to https://account.microsoft.com/account
            2. -
            3. Click on Create a new account
            4. -
            5. Enter your email address or phone number and create a password
            6. -
            7. Verify your identity with a code sent to your email address or phone number
            8. -
            9. Agree to the terms and conditions and privacy policy
            10. -
            -

            Congratulations, you have created your Microsoft account. You can now sign in to use Microsoft Word online.

            -

            Go to the Microsoft Word website

            -

            Once you have signed in to your Microsoft account, you can go to the official Microsoft Word website to use the web version. Here's how:

            -
              -
            1. Go to https://www.microsoft.com/en-us/microsoft-365/word
            2. -
            3. Click on the Free option at the top right corner
            4. -
            5. You will be redirected to https://office.live.com/start/Word.aspx, where you can see the Microsoft Word online interface
            6. -
            -

            You have successfully accessed the Microsoft Word website. You can now start using Microsoft Word online.

            -

            Start using Microsoft Word online

            -

            Microsoft Word online lets you create, edit, and share documents with ease and convenience. You can use most of the features and functions that you would find in the desktop version, such as formatting, inserting, reviewing, and sharing. Here are some tips on how to use Microsoft Word online:

            -
              -
            • To create a new document, click on the New blank document button or choose from the templates available
            • -
            • To open an existing document, click on the Open button and browse your OneDrive files or upload a file from your device
            • -
            • To save your document, click on the File menu and select Save as. You can save your document to OneDrive or download it to your device
            • -
            • To edit your document, use the ribbon tabs and buttons to format text, insert images, tables, charts, and other elements, check spelling and grammar, and more
            • -
            • To share your document, click on the Share button and choose how you want to share it. You can invite people to view or edit your document, copy a link to your document, or send your document as an attachment or a PDF
            • -
            -

            You have learned how to use Microsoft Word online. You can enjoy the benefits of using Microsoft Word for free without installing anything on your device.

            -

            How to Download Microsoft Word with a Subscription

            -

            If you want to use Microsoft Word with more features and options, you can download the desktop version with a subscription. The desktop version lets you use all the advanced features and functions of Microsoft Word offline and online. However, it requires you to pay a monthly or annual fee for a Microsoft 365 subscription. Here's how to download Microsoft Word with a subscription:

            -

            Choose a Microsoft 365 plan

            -

            Microsoft 365 is a subscription service that gives you access to various online services and applications from Microsoft, including Microsoft Word. There are different plans available for different needs and budgets. You can compare and select the best plan for you by following these steps:

            -
              -
            1. Go to https://www.microsoft.com/en-us/microsoft-365/buy/compare-all-microsoft-365-products
            2. -
            3. Scroll down and see the different plans available for home, business, education, and enterprise users
            4. -
            5. Click on the plan that suits your needs and budget. You can see the details of what's included in each plan, such as the number of devices, users, storage space, apps, and features
            6. -
            7. If you are not sure which plan to choose, you can click on the Help me choose button and answer some questions to get a personalized recommendation
            8. -
            -

            You have chosen a Microsoft 365 plan that meets your requirements. You can now proceed to purchase and activate your subscription.

            -

            Purchase and activate your subscription

            -

            To use Microsoft Word with a subscription, you need to purchase and activate your Microsoft 365 subscription. You can do this online or offline depending on your preference. Here's how:

            -

            Purchase and activate your subscription online

            -
              -
            1. Go to https://www.microsoft.com/en-us/microsoft-365/buy/microsoft-365-family?market=us&rtc=1&activetab=pivot%3aoverviewtab
            2. -
            3. Select the plan that you have chosen in the previous step and click on Buy now
            4. -
            5. Sign in with your Microsoft account or create one if you don't have one already
            6. -
            7. Enter your payment details and confirm your purchase
            8. -
            9. You will receive an email confirmation with a link to activate your subscription
            10. -
            11. Click on the link and follow the instructions to activate your subscription
            12. -
            -

            You have successfully purchased and activated your Microsoft 365 subscription online. You can now download and install Microsoft Word on your device.

            -

            Purchase and activate your subscription offline

            -
              -
            1. Go to a retail store that sells Microsoft 365 subscriptions and buy the plan that you have chosen in the previous step
            2. -
            3. You will receive a product key card with a 25-digit code that you can use to activate your subscription
            4. -
            5. Go to https://setup.office.com/ and sign in with your Microsoft account or create one if you don't have one already
            6. -
            7. Enter the 25-digit code from the product key card and follow the instructions to activate your subscription
            8. -
            -

            You have successfully purchased and activated your Microsoft 365 subscription offline. You can now download and install Microsoft Word on your device.

            -

            Download and install Microsoft Word on your device

            -

            Once you have activated your Microsoft 365 subscription, you can download and install the desktop app of Microsoft Word on your PC or Mac. Here's how:

            -
              -
            1. Go to https://www.office.com/ and sign in with your Microsoft account
            2. -
            3. Click on the Install Office button at the top right corner and select Office 365 apps
            4. -
            5. A setup file will be downloaded to your device. Run the file and follow the instructions to install Microsoft Word on your device
            6. -
            7. Once the installation is complete, you can launch Microsoft Word from the Start menu on Windows or the Applications folder on Mac
            8. -
            -

            You have successfully downloaded and installed Microsoft Word on your device. You can now use all the features and functions of Microsoft Word offline and online.

            -

            How to Download Microsoft Word as a Standalone Product

            -

            If you want to use Microsoft Word without a subscription, you can download it as a standalone product. The standalone product lets you use Microsoft Word as a one-time purchase without any recurring fees. However, it does not have some of the latest features and updates that the subscription version has. Here's how to download Microsoft Word as a standalone product:

            -

            Choose a Microsoft Office product

            -

            Microsoft Office is a suite of productivity software that includes Microsoft Word and other applications, such as Excel, PowerPoint, Outlook, and more. There are different products available for different needs and budgets. You can compare and select the best product for you by following these steps:

            -
              -
            1. Go to https://www.microsoft.com/en-us/microsoft-365/buy/compare-all-microsoft-365-products?market=us&activetab=tab%3aprimaryr2
            2. -
            3. Scroll down and see the different products available for home, business, education, and enterprise users
            4. -
            5. Click on the product that suits your needs and budget. You can see the details of what's included in each product, such as the number of devices, users, storage space, apps, and features
            6. -
            7. If you are not sure which product to choose, you can click on the Help me choose button and answer some questions to get a personalized recommendation
            8. -
            -

            You have chosen a Microsoft Office product that meets your requirements. You can now proceed to purchase and activate your product.

            -

            Purchase and activate your product

            -

            To use Microsoft Word as a standalone product, you need to purchase and activate your Microsoft Office product. You can do this online or offline depending on your preference. Here's how:

            -

            Purchase and activate your product online

            -
              -
            1. Go to https://www.microsoft.com/en-us/microsoft-365/buy/microsoft-365-family?market=us&rtc=1&activetab=pivot%3aoverviewtab
            2. -
            3. Select the product that you have chosen in the previous step and click on Buy now
            4. -
            5. Sign in with your Microsoft account or create one if you don't have one already
            6. -
            7. Enter your payment details and confirm your purchase
            8. -
            9. You will receive an email confirmation with a link to activate your product
            10. -
            11. Click on the link and follow the instructions to activate your product
            12. -
            -

            You have successfully purchased and activated your Microsoft Office product online. You can now download and install Microsoft Word on your device.

            -

            Purchase and activate your product offline

            -
              -
            1. Go to a retail store that sells Microsoft Office products and buy the product that you have chosen in the previous step
            2. -
            3. You will receive a product key card with a 25-digit code that you can use to activate your product
            4. -
            5. Go to https://setup.office.com/ and sign in with your Microsoft account or create one if you don't have one already
            6. -
            7. Enter the 25-digit code from the product key card and follow the instructions to activate your product
            8. -
            -

            You have successfully purchased and activated your Microsoft Office product offline. You can now download and install Microsoft Word on your device.

            -

            Download and install Microsoft Word on your device

            -

            Once you have activated your Microsoft Office product, you can download and install the desktop app of Microsoft Word on your PC or Mac. Here's how:

            -
              -
            1. Go to https://www.office.com/ and sign in with your Microsoft account
            2. -
            3. Click on the Install Office button at the top right corner and select the product that you have purchased
            4. -
            5. A setup file will be downloaded to your device. Run the file and follow the instructions to install Microsoft Word on your device
            6. -
            7. Once the installation is complete, you can launch Microsoft Word from the Start menu on Windows or the Applications folder on Mac
            8. -
            -

            You have successfully downloaded and installed Microsoft Word on your device. You can now use Microsoft Word as a standalone product without a subscription.

            -

            Conclusion

            -

            In this article, we have shown you three ways to download Microsoft Word: for free, with a subscription, or as a standalone product. Each way has its own advantages and disadvantages depending on your needs and preferences. You can choose the best way for you by comparing the features, benefits, and costs of each option.

            -

            Microsoft Word is a powerful and versatile word processing software that can help you create, edit, and share documents with ease and professionalism. Whether you need it for personal, academic, or professional purposes, Microsoft Word can meet your expectations and requirements. Download Microsoft Word today and start creating amazing documents.

            -

            Frequently Asked Questions

            -

            Here are some of the most common questions that people ask about downloading Microsoft Word:

            -

            Q: Can I use Microsoft Word on my phone or tablet?

            -

            A: Yes, you can use Microsoft Word on your phone or tablet by downloading the mobile app from the App Store or Google Play Store. The mobile app lets you use most of the features of Microsoft Word on your device. However, some features may not be available or may require a subscription to use.

            -

            Q: Can I use Microsoft Word offline?

            -

            A: Yes, you can use Microsoft Word offline if you have downloaded the desktop app or the mobile app on your device. You can create, edit, and save documents offline without an internet connection. However, some features may not work or may require an internet connection to use, such as sharing, syncing, or updating.

            -

            Q: Can I use Microsoft Word with other applications?

            -

            A: Yes, you can use Microsoft Word with other applications that are compatible with it. For example, you can use Microsoft Word with Excel, PowerPoint, Outlook, OneNote, Teams, and more. You can also use Microsoft Word with third-party applications that support it, such as Google Drive, Dropbox, Adobe Acrobat, Grammarly, and more.

            -

            Q: How can I update Microsoft Word?

            -

            A: You can update Microsoft Word by following these steps:

            -
              -
            1. If you have downloaded Microsoft Word with a subscription, you can update it automatically or manually through the Office app or the Microsoft Store app on your device
            2. -
            3. If you have downloaded Microsoft Word as a standalone product, you can update it manually through the File menu or the Help menu on your device
            4. -
            5. If you are using Microsoft Word online, you don't need to update it as it is always up to date
            6. -
            -

            Q: How can I get help with Microsoft Word?

            -

            A: You can get help with Microsoft Word by following these steps:

            -
              -
            1. If you need help with using Microsoft Word, you can use the Help menu or the Tell me what you want to do box on your device. You can also visit https://support.microsoft.com/en-us/office/word-help-center-bd88c3f-8f4d-9bf8-2337-6799f44d2e5a for online tutorials, videos, and articles
            2. -
            3. If you need help with troubleshooting Microsoft Word, you can use the Support menu or the Contact Support button on your device. You can also visit https://support.microsoft.com/en-us/contactus/ for online chat, phone, or email support
            4. -
            5. If you need help with feedback or suggestions for Microsoft Word, you can use the Feedback menu or the Send a Smile or Send a Frown button on your device. You can also visit https://word.uservoice.com/ for online forums, surveys, and polls
            6. -
            -

            You have learned how to get help with Microsoft Word. You can use these resources to improve your skills and experience with Microsoft Word.

            -
            -
            \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download QuickShortcutMaker and Customize Your Android Home Screen.md b/spaces/fatiXbelha/sd/Download QuickShortcutMaker and Customize Your Android Home Screen.md deleted file mode 100644 index 9a67b5901e6bbe1fb61838c2dbe10e124219adf4..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download QuickShortcutMaker and Customize Your Android Home Screen.md +++ /dev/null @@ -1,120 +0,0 @@ - -

            Download QuickShortcutMaker: A Simple and Useful App for Android Users

            -

            Have you ever wished you could access your favorite apps and activities on your Android device with just one tap? Do you want to customize your home screen and make it more personalized? If you answered yes to any of these questions, then you should download QuickShortcutMaker, a simple and useful app that lets you create shortcuts for apps and activities on your device.

            -

            QuickShortcutMaker is an app that allows you to create shortcuts to any app or activity that is installed on your device. You can choose from a list of activities that are available on your device, or search for them by name. You can also customize the name and icon of the shortcut, and place it on your home screen. This way, you can access your favorite apps and activities with just one tap, without having to navigate through your device every time.

            -

            download quickshortcutmaker


            DOWNLOAD ->->->-> https://urllie.com/2uNzyJ



            -

            In this article, we will show you how to download QuickShortcutMaker for Android devices, how to use it to create shortcuts for apps and activities, and what are the benefits and drawbacks of using this app. We will also suggest some alternatives to QuickShortcutMaker that you can try if you are looking for more features or options. So, let's get started!

            -

            How to Download QuickShortcutMaker for Android Devices

            -

            Downloading QuickShortcutMaker for Android devices is very easy and fast. You can follow these simple steps:

            -
              -
            1. Go to the official website of QuickShortcutMaker or Google Play Store and search for the app.
            2. -
            3. Tap on the download button and wait for the installation process to complete.
            4. -
            5. Open the app and grant the necessary permissions for it to work properly.
            6. -
            -

            Congratulations! You have successfully downloaded QuickShortcutMaker for your Android device. Now, you can start creating shortcuts for apps and activities.

            -

            How to Use QuickShortcutMaker to Create Shortcuts for Apps and Activities

            -

            Using QuickShortcutMaker to create shortcuts for apps and activities is very simple and fun. You can follow these easy steps:

            -
              -
            1. Select an app or an activity from the list that appears in the app. You can also use the search bar to find the app or activity that you want.
            2. -
            3. Customize the name and icon of the shortcut. You can use the default name and icon, or change them according to your preference.
            4. -
            5. Tap on the create button and place the shortcut on your home screen. You can drag and drop the shortcut wherever you want.
            6. -
            -

            That's it! You have successfully created shortcuts for apps and activities using QuickShortcutMaker. Now, you can enjoy the convenience and functionality of this app.

            -

            How to download quickshortcutmaker app for android
            -Download quickshortcutmaker apk latest version
            -Quickshortcutmaker app review and features
            -Best alternatives to quickshortcutmaker app
            -Quickshortcutmaker app download for PC
            -How to use quickshortcutmaker app to create shortcuts
            -Quickshortcutmaker app vs Nova Launcher
            -Quickshortcutmaker app not working - how to fix it
            -How to uninstall quickshortcutmaker app from android
            -Download quickshortcutmaker app from FileHippo[^1^]
            -Download quickshortcutmaker app from Softonic[^2^]
            -Download quickshortcutmaker app from APKCombo[^3^]
            -How to update quickshortcutmaker app on android
            -Quickshortcutmaker app permissions and privacy
            -How to backup and restore quickshortcutmaker app data
            -How to customize quickshortcutmaker app settings
            -Quickshortcutmaker app tips and tricks
            -How to download quickshortcutmaker app for iOS
            -Quickshortcutmaker app download for Mac
            -Quickshortcutmaker app download for Windows 10
            -How to download quickshortcutmaker app for Kindle Fire
            -Quickshortcutmaker app download for Chromebook
            -Quickshortcutmaker app download for Samsung Galaxy
            -Quickshortcutmaker app download for Huawei devices
            -Quickshortcutmaker app download for Xiaomi devices
            -How to download quickshortcutmaker app for Android TV
            -Quickshortcutmaker app download for Fire TV Stick
            -Quickshortcutmaker app download for Roku
            -Quickshortcutmaker app download for Smart TV
            -Quickshortcutmaker app download for Android Auto
            -How to download quickshortcutmaker app for Android Wear OS
            -Quickshortcutmaker app download for Fitbit devices
            -Quickshortcutmaker app download for Garmin devices
            -Quickshortcutmaker app download for Amazfit devices
            -Quickshortcutmaker app download for Pebble devices
            -How to download quickshortcutmaker app for Android Go devices
            -Quickshortcutmaker app download for Android One devices
            -Quickshortcutmaker app download for Android Enterprise devices
            -Quickshortcutmaker app download for Android Studio emulator
            -Quickshortcutmaker app download for Bluestacks emulator

            -

            Benefits of Using QuickShortcutMaker for Android Users

            -

            Using QuickShortcutMaker for Android users has many benefits that can enhance your user experience and satisfaction. Here are some of the benefits that you can get from using this app:

            -
              -
            • Save time and effort by accessing your favorite apps and activities with one tap. You don't have to waste time scrolling through your device or searching for the app or activity that you want. You can simply tap on the shortcut and launch it instantly.
            • -
            • Organize your home screen and make it more personalized. You can arrange your shortcuts according to your preference and style. You can also change the name and icon of the shortcut to make it more appealing and recognizable.
            • -
            • Explore hidden features and settings of your device. You can discover and access activities that are not normally visible or accessible on your device. For example, you can create a shortcut to the developer options, the battery saver mode, or the screen recorder.
            • -
            -

            These are just some of the benefits that you can get from using QuickShortcutMaker for Android users. There are many more features and functions that you can explore and enjoy with this app.
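
            -

            To make the "hidden features" benefit above more concrete: a QuickShortcutMaker shortcut simply launches an Android activity by its component name. The sketch below does the same thing from a computer, as an illustration only. It assumes you have the Android platform tools (adb) installed and a device connected with USB debugging enabled; the component name used here is just an example and can differ between devices and Android versions.

```python
# Illustrative sketch: launch an Android activity directly by its component name,
# which is essentially what happens when you tap a QuickShortcutMaker shortcut.
# Assumes adb (Android platform tools) is installed and a device is connected
# with USB debugging enabled. The component name below is only an example.
import subprocess

def launch_activity(component: str) -> None:
    # "am start -n package/activity" asks the Activity Manager to start that activity.
    subprocess.run(["adb", "shell", "am", "start", "-n", component], check=True)

if __name__ == "__main__":
    launch_activity("com.android.settings/.Settings")  # opens the system Settings activity
```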

            -

            Drawbacks of Using QuickShortcutMaker for Android Users

            -

            However, using QuickShortcutMaker for Android users also has some drawbacks that you should be aware of before downloading and using this app. Here are some of the drawbacks that you may encounter when using this app:

            -
              -
            • The app may not work on some devices or Android versions. Some users have reported that the app does not work on their devices or that it crashes frequently. This may be due to compatibility issues or bugs in the app.
            • -
            • The app may create duplicate or unwanted shortcuts. Sometimes, the app may create shortcuts that are already existing on your device or that you don't need or want. This may clutter your home screen and cause confusion.
            • -
            • The app may pose security risks if not downloaded from a trusted source. If you download the app from an unknown or unverified source, you may expose your device to malware or viruses that can harm your device or steal your data.
            • -
            -

            These are some of the drawbacks that you may face when using QuickShortcutMaker for Android users. You should be careful and cautious when downloading and using this app, and always check the source and reviews of the app before installing it.

            -

            Alternatives to QuickShortcutMaker for Android Users

            -

            If you are not satisfied with QuickShortcutMaker or if you are looking for more features or options, you can try some alternatives to QuickShortcutMaker that are available for Android users. Here are some of the alternatives that you can check out:

            - - - - - -
            NameDescription
            Nova LauncherA powerful and customizable launcher that lets you create shortcuts, widgets, gestures, themes, icons, and more for your home screen.
            Shortcut MakerA simple and easy-to-use app that lets you create shortcuts for apps, activities, files, folders, contacts, settings, and more on your device.
            Activity LauncherA lightweight and minimalistic app that lets you launch any activity on your device, even those that are hidden or not accessible by default.
            -

            These are some of the alternatives to QuickShortcutMaker that you can try if you want more features or options for creating shortcuts on your device.

            -

            Conclusion: Download QuickShortcutMaker Today and Enjoy Its Features

            -

            In conclusion, QuickShortcutMaker is a simple and useful app that lets you create shortcuts for apps and activities on your Android device. You can download it from the official website or Google Play Store, and use it to create shortcuts for your favorite apps and activities. You can also customize the name and icon of the shortcut, and place it on your home screen. By using this app, you can save time and effort by accessing your favorite apps and activities with one tap, organize your home screen and make it more personalized, and explore hidden features and settings of your device.

            -

            However, you should also be aware of the drawbacks of using this app, such as compatibility issues, duplicate or unwanted shortcuts, and security risks. You should always download the app from a trusted source, and check the reviews and ratings of the app before installing it. You should also try some alternatives to QuickShortcutMaker if you are looking for more features or options for creating shortcuts on your device.

            -

            If you are interested in downloading QuickShortcutMaker for Android devices, you can click on the link below and follow the instructions. You can also read some of the FAQs that we have prepared for you to answer some of your common questions. We hope you enjoy using QuickShortcutMaker and its features. Thank you for reading this article and have a great day!

            -

            Download QuickShortcutMaker here

            -

            FAQs

            -
              -
            • Is QuickShortcutMaker safe to use?
            • -

              QuickShortcutMaker is safe to use if you download it from a trusted source, such as the official website or Google Play Store. However, you should be careful when downloading it from other sources, as they may contain malware or viruses that can harm your device or steal your data. You should also check the permissions that the app requests and only grant those that are necessary for the app to work properly.

              -
            • How do I delete a shortcut that I created with QuickShortcutMaker?
            • -

              To delete a shortcut that you created with QuickShortcutMaker, you can simply tap and hold on the shortcut and drag it to the trash icon that appears on your home screen. Alternatively, you can go to the app settings and tap on the delete button next to the shortcut that you want to remove.

              -
            • How do I update QuickShortcutMaker to the latest version?
            • -

              To update QuickShortcutMaker to the latest version, you can go to the official website or Google Play Store and check if there is a new version available. If there is, you can tap on the update button and wait for the installation process to complete. You can also enable the auto-update option in your device settings to automatically update the app whenever there is a new version.

              -
            • What are some of the activities that I can create shortcuts for with QuickShortcutMaker?
            • -

              Some of the activities that you can create shortcuts for with QuickShortcutMaker are:

              -
                -
              • Developer options
              • -
              • Battery saver mode
              • -
              • Screen recorder
              • -
              • Accessibility settings
              • -
              • Wi-Fi settings
              • -
              • Bluetooth settings
              • -
              • Airplane mode
              • -
              • VPN settings
              • -
              • Language settings
              • -
              • Date and time settings
              • -
              • And many more!
              • -
              -
            • Can I use QuickShortcutMaker on other devices besides Android?
            • -

              No, QuickShortcutMaker is only compatible with Android devices. However, there may be similar apps or tools that you can use on other devices, such as iOS or Windows. You can search for them online or in your device's app store.

              -

            197e85843d
            -
            -
            \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Rifts RPG PDFs for Free - Explore the Megaverse of Adventure.md b/spaces/fatiXbelha/sd/Download Rifts RPG PDFs for Free - Explore the Megaverse of Adventure.md deleted file mode 100644 index e1dbbaac7b31ce43d59bd25cd895f66958dc0323..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Rifts RPG PDFs for Free - Explore the Megaverse of Adventure.md +++ /dev/null @@ -1,93 +0,0 @@ - -

            Rifts RPG PDF Download Free

            -

            Rifts RPG is one of the most popular and versatile role-playing games in the world. It combines science fiction, fantasy, horror, and adventure in a post-apocalyptic setting where magic and technology coexist. If you are a fan of Rifts RPG or want to try it out for the first time, you might be wondering how to download it for free. In this article, we will show you how to find and download Rifts RPG PDF files from various sources. We will also discuss the benefits and drawbacks of downloading Rifts RPG PDF files for free.

            -

            rifts rpg pdf download free


            DOWNLOADhttps://urllie.com/2uNEJN



            -

            What is Rifts RPG?

            -

            Rifts RPG is a game system created by Palladium Books in 1990. It is based on the premise that a cataclysmic event called the Great Cataclysm opened rifts in space and time all over the world, unleashing magic and strange creatures. The world of Rifts is a chaotic and dangerous place where humans struggle to survive among aliens, monsters, mutants, cyborgs, robots, and supernatural beings. The game allows players to create characters from hundreds of different races, classes, skills, powers, and equipment. The game also features a rich and detailed setting that spans multiple dimensions and planets.

            -

            How to Download Rifts RPG PDF for Free

            -

            If you want to download Rifts RPG PDF files for free, you have several options. Here are some of the most common ones:

            -

            Google Drive

            -

            One of the easiest ways to download Rifts RPG PDF files for free is to use Google Drive. Google Drive is a cloud storage service that allows you to upload, store, share, and access files online. There is a Google Drive folder that contains many Rifts PDF files that you can download for free. You can access it by clicking here. The folder has over 50 Rifts books in PDF format, including the main rulebook, world books, dimension books, sourcebooks, adventure books, and more. However, keep in mind that these files may not be updated or complete.

            -

            Palladium Books

            -

            Another way to download Rifts RPG PDF files for free is to visit the official website of Palladium Books, the publisher of Rifts RPG. Palladium Books is the company that owns the rights to Rifts RPG and produces all the official books and materials for the game. You can visit their website by clicking here. On their website, you can find a lot of information about Rifts RPG, such as news, updates, previews, forums, podcasts, and more. You can also buy or download Rifts PDF files from their online store. However, not all Rifts books are available in PDF format, and some of them are not free. You will need to pay a certain amount of money to download some Rifts PDF files from Palladium Books.

            -

            DriveThruRPG

            -

            A third way to download Rifts RPG PDF files for free is to use DriveThruRPG. DriveThruRPG is a website that sells digital products for various role-playing games, including Rifts RPG. You can visit their website by clicking here. On their website, you can browse, buy, or download Rifts PDF files from different categories, such as core rulebooks, supplements, adventures, maps, and more. Some of the Rifts PDF files on DriveThruRPG are free, while others are not. You will need to create an account and add the Rifts PDF files you want to your cart before you can download them.

            -

            Benefits of Downloading Rifts RPG PDF for Free

            -

            Downloading Rifts RPG PDF files for free has some benefits that can enhance your gaming experience. Here are some of them:

            -

            rifts rpg core rulebook pdf free download
            -rifts rpg world books pdf free download
            -rifts rpg dimension books pdf free download
            -rifts rpg sourcebook one pdf free download
            -rifts rpg ultimate edition pdf free download
            -rifts rpg book of magic pdf free download
            -rifts rpg coalition wars pdf free download
            -rifts rpg conversion book one pdf free download
            -rifts rpg adventure sourcebook pdf free download
            -rifts rpg black market pdf free download
            -rifts rpg vampire kingdoms pdf free download
            -rifts rpg atlantis pdf free download
            -rifts rpg england pdf free download
            -rifts rpg africa pdf free download
            -rifts rpg triax and the ngr pdf free download
            -rifts rpg south america pdf free download
            -rifts rpg underseas pdf free download
            -rifts rpg juicer uprising pdf free download
            -rifts rpg coalition war campaign pdf free download
            -rifts rpg psyscape pdf free download
            -rifts rpg lone star pdf free download
            -rifts rpg new west pdf free download
            -rifts rpg spirit west pdf free download
            -rifts rpg federation of magic pdf free download
            -rifts rpg japan pdf free download
            -rifts rpg australia pdf free download
            -rifts rpg canada pdf free download
            -rifts rpg china one pdf free download
            -rifts rpg china two pdf free download
            -rifts rpg mindwerks pdf free download
            -rifts rpg mechanoids pdf free download
            -rifts rpg coalition navy pdf free download
            -rifts rpg bionics sourcebook pdf free download
            -rifts rpg merc ops pdf free download
            -rifts rpg mercenaries pdf free download
            -rifts rpg shemarrian nation pdf free download
            -rifts rpg wormwood pdf free download
            -rifts rpg phase world pdf free download
            -rifts rpg phase world sourcebook pdf free download
            -rifts rpg skraypers pdf free download
            -rifts rpg the anvil galaxy pdf free download
            -rifts rpg the three galaxies pdf free download
            -rifts rpg naruni wave two pdf free download
            -rifts rpg hades pits of hell pdf free download
            -rifts rpg dyval hell unleashed pdf free download
            -rifts rpg dimensional outbreak pdf free download
            -rifts rpg phaseworld fleets of the three galaxies pdf free download

            -

            Convenience

            -

            One of the main benefits of downloading Rifts RPG PDF files for free is convenience. You can access Rifts PDF files anytime, anywhere, without carrying heavy books around. You can also save space and money by not buying physical books that can take up a lot of room and cost a lot of money. You can simply store Rifts PDF files on your device or cloud service and open them whenever you want to play or read.

            -

            Compatibility

            -

            Another benefit of downloading Rifts RPG PDF files for free is compatibility. You can use Rifts PDF files with any device that supports PDF format, such as laptops, tablets, or phones. You can also use different apps or programs to view, edit, or print Rifts PDF files according to your needs. You can zoom in or out, search for keywords, bookmark pages, highlight text, add notes, or print copies of Rifts PDF files with ease.

            -

            Customization

            -

            A third benefit of downloading Rifts RPG PDF files for free is customization. You can edit, annotate, or print Rifts PDF files according to your preferences. You can modify the text size, font, color, or layout of Rifts PDF files to suit your reading style. You can also add comments, suggestions, or feedback to Rifts PDF files to share with other players or GMs. You can also print out specific pages or sections of Rifts PDF files that you need for your game session.

            -

            Drawbacks of Downloading Rifts RPG PDF for Free

            -

            However, downloading Rifts RPG PDF files for free also has some drawbacks that can affect your gaming experience. Here are some of them:

            -

            Legality

            -

            One of the main drawbacks of downloading Rifts RPG PDF files for free is legality. You might be violating the intellectual property rights of Palladium Books by downloading Rifts PDF files without their permission. Palladium Books is the sole owner and creator of Rifts RPG and its related products. They have the exclusive right to distribute, sell, or license Rifts RPG and its PDF files. By downloading Rifts PDF files from unauthorized sources, you might be infringing on their rights and breaking the law.

            -

            Quality

            -

            Another drawback of downloading Rifts RPG PDF files for free is quality. You might encounter low-quality, incomplete, or corrupted Rifts PDF files that can ruin your gaming experience. You might find that some Rifts PDF files are missing pages, images, tables, or text. You might also find that some Rifts PDF files are poorly scanned, formatted, or edited. You might also find that some Rifts PDF files are outdated or incompatible with the latest rules or editions of Rifts RPG.

            -

            Security

            -

            A third drawback of downloading Rifts RPG PDF files for free is security. You might expose your device to malware, viruses, or phishing attacks by downloading Rifts PDF files from untrusted websites. You might also risk your personal information, such as your email, password, or credit card details, by downloading Rifts PDF files from dubious sources. You might also encounter pop-ups, ads, or redirects that can annoy you or harm your device.

            -

            Conclusion

            -

            Rifts RPG is a great game that offers endless possibilities for adventure and fun. If you want to download Rifts RPG PDF files for free, you have several options to choose from. However, you should also be aware of the benefits and drawbacks of downloading Rifts PDF files for free. You should weigh the pros and cons carefully before you decide to download Rifts PDF files for free. You should also respect the rights and efforts of Palladium Books and support them by buying their official products if you can. We hope this article has helped you learn more about Rifts RPG and its PDF files.

            -

            FAQs

            -

            Here are some frequently asked questions and answers about Rifts RPG and its PDF files:

            -

            Q: What are the minimum requirements to play Rifts RPG?

            -

            A: To play Rifts RPG, you need at least the main rulebook, which contains the basic rules, character creation, combat, magic, psionics, equipment, and setting information. You also need dice, paper, pencils, and a group of friends to play with. You can also use other books, such as world books, dimension books, sourcebooks, or adventure books, to expand your game options and scenarios.

            -

            Q: How many editions of Rifts RPG are there?

            -

            A: There are two main editions of Rifts RPG: the original edition and the ultimate edition. The original edition was published in 1990 and has over 80 books in print. The ultimate edition was published in 2005 and revised and updated the main rulebook with new rules, content, artwork, and layout. The ultimate edition is compatible with most of the original edition books, except for a few that have been replaced or updated.

            -

            Q: How can I convert Rifts RPG to other game systems?

            -

            A: Rifts RPG is a unique game system that has its own mechanics and terminology. However, there are some unofficial conversion guides and tools that can help you convert Rifts RPG to other game systems, such as D&D, GURPS, Savage Worlds, or Fate. You can find some of them online by searching for "Rifts conversion" on Google or other search engines.

            -

            Q: How can I create my own Rifts PDF files?

            -

            A: If you want to create your own Rifts PDF files, you need to have a copy of the original book in physical or digital form. You also need a scanner or a camera to capture the images of the book pages. You also need a software program that can convert images to PDF format, such as Adobe Acrobat or online converters. You also need a software program that can edit PDF files, such as Adobe Acrobat or online editors. You can then scan or take pictures of the book pages, convert them to PDF format, and edit them as you wish.
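
              -

              As a rough illustration of the "convert them to PDF" step, here is a minimal Python sketch that bundles a folder of scanned page images into a single PDF. It assumes the Pillow library is installed and that your scans are JPEG files whose names sort in page order; any dedicated image-to-PDF converter does the same job, and the folder and file names are placeholders.

```python
# Minimal sketch: combine scanned pages (page-001.jpg, page-002.jpg, ...) into one PDF.
# Assumes "pip install Pillow" and that filenames sort in page order.
from pathlib import Path
from PIL import Image

def images_to_pdf(scan_dir: str, output_pdf: str) -> None:
    pages = [Image.open(p).convert("RGB") for p in sorted(Path(scan_dir).glob("*.jpg"))]
    if not pages:
        raise ValueError(f"No .jpg scans found in {scan_dir}")
    # Pillow writes the first page and appends the remaining ones.
    pages[0].save(output_pdf, save_all=True, append_images=pages[1:])

if __name__ == "__main__":
    images_to_pdf("scans", "my_book.pdf")  # folder and file names are placeholders
```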

            -

            Q: How can I share my Rifts PDF files with others?

            -

            A: If you want to share your Rifts PDF files with others, you need to have a way to upload, store, and share them online. You can use cloud storage services, such as Google Drive, Dropbox, or OneDrive, to upload and store your Rifts PDF files online. You can then share the links to your Rifts PDF files with others via email, social media, or messaging apps. However, you should be careful about who you share your Rifts PDF files with and respect the rights of Palladium Books and other authors.

            401be4b1e0
            -
            -
            \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Bus Simulator Ultimate Hack APK The Best Simulation Game for Android.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Bus Simulator Ultimate Hack APK The Best Simulation Game for Android.md deleted file mode 100644 index 6ee60bdcd2fb4ef5c6805726f96a18567f568a6b..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Bus Simulator Ultimate Hack APK The Best Simulation Game for Android.md +++ /dev/null @@ -1,70 +0,0 @@ - -

            Download Bus Simulator Ultimate Hack APK: How to Get Unlimited Money and More

            -

            Do you love driving buses and exploring different cities? Do you want to experience the realistic bus simulation game with amazing graphics and features? If yes, then you should try Bus Simulator Ultimate, one of the most popular and realistic bus simulation games for Android devices. But what if you want to get unlimited money, no ads, and all the buses, maps, and skins unlocked in the game? Well, in that case, you need to download Bus Simulator Ultimate hack apk, a modified version of the game that gives you access to all the premium features for free. In this article, we will show you how to download Bus Simulator Ultimate hack apk and what are the features of this hacked version.

            -

            download bus simulator ultimate hack apk


            Download Filehttps://gohhs.com/2uPmMd



            -

            Introduction

            -

            What is Bus Simulator Ultimate?

            -

            Bus Simulator Ultimate is a bus simulation game developed by Zuuks Games, a Turkish game studio that also created other popular simulation games like Truck Simulator 2018 and Euro Truck Driver. In this game, you can drive various types of buses across different cities and countries, such as Germany, USA, France, Italy, Spain, Turkey, and more. You can also create your own bus company and manage your fleet, routes, staff, and income. The game features realistic bus physics, traffic system, weather conditions, passengers' reactions, and sounds. You can also customize your buses with different skins, colors, accessories, and logos.

            -

            Why download Bus Simulator Ultimate hack apk?

            -

            Bus Simulator Ultimate is a free-to-play game that you can download from Google Play Store or App Store. However, the game also has some in-app purchases that require real money. For example, you need to buy coins or gems to unlock new buses, maps, or skins. You also need to watch ads to get some rewards or bonuses. These limitations can be frustrating for some players who want to enjoy the game without spending money or watching ads. That's why some people prefer to download Bus Simulator Ultimate hack apk, a modified version of the game that gives them unlimited money, no ads, and all the buses, maps, and skins unlocked for free.

            -

            How to download Bus Simulator Ultimate hack apk

            -

            Step 1: Find a reliable source

            -

            The first step to download Bus Simulator Ultimate hack apk is to find a reliable source that offers the latest version of the hacked apk file. There are many websites that claim to provide the hacked apk file, but not all of them are trustworthy. Some of them may contain viruses or malware that can harm your device or steal your personal information. Therefore, you need to be careful and do some research before downloading any apk file from unknown sources. One of the reliable sources that we recommend is APKdone, a website that provides safe and tested apk files for various Android games and apps.

            -

            Step 2: Enable unknown sources on your device

            -

            The next step is to enable unknown sources on your device. This is necessary because Android devices do not allow installing apps from sources other than Google Play Store by default. To enable unknown sources, you need to go to your device's settings > security > unknown sources and toggle it on. This will allow you to install apps from third-party sources.

            -

            bus simulator ultimate mod apk unlimited money
            -how to hack bus simulator ultimate game
            -bus simulator ultimate apk download for android
            -bus simulator ultimate cheats and tips
            -free download bus simulator ultimate modded version
            -bus simulator ultimate hack tool online
            -bus simulator ultimate latest apk with hack
            -bus simulator ultimate mod apk no ads
            -bus simulator ultimate game hack apk download
            -bus simulator ultimate unlimited money and gold apk
            -best bus simulator ultimate hack apk for android
            -bus simulator ultimate hack apk 2023 latest version
            -download bus simulator ultimate mod apk revdl
            -bus simulator ultimate hack apk free shopping
            -bus simulator ultimate mod apk all buses unlocked
            -bus simulator ultimate hack apk without root
            -download bus simulator ultimate mod apk happymod
            -bus simulator ultimate hack apk unlimited everything
            -bus simulator ultimate mod apk offline
            -bus simulator ultimate hack version download for android
            -bus simulator ultimate mod apk rexdl
            -how to download bus simulator ultimate hack apk on pc
            -bus simulator ultimate hack apk obb
            -download bus simulator ultimate mod apk android 1
            -bus simulator ultimate hack apk ios

            -

            Step 3: Download and install the apk file

            -

            The third step is to download and install the apk file from the source that you have chosen. To do this, you need to visit the website and look for the download button or link for Bus Simulator Ultimate hack apk. Then, you need to tap on the download button or link and wait for the download to complete. After that, you need to locate the downloaded apk file on your device's file manager and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish.

            -

            Step 4: Launch the game and enjoy the hack features

            -

            The final step is to launch the game and enjoy the hack features. To do this, you need to find the game icon on your device's home screen or app drawer and tap on it to open it. You will see a message that says "Bus Simulator Ultimate Hack APK" on the loading screen, which means that the hack is working. You can now access all the premium features of the game for free, such as unlimited money, no ads, all buses, maps, and skins unlocked. You can also play the game offline without any internet connection.

            -

            Features of Bus Simulator Ultimate hack apk

            -

            Unlimited money

            -

            One of the main features of Bus Simulator Ultimate hack apk is unlimited money. Money is the currency of the game that you can use to buy new buses, upgrade your buses, hire new drivers, expand your routes, and more. With unlimited money, you can buy anything you want in the game without worrying about running out of money. You can also increase your income by completing missions, earning bonuses, and attracting more passengers.

            -

            No ads

            -

            Another feature of Bus Simulator Ultimate hack apk is no ads. Ads are annoying and distracting, especially when they pop up in the middle of the game or when you are trying to enjoy the scenery. They can also slow down your device or consume your data. With no ads, you can play the game smoothly and without any interruptions. You can also save your battery and data by playing offline.

            -

            All buses unlocked

            -

            A third feature of Bus Simulator Ultimate hack apk is all buses unlocked. Buses are the main attraction of the game, as they allow you to drive different types of buses across different cities and countries. The game has over 30 buses to choose from, each with its own specifications, features, and design. However, not all of them are available from the start. You need to unlock them by spending money or gems, or by reaching certain levels or achievements. With all buses unlocked, you can access any bus you want from the beginning without spending any money or gems. You can also switch between different buses anytime you want.

            -

            All maps unlocked

            -

            A fourth feature of Bus Simulator Ultimate hack apk is all maps unlocked. Maps are the locations where you can drive your buses and explore different places. The game has over 10 maps to choose from, each with its own scenery, landmarks, roads, traffic, weather, and passengers. However, not all of them are available from the start. You need to unlock them by spending money or gems, or by reaching certain levels or achievements. With all maps unlocked, you can access any map you want from the beginning without spending any money or gems. You can also travel between different maps anytime you want.

            -

            All skins unlocked

            -

            A fifth feature of Bus Simulator Ultimate hack apk is all skins unlocked. Skins are the cosmetic items that you can use to customize your buses with different colors, patterns, accessories, and logos. The game has over 50 skins to choose from, each with its own style and theme. However, not all of them are available from the start. You need to unlock them by spending money or gems, or by reaching certain levels or achievements. With all skins unlocked, you can access any skin you want from the beginning without spending any money or gems. You can also change your bus's appearance anytime you want.

            -

            Conclusion

            -

            Bus Simulator Ultimate is a fun and realistic bus simulation game that lets you drive various types of buses across different cities and countries. However, if you want to get unlimited money, no ads, and all the buses, maps, and skins unlocked in the game, you need to download Bus Simulator Ultimate hack apk, a modified version of the game that gives you access to all the premium features for free. In this article, we have shown you how to download Bus Simulator Ultimate hack apk and what are the features of this hacked version. We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Happy bus driving!

            -

            FAQs

            -

            Here are some frequently asked questions about Bus Simulator Ultimate hack apk:

            -

            Is Bus Simulator Ultimate hack apk safe to use?

            -

            Yes, Bus Simulator Ultimate hack apk is safe to use as long as you download it from a reliable source like APKdone. We have tested the apk file and found no viruses or malware that can harm your device or steal your personal information. However, you should always be careful and scan any apk file before installing it on your device.
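
            -

            One extra precaution, besides an antivirus scan, is to compare the downloaded file's checksum against the hash published by the download site, when the site provides one. The sketch below computes a SHA-256 hash using only Python's standard library; the file name is a placeholder for whatever you actually downloaded.

```python
# Minimal sketch: compute the SHA-256 checksum of a downloaded APK so it can be
# compared with the hash published by the source (if one is listed).
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    print(sha256_of("bus-simulator-ultimate-hack.apk"))  # placeholder file name
```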

            -

            Is Bus Simulator Ultimate hack apk legal to use?

            -

            No, Bus Simulator Ultimate hack apk is not legal to use as it violates the terms and conditions of the original game. By using the hacked version, you are bypassing the security measures and cheating the game developers. This can result in legal actions or penalties from the game developers or Google Play Store. Therefore, we do not recommend or endorse using the hacked version of the game.

            -

            Will Bus Simulator Ultimate hack apk work on my device?

            -

            Bus Simulator Ultimate hack apk should work on most Android devices that support the original game. However, some devices may not be compatible with the hacked version due to different specifications or settings. To check if your device is compatible, you can try installing the original game first and see if it runs smoothly. If it does, then you can try installing the hacked version and see if it works.

            -

            How can I update Bus Simulator Ultimate hack apk?

            -

            Bus Simulator Ultimate hack apk is not available on Google Play Store, so you cannot update it automatically like other apps. To update the hacked version, you need to visit the source website and look for the latest version of the apk file. Then, you need to download and install it on your device like before. However, you should always backup your game data before updating, as some updates may erase your progress or cause errors.

            -

            Can I play Bus Simulator Ultimate hack apk online with other players?

            -

            No, Bus Simulator Ultimate hack apk is not compatible with online mode or multiplayer mode. The hacked version is only for offline mode or single-player mode. If you try to play online with other players, you may face errors or bans from the game servers. Therefore, we advise you to play offline only and avoid connecting to the internet while playing.

            197e85843d
            -
            -
            \ No newline at end of file diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/mime-types/README.md b/spaces/fffiloni/controlnet-animation-doodle/node_modules/mime-types/README.md deleted file mode 100644 index 48d2fb477241e837c6e8d349777aac312746029b..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/mime-types/README.md +++ /dev/null @@ -1,113 +0,0 @@ -# mime-types - -[![NPM Version][npm-version-image]][npm-url] -[![NPM Downloads][npm-downloads-image]][npm-url] -[![Node.js Version][node-version-image]][node-version-url] -[![Build Status][ci-image]][ci-url] -[![Test Coverage][coveralls-image]][coveralls-url] - -The ultimate javascript content-type utility. - -Similar to [the `mime@1.x` module](https://www.npmjs.com/package/mime), except: - -- __No fallbacks.__ Instead of naively returning the first available type, - `mime-types` simply returns `false`, so do - `var type = mime.lookup('unrecognized') || 'application/octet-stream'`. -- No `new Mime()` business, so you could do `var lookup = require('mime-types').lookup`. -- No `.define()` functionality -- Bug fixes for `.lookup(path)` - -Otherwise, the API is compatible with `mime` 1.x. - -## Install - -This is a [Node.js](https://nodejs.org/en/) module available through the -[npm registry](https://www.npmjs.com/). Installation is done using the -[`npm install` command](https://docs.npmjs.com/getting-started/installing-npm-packages-locally): - -```sh -$ npm install mime-types -``` - -## Adding Types - -All mime types are based on [mime-db](https://www.npmjs.com/package/mime-db), -so open a PR there if you'd like to add mime types. - -## API - -```js -var mime = require('mime-types') -``` - -All functions return `false` if input is invalid or not found. - -### mime.lookup(path) - -Lookup the content-type associated with a file. - -```js -mime.lookup('json') // 'application/json' -mime.lookup('.md') // 'text/markdown' -mime.lookup('file.html') // 'text/html' -mime.lookup('folder/file.js') // 'application/javascript' -mime.lookup('folder/.htaccess') // false - -mime.lookup('cats') // false -``` - -### mime.contentType(type) - -Create a full content-type header given a content-type or extension. -When given an extension, `mime.lookup` is used to get the matching -content-type, otherwise the given content-type is used. Then if the -content-type does not already have a `charset` parameter, `mime.charset` -is used to get the default charset and add to the returned content-type. - -```js -mime.contentType('markdown') // 'text/x-markdown; charset=utf-8' -mime.contentType('file.json') // 'application/json; charset=utf-8' -mime.contentType('text/html') // 'text/html; charset=utf-8' -mime.contentType('text/html; charset=iso-8859-1') // 'text/html; charset=iso-8859-1' - -// from a full path -mime.contentType(path.extname('/path/to/file.json')) // 'application/json; charset=utf-8' -``` - -### mime.extension(type) - -Get the default extension for a content-type. - -```js -mime.extension('application/octet-stream') // 'bin' -``` - -### mime.charset(type) - -Lookup the implied default charset of a content-type. - -```js -mime.charset('text/markdown') // 'UTF-8' -``` - -### var type = mime.types[extension] - -A map of content-types by extension. - -### [extensions...] = mime.extensions[type] - -A map of extensions by content-type. 
- -## License - -[MIT](LICENSE) - -[ci-image]: https://badgen.net/github/checks/jshttp/mime-types/master?label=ci -[ci-url]: https://github.com/jshttp/mime-types/actions/workflows/ci.yml -[coveralls-image]: https://badgen.net/coveralls/c/github/jshttp/mime-types/master -[coveralls-url]: https://coveralls.io/r/jshttp/mime-types?branch=master -[node-version-image]: https://badgen.net/npm/node/mime-types -[node-version-url]: https://nodejs.org/en/download -[npm-downloads-image]: https://badgen.net/npm/dm/mime-types -[npm-url]: https://npmjs.org/package/mime-types -[npm-version-image]: https://badgen.net/npm/v/mime-types diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io-adapter/dist/index.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io-adapter/dist/index.d.ts deleted file mode 100644 index efed03151b1dc86bc3e3f38a1d0b41c854f2b0fa..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/socket.io-adapter/dist/index.d.ts +++ /dev/null @@ -1,179 +0,0 @@ -/// -import { EventEmitter } from "events"; -/** - * A public ID, sent by the server at the beginning of the Socket.IO session and which can be used for private messaging - */ -export type SocketId = string; -/** - * A private ID, sent by the server at the beginning of the Socket.IO session and used for connection state recovery - * upon reconnection - */ -export type PrivateSessionId = string; -export type Room = string; -export interface BroadcastFlags { - volatile?: boolean; - compress?: boolean; - local?: boolean; - broadcast?: boolean; - binary?: boolean; - timeout?: number; -} -export interface BroadcastOptions { - rooms: Set; - except?: Set; - flags?: BroadcastFlags; -} -interface SessionToPersist { - sid: SocketId; - pid: PrivateSessionId; - rooms: Room[]; - data: unknown; -} -export type Session = SessionToPersist & { - missedPackets: unknown[][]; -}; -export declare class Adapter extends EventEmitter { - readonly nsp: any; - rooms: Map>; - sids: Map>; - private readonly encoder; - /** - * In-memory adapter constructor. - * - * @param {Namespace} nsp - */ - constructor(nsp: any); - /** - * To be overridden - */ - init(): Promise | void; - /** - * To be overridden - */ - close(): Promise | void; - /** - * Returns the number of Socket.IO servers in the cluster - * - * @public - */ - serverCount(): Promise; - /** - * Adds a socket to a list of room. - * - * @param {SocketId} id the socket id - * @param {Set} rooms a set of rooms - * @public - */ - addAll(id: SocketId, rooms: Set): Promise | void; - /** - * Removes a socket from a room. - * - * @param {SocketId} id the socket id - * @param {Room} room the room name - */ - del(id: SocketId, room: Room): Promise | void; - private _del; - /** - * Removes a socket from all rooms it's joined. - * - * @param {SocketId} id the socket id - */ - delAll(id: SocketId): void; - /** - * Broadcasts a packet. - * - * Options: - * - `flags` {Object} flags for this packet - * - `except` {Array} sids that should be excluded - * - `rooms` {Array} list of rooms to broadcast to - * - * @param {Object} packet the packet object - * @param {Object} opts the options - * @public - */ - broadcast(packet: any, opts: BroadcastOptions): void; - /** - * Broadcasts a packet and expects multiple acknowledgements. 
- * - * Options: - * - `flags` {Object} flags for this packet - * - `except` {Array} sids that should be excluded - * - `rooms` {Array} list of rooms to broadcast to - * - * @param {Object} packet the packet object - * @param {Object} opts the options - * @param clientCountCallback - the number of clients that received the packet - * @param ack - the callback that will be called for each client response - * - * @public - */ - broadcastWithAck(packet: any, opts: BroadcastOptions, clientCountCallback: (clientCount: number) => void, ack: (...args: any[]) => void): void; - private _encode; - /** - * Gets a list of sockets by sid. - * - * @param {Set} rooms the explicit set of rooms to check. - */ - sockets(rooms: Set): Promise>; - /** - * Gets the list of rooms a given socket has joined. - * - * @param {SocketId} id the socket id - */ - socketRooms(id: SocketId): Set | undefined; - /** - * Returns the matching socket instances - * - * @param opts - the filters to apply - */ - fetchSockets(opts: BroadcastOptions): Promise; - /** - * Makes the matching socket instances join the specified rooms - * - * @param opts - the filters to apply - * @param rooms - the rooms to join - */ - addSockets(opts: BroadcastOptions, rooms: Room[]): void; - /** - * Makes the matching socket instances leave the specified rooms - * - * @param opts - the filters to apply - * @param rooms - the rooms to leave - */ - delSockets(opts: BroadcastOptions, rooms: Room[]): void; - /** - * Makes the matching socket instances disconnect - * - * @param opts - the filters to apply - * @param close - whether to close the underlying connection - */ - disconnectSockets(opts: BroadcastOptions, close: boolean): void; - private apply; - private computeExceptSids; - /** - * Send a packet to the other Socket.IO servers in the cluster - * @param packet - an array of arguments, which may include an acknowledgement callback at the end - */ - serverSideEmit(packet: any[]): void; - /** - * Save the client session in order to restore it upon reconnection. - */ - persistSession(session: SessionToPersist): void; - /** - * Restore the session and find the packets that were missed by the client. 
- * @param pid - * @param offset - */ - restoreSession(pid: PrivateSessionId, offset: string): Promise; -} -export declare class SessionAwareAdapter extends Adapter { - readonly nsp: any; - private readonly maxDisconnectionDuration; - private sessions; - private packets; - constructor(nsp: any); - persistSession(session: SessionToPersist): void; - restoreSession(pid: PrivateSessionId, offset: string): Promise; - broadcast(packet: any, opts: BroadcastOptions): void; -} -export {}; diff --git a/spaces/firestalker/anime-tts/transforms.py b/spaces/firestalker/anime-tts/transforms.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/firestalker/anime-tts/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - 
unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = 
input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/florim/MedGPT/tests/integration/memory_tests.py b/spaces/florim/MedGPT/tests/integration/memory_tests.py deleted file mode 100644 index eead2da1cfa9b8a99592939623955808fc430068..0000000000000000000000000000000000000000 --- a/spaces/florim/MedGPT/tests/integration/memory_tests.py +++ /dev/null @@ -1,49 +0,0 @@ -import random -import string -import sys -import unittest -from pathlib import Path - -from autogpt.config import Config -from autogpt.memory.local import LocalCache - - -class TestLocalCache(unittest.TestCase): - def random_string(self, length): - return "".join(random.choice(string.ascii_letters) for _ in range(length)) - - def setUp(self): - cfg = cfg = Config() - self.cache = LocalCache(cfg) - self.cache.clear() - - # Add example texts to the cache - self.example_texts = [ - "The quick brown fox jumps over the lazy dog", - "I love machine learning and natural language processing", - "The cake is a lie, but the pie is always true", - "ChatGPT is an advanced AI model for conversation", - ] - - for text in self.example_texts: - self.cache.add(text) - - # Add some random strings to test noise - for _ in range(5): - self.cache.add(self.random_string(10)) - - def test_get_relevant(self): - query = "I'm interested in artificial intelligence and NLP" - k = 3 - relevant_texts = self.cache.get_relevant(query, k) - - print(f"Top {k} relevant texts for the query '{query}':") - for i, text in enumerate(relevant_texts, start=1): - print(f"{i}. {text}") - - self.assertEqual(len(relevant_texts), k) - self.assertIn(self.example_texts[1], relevant_texts) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/frapochetti/blurry-faces/app.py b/spaces/frapochetti/blurry-faces/app.py deleted file mode 100644 index 4004110d228f4e9414683619fc0bea75f42f4bc4..0000000000000000000000000000000000000000 --- a/spaces/frapochetti/blurry-faces/app.py +++ /dev/null @@ -1,70 +0,0 @@ -import cv2 -import gradio as gr -from typing import Union, Tuple -from PIL import Image, ImageOps -import numpy as np -import torch - -model = torch.jit.load('./model/model.pt').eval() - -def resize_with_padding(img: Image.Image, expected_size: Tuple[int, int]) -> Image.Image: - img.thumbnail((expected_size[0], expected_size[1])) - delta_width = expected_size[0] - img.size[0] - delta_height = expected_size[1] - img.size[1] - pad_width = delta_width // 2 - pad_height = delta_height // 2 - padding = (pad_width, pad_height, delta_width - pad_width, delta_height - pad_height) - return ImageOps.expand(img, padding), padding - -def preprocess_image(img: Image.Image, size: int = 512) -> Tuple[Image.Image, torch.tensor, Tuple[int]]: - pil_img, padding = resize_with_padding(img, (size, size)) - - img = (np.array(pil_img).astype(np.float32) / 255) - np.array([0.485, 0.456, 0.406], dtype=np.float32).reshape(1, 1, 3) - img = img / np.array([0.229, 0.224, 0.225], dtype=np.float32).reshape(1, 1, 3) - img = np.transpose(img, (2, 0, 1)) - - return pil_img, torch.tensor(img[None]), padding - -def soft_blur_with_mask(image: Image.Image, mask: torch.tensor, padding: Tuple[int]) -> Image.Image: - image = np.array(image) - # Create a blurred copy of the original image. 
- blurred_image = cv2.GaussianBlur(image, (221, 221), sigmaX=20, sigmaY=20) - image_height, image_width = image.shape[:2] - mask = cv2.resize(mask.astype(np.uint8), (image_width, image_height), interpolation=cv2.INTER_NEAREST) - # Blurring the mask itself to get a softer mask with no firm edges - mask = cv2.GaussianBlur(mask.astype(np.float32), (11, 11), 10, 10)[:, :, None] - - # Take the blurred image where the mask it positive, and the original image where the image is original - image = (mask * blurred_image + (1.0 - mask) * image) - pad_w, pad_h, _, _ = padding - img_w, img_h, _ = image.shape - image = image[(pad_h):(img_h-pad_h), (pad_w):(img_w-pad_w), :] - return Image.fromarray(image.astype(np.uint8)) - -def run(image, size): - pil_image, torch_image, padding = preprocess_image(image, size=size) - - with torch.inference_mode(): - mask = model(torch_image) - mask = mask.argmax(dim=1).numpy().squeeze() - - return soft_blur_with_mask(pil_image, mask, padding) - -content_image_input = gr.inputs.Image(label="Content Image", type="pil") -model_image_size = gr.inputs.Radio([256, 384, 512, 1024], type="value", default=512, label="Inference size") - -description="Privacy first! Upload an image of a groupf of people and blur their faces automatically." -article=""" -Demo built on top of a face segmentation model trained from scratch with IceVision on the -FaceSynthetics dataset. -""" -examples = [["./images/girls.jpeg", 384], ["./images/kid.jpeg", 256], ["./images/family.jpeg", 512], ["./images/crowd1.jpeg", 1024], ["./images/crowd2.jpeg", 1024]] - -app_interface = gr.Interface(fn=run, - inputs=[content_image_input, model_image_size], - outputs="image", - title="Blurry Faces", - description=description, - examples=examples, - article=article) -app_interface.launch() \ No newline at end of file diff --git a/spaces/frgfm/torch-cam/README.md b/spaces/frgfm/torch-cam/README.md deleted file mode 100644 index fdf1e1cc3a19617d206b177e73fa1ed835b7f2d6..0000000000000000000000000000000000000000 --- a/spaces/frgfm/torch-cam/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: TorchCAM -emoji: 🎨 -colorFrom: purple -colorTo: pink -sdk: streamlit -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
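For reference, the fields described in the configuration section above combine into a single YAML front-matter block at the top of a Space README. A minimal, hypothetical example follows (the values are illustrative only and are not taken from any Space in this diff):

```yaml
---
title: Example Demo      # display title for the Space
emoji: 🚀                # Space emoji (emoji-only character allowed)
colorFrom: indigo        # thumbnail gradient start color
colorTo: pink            # thumbnail gradient end color
sdk: streamlit           # either gradio or streamlit
sdk_version: "1.10.0"    # only applicable for the streamlit SDK (assumed version)
app_file: app.py         # main application file, relative to the repo root
pinned: false            # whether the Space stays on top of your list
---
```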
diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/utils/__init__.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/utils/__init__.py deleted file mode 100644 index ac489e2dbbc0e6fa87f5088b4edcc20f8cadc1a6..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/utils/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .collect_env import collect_env -from .logger import get_root_logger - -__all__ = ['get_root_logger', 'collect_env'] diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Candy Crush Jelly Saga 2.37.27 Apk Mod (Unlimited All Unlocked) for android The Ultimate Guide and Tips.md b/spaces/gotiQspiryo/whisper-ui/examples/Candy Crush Jelly Saga 2.37.27 Apk Mod (Unlimited All Unlocked) for android The Ultimate Guide and Tips.md deleted file mode 100644 index 848a4579465e981109c66777a8622e2cd3678b1d..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Candy Crush Jelly Saga 2.37.27 Apk Mod (Unlimited All Unlocked) for android The Ultimate Guide and Tips.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Candy Crush Jelly Saga 2.37.27 Apk Mod (Unlimited All Unlocked) for android


            Download ===> https://urlgoal.com/2uyLm5



            - - aaccfb2cb3
            -
            -
            -

            diff --git a/spaces/gradio/HuBERT/examples/latent_depth/latent_depth_src/models/latent_multilingual_transformer.py b/spaces/gradio/HuBERT/examples/latent_depth/latent_depth_src/models/latent_multilingual_transformer.py deleted file mode 100644 index 12b7e67d0336e54be05f9fdec49df2b7d4c7ae29..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/latent_depth/latent_depth_src/models/latent_multilingual_transformer.py +++ /dev/null @@ -1,75 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from fairseq.models import register_model, register_model_architecture -from fairseq.models.multilingual_transformer import MultilingualTransformerModel -from fairseq.models.transformer import ( - TransformerDecoder, - TransformerEncoder, - base_architecture, -) - -from .latent_transformer import LatentTransformerDecoder, LatentTransformerEncoder - - -@register_model("latent_multilingual_transformer") -class LatentMultilingualTransformerModel(MultilingualTransformerModel): - """A variant of standard multilingual Transformer models which encoder and/or - decoders supports latent depth, as is in "Deep Transformer with Latent Depth" - (https://arxiv.org/abs/2009.13102). - """ - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - MultilingualTransformerModel.add_args(parser) - parser.add_argument( - '--soft-select', - action='store_true', - help='use soft samples in training an inference', - ) - parser.add_argument( - '--sampling-tau', - type=float, - default=5., - help='sampling temperature', - ) - - @classmethod - def _get_module_class(cls, is_encoder, args, lang_dict, embed_tokens, langs): - if is_encoder: - if hasattr(args, "encoder_latent_layer") and args.encoder_latent_layer: - return LatentTransformerEncoder( - args, lang_dict, embed_tokens, num_logits=len(langs) - ) - else: - return TransformerEncoder(args, lang_dict, embed_tokens) - else: - if hasattr(args, "decoder_latent_layer") and args.decoder_latent_layer: - return LatentTransformerDecoder( - args, lang_dict, embed_tokens, num_logits=len(langs) - ) - else: - return TransformerDecoder(args, lang_dict, embed_tokens) - - -@register_model_architecture( - "latent_multilingual_transformer", "latent_multilingual_transformer" -) -def latent_multilingual_architecture(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 1024) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4) - args.encoder_layers = getattr(args, "encoder_layers", 12) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 1024) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4) - args.decoder_layers = getattr(args, "decoder_layers", 24) - args.share_encoders = getattr(args, "share_encoders", True) - args.share_decoders = getattr(args, "share_decoders", True) - args.share_encoder_embeddings = getattr(args, "share_encoder_embeddings", True) - args.share_decoder_embeddings = getattr(args, "share_decoder_embeddings", True) - - base_architecture(args) diff --git a/spaces/gradio/HuBERT/fairseq/benchmark/dummy_model.py b/spaces/gradio/HuBERT/fairseq/benchmark/dummy_model.py deleted file mode 100644 index 
ff26e4fe655d8e8d7f9942c4bd3df7cd267405fb..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/benchmark/dummy_model.py +++ /dev/null @@ -1,96 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch.nn as nn -import torch.nn.functional as F -from fairseq.data import Dictionary -from fairseq.models import ( - FairseqDecoder, - FairseqLanguageModel, - register_model, - register_model_architecture, -) - - -@register_model("dummy_model") -class DummyModel(FairseqLanguageModel): - def __init__(self, args, encoder): - super().__init__(encoder) - self.args = args - - @staticmethod - def add_args(parser): - parser.add_argument("--num-layers", type=int, default=24) - parser.add_argument("--embed-dim", type=int, default=1024) - - @classmethod - def build_model(cls, args, task): - encoder = DummyEncoder( - num_embed=len(task.target_dictionary), - embed_dim=args.embed_dim, - num_layers=args.num_layers, - ) - return cls(args, encoder) - - def forward(self, src_tokens, masked_tokens=None, **kwargs): - return self.decoder(src_tokens, masked_tokens=masked_tokens) - - -class DummyEncoder(FairseqDecoder): - def __init__(self, num_embed=50000, embed_dim=1024, num_layers=24): - super().__init__(Dictionary()) - self.embed = nn.Embedding( - num_embeddings=num_embed, embedding_dim=embed_dim, padding_idx=0 - ) - self.layers_a = nn.ModuleList( - [ - nn.Sequential( - nn.LayerNorm(embed_dim), - nn.Linear(embed_dim, 3 * embed_dim), # q, k, v input projection - nn.Linear(3 * embed_dim, embed_dim), # skip self-attention - nn.Linear(embed_dim, embed_dim), # output projection - nn.Dropout(), - ) - for i in range(num_layers) - ] - ) - self.layers_b = nn.ModuleList( - [ - nn.Sequential( - nn.LayerNorm(embed_dim), - nn.Linear(embed_dim, 4 * embed_dim), # FFN - nn.ReLU(), - nn.Linear(4 * embed_dim, embed_dim), # FFN - nn.Dropout(0.1), - ) - for i in range(num_layers) - ] - ) - self.out_proj = nn.Linear(embed_dim, num_embed) - - def forward(self, tokens, masked_tokens=None): - x = self.embed(tokens) - for layer_a, layer_b in zip(self.layers_a, self.layers_b): - x = x + layer_a(x) - x = x + layer_b(x) - x = self.out_proj(x) - if masked_tokens is not None: - x = x[masked_tokens] - return (x,) - - def max_positions(self): - return 1024 - - def get_normalized_probs(self, net_output, log_probs, sample=None): - logits = net_output[0].float() - if log_probs: - return F.log_softmax(logits, dim=-1) - else: - return F.softmax(logits, dim=-1) - - -@register_model_architecture("dummy_model", "dummy_model") -def base_architecture(args): - pass diff --git a/spaces/gradio/text_analysis/setup.sh b/spaces/gradio/text_analysis/setup.sh deleted file mode 100644 index ad8bb8ee847c128cbc233e57fa8f1b0d62c84d4e..0000000000000000000000000000000000000000 --- a/spaces/gradio/text_analysis/setup.sh +++ /dev/null @@ -1 +0,0 @@ -python -m spacy download en_core_web_sm \ No newline at end of file diff --git a/spaces/gulabpatel/Real-ESRGAN/README.md b/spaces/gulabpatel/Real-ESRGAN/README.md deleted file mode 100644 index 6abbb35c7143b84131bf42d3c4c1e88de0af2403..0000000000000000000000000000000000000000 --- a/spaces/gulabpatel/Real-ESRGAN/README.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -title: Real ESRGAN -emoji: 🏃 -colorFrom: blue -colorTo: blue -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ 
-Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/torch_utils/persistence.py b/spaces/gyugnsu/DragGan-Inversion/stylegan_human/torch_utils/persistence.py deleted file mode 100644 index 718e480d44479029d224be2629acda269070491f..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/torch_utils/persistence.py +++ /dev/null @@ -1,262 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Facilities for pickling Python code alongside other data. - -The pickled code is automatically imported into a separate Python module -during unpickling. This way, any previously exported pickles will remain -usable even if the original code is no longer available, or if the current -version of the code is not consistent with what was originally pickled.""" - -import sys -import pickle -import io -import inspect -import copy -import uuid -import types -import dnnlib - -# ---------------------------------------------------------------------------- - -_version = 6 # internal version number -_decorators = set() # {decorator_class, ...} -_import_hooks = [] # [hook_function, ...] -_module_to_src_dict = dict() # {module: src, ...} -_src_to_module_dict = dict() # {src: module, ...} - -# ---------------------------------------------------------------------------- - - -def persistent_class(orig_class): - r"""Class decorator that extends a given class to save its source code - when pickled. - - Example: - - from torch_utils import persistence - - @persistence.persistent_class - class MyNetwork(torch.nn.Module): - def __init__(self, num_inputs, num_outputs): - super().__init__() - self.fc = MyLayer(num_inputs, num_outputs) - ... - - @persistence.persistent_class - class MyLayer(torch.nn.Module): - ... - - When pickled, any instance of `MyNetwork` and `MyLayer` will save its - source code alongside other internal state (e.g., parameters, buffers, - and submodules). This way, any previously exported pickle will remain - usable even if the class definitions have been modified or are no - longer available. - - The decorator saves the source code of the entire Python module - containing the decorated class. It does *not* save the source code of - any imported modules. Thus, the imported modules must be available - during unpickling, also including `torch_utils.persistence` itself. - - It is ok to call functions defined in the same module from the - decorated class. However, if the decorated class depends on other - classes defined in the same module, they must be decorated as well. 
- This is illustrated in the above example in the case of `MyLayer`. - - It is also possible to employ the decorator just-in-time before - calling the constructor. For example: - - cls = MyLayer - if want_to_make_it_persistent: - cls = persistence.persistent_class(cls) - layer = cls(num_inputs, num_outputs) - - As an additional feature, the decorator also keeps track of the - arguments that were used to construct each instance of the decorated - class. The arguments can be queried via `obj.init_args` and - `obj.init_kwargs`, and they are automatically pickled alongside other - object state. A typical use case is to first unpickle a previous - instance of a persistent class, and then upgrade it to use the latest - version of the source code: - - with open('old_pickle.pkl', 'rb') as f: - old_net = pickle.load(f) - new_net = MyNetwork(*old_obj.init_args, **old_obj.init_kwargs) - misc.copy_params_and_buffers(old_net, new_net, require_all=True) - """ - assert isinstance(orig_class, type) - if is_persistent(orig_class): - return orig_class - - assert orig_class.__module__ in sys.modules - orig_module = sys.modules[orig_class.__module__] - orig_module_src = _module_to_src(orig_module) - - class Decorator(orig_class): - _orig_module_src = orig_module_src - _orig_class_name = orig_class.__name__ - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self._init_args = copy.deepcopy(args) - self._init_kwargs = copy.deepcopy(kwargs) - assert orig_class.__name__ in orig_module.__dict__ - _check_pickleable(self.__reduce__()) - - @property - def init_args(self): - return copy.deepcopy(self._init_args) - - @property - def init_kwargs(self): - return dnnlib.EasyDict(copy.deepcopy(self._init_kwargs)) - - def __reduce__(self): - fields = list(super().__reduce__()) - fields += [None] * max(3 - len(fields), 0) - if fields[0] is not _reconstruct_persistent_obj: - meta = dict(type='class', version=_version, module_src=self._orig_module_src, - class_name=self._orig_class_name, state=fields[2]) - fields[0] = _reconstruct_persistent_obj # reconstruct func - fields[1] = (meta,) # reconstruct args - fields[2] = None # state dict - return tuple(fields) - - Decorator.__name__ = orig_class.__name__ - _decorators.add(Decorator) - return Decorator - -# ---------------------------------------------------------------------------- - - -def is_persistent(obj): - r"""Test whether the given object or class is persistent, i.e., - whether it will save its source code when pickled. - """ - try: - if obj in _decorators: - return True - except TypeError: - pass - return type(obj) in _decorators # pylint: disable=unidiomatic-typecheck - -# ---------------------------------------------------------------------------- - - -def import_hook(hook): - r"""Register an import hook that is called whenever a persistent object - is being unpickled. A typical use case is to patch the pickled source - code to avoid errors and inconsistencies when the API of some imported - module has changed. - - The hook should have the following signature: - - hook(meta) -> modified meta - - `meta` is an instance of `dnnlib.EasyDict` with the following fields: - - type: Type of the persistent object, e.g. `'class'`. - version: Internal version number of `torch_utils.persistence`. - module_src Original source code of the Python module. - class_name: Class name in the original Python module. - state: Internal state of the object. 
- - Example: - - @persistence.import_hook - def wreck_my_network(meta): - if meta.class_name == 'MyNetwork': - print('MyNetwork is being imported. I will wreck it!') - meta.module_src = meta.module_src.replace("True", "False") - return meta - """ - assert callable(hook) - _import_hooks.append(hook) - -# ---------------------------------------------------------------------------- - - -def _reconstruct_persistent_obj(meta): - r"""Hook that is called internally by the `pickle` module to unpickle - a persistent object. - """ - meta = dnnlib.EasyDict(meta) - meta.state = dnnlib.EasyDict(meta.state) - for hook in _import_hooks: - meta = hook(meta) - assert meta is not None - - assert meta.version == _version - module = _src_to_module(meta.module_src) - - assert meta.type == 'class' - orig_class = module.__dict__[meta.class_name] - decorator_class = persistent_class(orig_class) - obj = decorator_class.__new__(decorator_class) - - setstate = getattr(obj, '__setstate__', None) - if callable(setstate): - setstate(meta.state) # pylint: disable=not-callable - else: - obj.__dict__.update(meta.state) - return obj - -# ---------------------------------------------------------------------------- - - -def _module_to_src(module): - r"""Query the source code of a given Python module. - """ - src = _module_to_src_dict.get(module, None) - if src is None: - src = inspect.getsource(module) - _module_to_src_dict[module] = src - _src_to_module_dict[src] = module - return src - - -def _src_to_module(src): - r"""Get or create a Python module for the given source code. - """ - module = _src_to_module_dict.get(src, None) - if module is None: - module_name = "_imported_module_" + uuid.uuid4().hex - module = types.ModuleType(module_name) - sys.modules[module_name] = module - _module_to_src_dict[module] = src - _src_to_module_dict[src] = module - exec(src, module.__dict__) # pylint: disable=exec-used - return module - -# ---------------------------------------------------------------------------- - - -def _check_pickleable(obj): - r"""Check that the given object is pickleable, raising an exception if - it is not. This function is expected to be considerably more efficient - than actually pickling the object. - """ - def recurse(obj): - if isinstance(obj, (list, tuple, set)): - return [recurse(x) for x in obj] - if isinstance(obj, dict): - return [[recurse(x), recurse(y)] for x, y in obj.items()] - if isinstance(obj, (str, int, float, bool, bytes, bytearray)): - return None # Python primitive types are pickleable. - if f'{type(obj).__module__}.{type(obj).__name__}' in ['numpy.ndarray', 'torch.Tensor']: - return None # NumPy arrays and PyTorch tensors are pickleable. - if is_persistent(obj): - # Persistent objects are pickleable, by virtue of the constructor check. 
- return None - return obj - with io.BytesIO() as f: - pickle.dump(recurse(obj), f) - -# ---------------------------------------------------------------------------- diff --git a/spaces/haakohu/deep_privacy2/dp2/data/datasets/__init__.py b/spaces/haakohu/deep_privacy2/dp2/data/datasets/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/hanan217/QQsign/README.md b/spaces/hanan217/QQsign/README.md deleted file mode 100644 index bd56881a2a7709591343e2f15af9a6a8133e115b..0000000000000000000000000000000000000000 --- a/spaces/hanan217/QQsign/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: QQsign -emoji: 🦀 -colorFrom: blue -colorTo: purple -sdk: docker -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/utils/big_model_loading.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/utils/big_model_loading.py deleted file mode 100644 index 8f322612300c7d735f85cf76687ab35e5eb38d20..0000000000000000000000000000000000000000 --- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/utils/big_model_loading.py +++ /dev/null @@ -1,80 +0,0 @@ -import numpy as np -import torch -import torch.nn as nn - -from collections import OrderedDict - - -def tf2th(conv_weights): - """Possibly convert HWIO to OIHW.""" - if conv_weights.ndim == 4: - conv_weights = conv_weights.transpose([3, 2, 0, 1]) - return torch.from_numpy(conv_weights) - - -def _rename_conv_weights_for_deformable_conv_layers(state_dict, cfg): - import re - layer_keys = sorted(state_dict.keys()) - for ix, stage_with_dcn in enumerate(cfg.MODEL.RESNETS.STAGE_WITH_DCN, 1): - if not stage_with_dcn: - continue - for old_key in layer_keys: - pattern = ".*block{}.*conv2.*".format(ix) - r = re.match(pattern, old_key) - if r is None: - continue - for param in ["weight", "bias"]: - if old_key.find(param) is -1: - continue - if 'unit01' in old_key: - continue - new_key = old_key.replace( - "conv2.{}".format(param), "conv2.conv.{}".format(param) - ) - print("pattern: {}, old_key: {}, new_key: {}".format( - pattern, old_key, new_key - )) - # Calculate SD conv weight - w = state_dict[old_key] - v, m = torch.var_mean(w, dim=[1, 2, 3], keepdim=True, unbiased=False) - w = (w - m) / torch.sqrt(v + 1e-10) - - state_dict[new_key] = w - del state_dict[old_key] - return state_dict - - -def load_big_format(cfg, f): - model = OrderedDict() - weights = np.load(f) - - cmap = {'a':1, 'b':2, 'c':3} - for key, val in weights.items(): - old_key = key.replace('resnet/', '') - if 'root_block' in old_key: - new_key = 'root.conv.weight' - elif '/proj/standardized_conv2d/kernel' in old_key: - key_pattern = old_key.replace('/proj/standardized_conv2d/kernel', '').replace('resnet/', '') - bname, uname, cidx = key_pattern.split('/') - new_key = '{}.downsample.{}.conv{}.weight'.format(bname,uname,cmap[cidx]) - elif '/standardized_conv2d/kernel' in old_key: - key_pattern = old_key.replace('/standardized_conv2d/kernel', '').replace('resnet/', '') - bname, uname, cidx = key_pattern.split('/') - new_key = '{}.{}.conv{}.weight'.format(bname,uname,cmap[cidx]) - elif '/group_norm/gamma' in old_key: - key_pattern = old_key.replace('/group_norm/gamma', '').replace('resnet/', '') - bname, uname, cidx = key_pattern.split('/') - new_key = '{}.{}.gn{}.weight'.format(bname,uname,cmap[cidx]) - elif '/group_norm/beta' in old_key: - 
key_pattern = old_key.replace('/group_norm/beta', '').replace('resnet/', '') - bname, uname, cidx = key_pattern.split('/') - new_key = '{}.{}.gn{}.bias'.format(bname,uname,cmap[cidx]) - else: - print('Unknown key {}'.format(old_key)) - continue - print('Map {} -> {}'.format(key, new_key)) - model[new_key] = tf2th(val) - - model = _rename_conv_weights_for_deformable_conv_layers(model, cfg) - - return dict(model=model) diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/utils/encoding.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/utils/encoding.py deleted file mode 100644 index e8654706c345e8a13219f2c8e4cfa7700f531612..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/utils/encoding.py +++ /dev/null @@ -1,188 +0,0 @@ -##+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ -## Created by: Hang Zhang -## ECE Department, Rutgers University -## Email: zhang.hang@rutgers.edu -## Copyright (c) 2017 -## -## This source code is licensed under the MIT-style license found in the -## LICENSE file in the root directory of this source tree -##+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ - -"""Encoding Data Parallel""" -import threading -import functools -import torch -from torch.autograd import Variable, Function -import torch.cuda.comm as comm -from torch.nn.parallel.data_parallel import DataParallel -from torch.nn.parallel.parallel_apply import get_a_var -from torch.nn.parallel._functions import ReduceAddCoalesced, Broadcast - -torch_ver = torch.__version__[:3] - -__all__ = ['allreduce', 'DataParallelModel', 'DataParallelCriterion', 'patch_replication_callback'] - -def allreduce(*inputs): - """Cross GPU all reduce autograd operation for calculate mean and - variance in SyncBN. - """ - return AllReduce.apply(*inputs) - -class AllReduce(Function): - @staticmethod - def forward(ctx, num_inputs, *inputs): - ctx.num_inputs = num_inputs - ctx.target_gpus = [inputs[i].get_device() for i in range(0, len(inputs), num_inputs)] - inputs = [inputs[i:i + num_inputs] - for i in range(0, len(inputs), num_inputs)] - # sort before reduce sum - inputs = sorted(inputs, key=lambda i: i[0].get_device()) - results = comm.reduce_add_coalesced(inputs, ctx.target_gpus[0]) - outputs = comm.broadcast_coalesced(results, ctx.target_gpus) - return tuple([t for tensors in outputs for t in tensors]) - - @staticmethod - def backward(ctx, *inputs): - inputs = [i.data for i in inputs] - inputs = [inputs[i:i + ctx.num_inputs] - for i in range(0, len(inputs), ctx.num_inputs)] - results = comm.reduce_add_coalesced(inputs, ctx.target_gpus[0]) - outputs = comm.broadcast_coalesced(results, ctx.target_gpus) - return (None,) + tuple([Variable(t) for tensors in outputs for t in tensors]) - -class Reduce(Function): - @staticmethod - def forward(ctx, *inputs): - ctx.target_gpus = [inputs[i].get_device() for i in range(len(inputs))] - inputs = sorted(inputs, key=lambda i: i.get_device()) - return comm.reduce_add(inputs) - - @staticmethod - def backward(ctx, gradOutput): - return Broadcast.apply(ctx.target_gpus, gradOutput) - - -class DataParallelModel(DataParallel): - """Implements data parallelism at the module level. - - This container parallelizes the application of the given module by - splitting the input across the specified devices by chunking in the - batch dimension. 
- In the forward pass, the module is replicated on each device, - and each replica handles a portion of the input. During the backwards pass, gradients from each replica are summed into the original module. - Note that the outputs are not gathered, please use compatible - :class:`encoding.parallel.DataParallelCriterion`. - - The batch size should be larger than the number of GPUs used. It should - also be an integer multiple of the number of GPUs so that each chunk is - the same size (so that each GPU processes the same number of samples). - - Args: - module: module to be parallelized - device_ids: CUDA devices (default: all devices) - - Reference: - Hang Zhang, Kristin Dana, Jianping Shi, Zhongyue Zhang, Xiaogang Wang, Ambrish Tyagi, - Amit Agrawal. “Context Encoding for Semantic Segmentation. - *The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018* - - Example:: - - >>> net = encoding.nn.DataParallelModel(model, device_ids=[0, 1, 2]) - >>> y = net(x) - """ - def gather(self, outputs, output_device): - return outputs - - def replicate(self, module, device_ids): - modules = super(DataParallelModel, self).replicate(module, device_ids) - return modules - - -class DataParallelCriterion(DataParallel): - """ - Calculate loss in multiple-GPUs, which balance the memory usage for - Semantic Segmentation. - - The targets are splitted across the specified devices by chunking in - the batch dimension. Please use together with :class:`encoding.parallel.DataParallelModel`. - - Reference: - Hang Zhang, Kristin Dana, Jianping Shi, Zhongyue Zhang, Xiaogang Wang, Ambrish Tyagi, - Amit Agrawal. “Context Encoding for Semantic Segmentation. - *The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018* - - Example:: - - >>> net = encoding.nn.DataParallelModel(model, device_ids=[0, 1, 2]) - >>> criterion = encoding.nn.DataParallelCriterion(criterion, device_ids=[0, 1, 2]) - >>> y = net(x) - >>> loss = criterion(y, target) - """ - def forward(self, inputs, *targets, **kwargs): - # input should be already scatterd - # scattering the targets instead - if not self.device_ids: - return self.module(inputs, *targets, **kwargs) - targets, kwargs = self.scatter(targets, kwargs, self.device_ids) - if len(self.device_ids) == 1: - return self.module(inputs, *targets[0], **kwargs[0]) - replicas = self.replicate(self.module, self.device_ids[:len(inputs)]) - outputs = _criterion_parallel_apply(replicas, inputs, targets, kwargs) - return Reduce.apply(*outputs) / len(outputs) - - -def _criterion_parallel_apply(modules, inputs, targets, kwargs_tup=None, devices=None): - assert len(modules) == len(inputs) - assert len(targets) == len(inputs) - if kwargs_tup: - assert len(modules) == len(kwargs_tup) - else: - kwargs_tup = ({},) * len(modules) - if devices is not None: - assert len(modules) == len(devices) - else: - devices = [None] * len(modules) - - lock = threading.Lock() - results = {} - if torch_ver != "0.3": - grad_enabled = torch.is_grad_enabled() - - def _worker(i, module, input, target, kwargs, device=None): - if torch_ver != "0.3": - torch.set_grad_enabled(grad_enabled) - if device is None: - device = get_a_var(input).get_device() - try: - if not isinstance(input, tuple): - input = (input,) - with torch.cuda.device(device): - output = module(*(input + target), **kwargs) - with lock: - results[i] = output - except Exception as e: - with lock: - results[i] = e - - if len(modules) > 1: - threads = [threading.Thread(target=_worker, - args=(i, module, input, target, - kwargs, device),) - 
for i, (module, input, target, kwargs, device) in - enumerate(zip(modules, inputs, targets, kwargs_tup, devices))] - - for thread in threads: - thread.start() - for thread in threads: - thread.join() - else: - _worker(0, modules[0], inputs[0], kwargs_tup[0], devices[0]) - - outputs = [] - for i in range(len(inputs)): - output = results[i] - if isinstance(output, Exception): - raise output - outputs.append(output) - return outputs diff --git a/spaces/hekbobo/bingo/src/components/chat-list.tsx b/spaces/hekbobo/bingo/src/components/chat-list.tsx deleted file mode 100644 index 624a78ef0d7be0f1192cf02a81e2e9cf214cb193..0000000000000000000000000000000000000000 --- a/spaces/hekbobo/bingo/src/components/chat-list.tsx +++ /dev/null @@ -1,28 +0,0 @@ -import React from 'react' - -import { Separator } from '@/components/ui/separator' -import { ChatMessage } from '@/components/chat-message' -import { ChatMessageModel } from '@/lib/bots/bing/types' - -export interface ChatList { - messages: ChatMessageModel[] -} - -export function ChatList({ messages }: ChatList) { - if (!messages.length) { - return null - } - - return ( -
            - {messages.map((message, index) => ( - <div key={index}> - <ChatMessage message={message} /> - {index < messages.length - 1 && ( - <Separator /> - )} - </div> - ))} -
            - ) -} diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/dataset_conversion/Task069_CovidSeg.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/dataset_conversion/Task069_CovidSeg.py deleted file mode 100644 index 73e26f9764984423ae4634d6f394bb677489c63d..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/dataset_conversion/Task069_CovidSeg.py +++ /dev/null @@ -1,68 +0,0 @@ -import shutil - -from batchgenerators.utilities.file_and_folder_operations import * -import SimpleITK as sitk -from nnunet.paths import nnUNet_raw_data - -if __name__ == '__main__': - #data is available at http://medicalsegmentation.com/covid19/ - download_dir = '/home/fabian/Downloads' - - task_id = 69 - task_name = "CovidSeg" - - foldername = "Task%03.0d_%s" % (task_id, task_name) - - out_base = join(nnUNet_raw_data, foldername) - imagestr = join(out_base, "imagesTr") - imagests = join(out_base, "imagesTs") - labelstr = join(out_base, "labelsTr") - maybe_mkdir_p(imagestr) - maybe_mkdir_p(imagests) - maybe_mkdir_p(labelstr) - - train_patient_names = [] - test_patient_names = [] - - # the niftis are 3d, but they are just stacks of 2d slices from different patients. So no 3d U-Net, please - - # the training stack has 100 slices, so we split it into 5 equally sized parts (20 slices each) for cross-validation - training_data = sitk.GetArrayFromImage(sitk.ReadImage(join(download_dir, 'tr_im.nii.gz'))) - training_labels = sitk.GetArrayFromImage(sitk.ReadImage(join(download_dir, 'tr_mask.nii.gz'))) - - for f in range(5): - this_name = 'part_%d' % f - data = training_data[f::5] - labels = training_labels[f::5] - sitk.WriteImage(sitk.GetImageFromArray(data), join(imagestr, this_name + '_0000.nii.gz')) - sitk.WriteImage(sitk.GetImageFromArray(labels), join(labelstr, this_name + '.nii.gz')) - train_patient_names.append(this_name) - - shutil.copy(join(download_dir, 'val_im.nii.gz'), join(imagests, 'val_im.nii.gz')) - - test_patient_names.append('val_im') - - json_dict = {} - json_dict['name'] = task_name - json_dict['description'] = "" - json_dict['tensorImageSize'] = "4D" - json_dict['reference'] = "" - json_dict['licence'] = "" - json_dict['release'] = "0.0" - json_dict['modality'] = { - "0": "nonct", - } - json_dict['labels'] = { - "0": "background", - "1": "stuff1", - "2": "stuff2", - "3": "stuff3", - } - - json_dict['numTraining'] = len(train_patient_names) - json_dict['numTest'] = len(test_patient_names) - json_dict['training'] = [{'image': "./imagesTr/%s.nii.gz" % i.split("/")[-1], "label": "./labelsTr/%s.nii.gz" % i.split("/")[-1]} for i in - train_patient_names] - json_dict['test'] = ["./imagesTs/%s.nii.gz" % i.split("/")[-1] for i in test_patient_names] - - save_json(json_dict, os.path.join(out_base, "dataset.json")) diff --git a/spaces/huazhao/DeepDanbooru_string/README.md b/spaces/huazhao/DeepDanbooru_string/README.md deleted file mode 100644 index 4330b6f969246dc764a34ea254d2e807159f1c55..0000000000000000000000000000000000000000 --- a/spaces/huazhao/DeepDanbooru_string/README.md +++ /dev/null @@ -1,39 +0,0 @@ ---- -title: DeepDanbooru String -emoji: 💬 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false -duplicated_from: NoCrypt/DeepDanbooru_string ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) 
- -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/huggingface-projects/stable-diffusion-multiplayer/stablediffusion-infinity/PyPatchMatch/README.md b/spaces/huggingface-projects/stable-diffusion-multiplayer/stablediffusion-infinity/PyPatchMatch/README.md deleted file mode 100644 index 12b49aadadfe0ff51c2873b2671c0ca020bc3506..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/stable-diffusion-multiplayer/stablediffusion-infinity/PyPatchMatch/README.md +++ /dev/null @@ -1,64 +0,0 @@ -PatchMatch based Inpainting -===================================== -This library implements the PatchMatch based inpainting algorithm. It provides both C++ and Python interfaces. -This implementation is heavily based on the implementation by Younesse ANDAM: -(younesse-cv/PatchMatch)[https://github.com/younesse-cv/PatchMatch], with some bugs fix. - -Usage -------------------------------------- - -You need to first install OpenCV to compile the C++ libraries. Then, run `make` to compile the -shared library `libpatchmatch.so`. - -For Python users (example available at `examples/py_example.py`) - -```python -import patch_match - -image = ... # either a numpy ndarray or a PIL Image object. -mask = ... # either a numpy ndarray or a PIL Image object. -result = patch_match.inpaint(image, mask, patch_size=5) -``` - -For C++ users (examples available at `examples/cpp_example.cpp`) - -```cpp -#include "inpaint.h" - -int main() { - cv::Mat image = ... - cv::Mat mask = ... - - cv::Mat result = Inpainting(image, mask, 5).run(); - - return 0; -} -``` - - -README and COPYRIGHT by Younesse ANDAM -------------------------------------- -@Author: Younesse ANDAM - -@Contact: younesse.andam@gmail.com - -Description: This project is a personal implementation of an algorithm called PATCHMATCH that restores missing areas in an image. -The algorithm is presented in the following paper - PatchMatch A Randomized Correspondence Algorithm - for Structural Image Editing - by C.Barnes,E.Shechtman,A.Finkelstein and Dan B.Goldman - ACM Transactions on Graphics (Proc. SIGGRAPH), vol.28, aug-2009 - - For more information please refer to - http://www.cs.princeton.edu/gfx/pubs/Barnes_2009_PAR/index.php - -Copyright (c) 2010-2011 - - -Requirements -------------------------------------- - -To run the project you need to install Opencv library and link it to your project. 
-Opencv can be download it here -http://opencv.org/downloads.html - diff --git a/spaces/huggingface/Model_Cards_Writing_Tool/language_model_template1.md b/spaces/huggingface/Model_Cards_Writing_Tool/language_model_template1.md deleted file mode 100644 index e333507f4ecce5d91b5aea9b24dc9f092ddfebd3..0000000000000000000000000000000000000000 --- a/spaces/huggingface/Model_Cards_Writing_Tool/language_model_template1.md +++ /dev/null @@ -1,329 +0,0 @@ ---- -{{card_data}} ---- - -{% set lm_task_entries = { - 'text-generation': { - 'direct_use': "The model can be used for text generation.", - 'downstream_use': "To learn more about this task and potential downstream uses, see the Hugging Face [text generation docs](https://huggingface.co/tasks/text-generation)", - 'misuse': "The model was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model." - }, - 'question-answering': { - 'direct_use': "The model can be used for question answering.", - 'downstream_use': "Potential types of question answering include extractive QA, open generative QA, and closed generative QA. To learn more about this task and potential downstream uses, see the Hugging Face [question answering docs](https://huggingface.co/tasks/question-answering)", - 'misuse': "The model was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model." - }, - 'fill-mask': { - 'direct_use': "The model can be used for masked language modeling.", - 'downstream_use': "Masked language modeling are sometimes used to train large models for domain-specific problems. To learn more about this task and potential downstream uses, see the Hugging Face [fill mask docs](https://huggingface.co/tasks/fill-mask)", - 'misuse': "The model was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model." - }, - 'sentence_similarity': { - 'direct_use': "The model can be used for sentence similarity, the task of determining how similar two texts are.", - 'downstream_use': "Potential downstream use cases may include information retreival and clustering or grouping. To learn more about sentence similarity and potential downstream uses, see the Hugging Face [sentence similarity docs](https://huggingface.co/tasks/sentence-similarity)", - 'misuse': "" - }, - 'summarization': { - 'direct_use': "The model can be used for summarization.", - 'downstream_use': "To learn more about summarization and potential downstream uses, see the Hugging Face [summarization docs](https://huggingface.co/tasks/summarization).", - 'misuse': "The model was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model." - }, - 'text_classification': { - 'direct_use': "The model can be used for text classification, the task of assigning a label or class to a given text.", - 'downstream_use': "Potential downstream use cases include sentiment analysis, natural language inference, and assessing grammatical correctness. 
To learn more about text classification and other potential downstream uses, see the Hugging Face [text classification docs](https://huggingface.co/tasks/text-classification).", - 'misuse': "" - }, - 'token_classification': { - 'direct_use': "The model can be used for token classification, a natural language understanding task in which a label is assigned to some tokens in a text.", - 'downstream_use': "Potential downstream use cases include Named Entity Recognition (NER) and Part-of-Speech (PoS) tagging. To learn more about token classification and other potential downstream use cases, see the Hugging Face [token classification docs](https://huggingface.co/tasks/token-classification).", - 'misuse': "" - }, - 'translation': { - 'direct_use': "The model can be used for translation, the task of converting text from one language to another.", - 'downstream_use': "Potential downstream use cases include use cases that leverage conversational agents across different languages. To learn more about translation and other potential downstream use cases, see the Hugging Face [translation docs](https://huggingface.co/tasks/translation).", - 'misuse': "" - }, -} %} - -{% set task_list = [ - 'text_generation', - 'question_answering', - 'fill_mask', - 'sentence_similarity', - 'summarization', - 'text_classification', - 'token_classification', - 'translation' -] %} - - -# Model Card for {{ model_id }} - - -{{ the_model_description }} - -{% if model_card_user == "policymaker" %} -
            - Click to expand policymaker version of model card - -# Table of Contents - -1. [Model Details](#model-details) -2. [Uses](#uses) -3. [Bias, Risks, and Limitations](#bias-risks-and-limitations) -4. [Model Examination](#model-examination) -5. [Environmental Impact](#environmental-impact) -6. [Citation](#citation) -7. [Glossary](#glossary-optional) -8. [More Information](#more-information-optional) -9. [Model Card Authors](#model-card-authors-optional) -10. [Model Card Contact](#model-card-contact) - -
            - -{% endif %} - - -# Table of Contents - -- [Model Card for {{ model_id }}](#model-card-for--model_id-) -- [Table of Contents](#table-of-contents) -- [Table of Contents](#table-of-contents-1) -- [Model Details](#model-details) - - [Model Description](#model-description) -- [Uses](#uses) - - [Direct Use](#direct-use) - - [Downstream Use [Optional]](#downstream-use-optional) - - [Out-of-Scope Use](#out-of-scope-use) -- [Bias, Risks, and Limitations](#bias-risks-and-limitations) - - [Recommendations](#recommendations) -- [Training Details](#training-details) - - [Training Data](#training-data) - - [Training Procedure](#training-procedure) - - [Preprocessing](#preprocessing) - - [Speeds, Sizes, Times](#speeds-sizes-times) -- [Evaluation](#evaluation) - - [Testing Data, Factors & Metrics](#testing-data-factors--metrics) - - [Testing Data](#testing-data) - - [Factors](#factors) - - [Metrics](#metrics) - - [Results](#results) -- [Model Examination](#model-examination) -- [Environmental Impact](#environmental-impact) -- [Technical Specifications [optional]](#technical-specifications-optional) - - [Model Architecture and Objective](#model-architecture-and-objective) - - [Compute Infrastructure](#compute-infrastructure) - - [Hardware](#hardware) - - [Software](#software) -- [Citation](#citation) -- [Glossary [optional]](#glossary-optional) -- [More Information [optional]](#more-information-optional) -- [Model Card Authors [optional]](#model-card-authors-optional) -- [Model Card Contact](#model-card-contact) -- [How to Get Started with the Model](#how-to-get-started-with-the-model) - - -# Model Details - -## Model Description - - -{{ the_model_description }} - -- **Developed by:** {{ developers | join(', ') | default("More information needed", true)}} -- **Shared by [Optional]:** {{ shared_by | join(', ') | default("More information needed", true)}} -- **Model type:** {{ model_type | default("Language model", true)}} -- **Language(s) (NLP):** {{ language | join(', ') | default("More information needed", true)}} -- **License:** {{ model_license | default("More information needed", true)}} -- **Parent Model:** {{ " [Parent Model]({0})".format(repo_link) if parent_model_link else "More information needed"}} -- **Resources for more information:** {{ more_resources | default("More information needed", true)}} -{{ " - [GitHub Repo]({0})".format(repo_link) if repo_link}} -{{ " - [Associated Paper]({0})".format(paper_link) if paper_link }} - -# Uses - - - -## Direct Use - - - -{% if direct_use is defined %} -{{ direct_use }} -{% elif model_task in task_list %} -{{ lm_task_entries[model_task]['direct_use'] }} -{% else %} -More information needed. -{% endif %} - -## Downstream Use [Optional] - - - -{% if downstream_use is defined %} -{{ downstream_use }} -{% elif model_task in task_list %} -{{ lm_task_entries[model_task]['downstream_use'] }} -{% else %} -More information needed. -{% endif %} - -## Out-of-Scope Use - - - -{% if out_of_scope_use is defined %} -{{ out_of_scope_use }} -{% elif model_task in task_list %} -The model should not be used to intentionally create hostile or alienating environments for people. {{ lm_task_entries[model_task]['misuse'] }} -{% else %} -More information needed. -{% endif %} - -# Bias, Risks, and Limitations - - -{% if bias_risks_limiations is defined %} -{{ bias_risks_limitations }} -{% else %} -Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. 
(2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. -{% endif %} - -## Recommendations - - - -{% if bias_recommendations is defined %} -{{ bias_recommendations }} -{% else %} -Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recomendations. -{% endif %} - -# Training Details - -## Training Data - - - -{{ training_data | default("More information on training data needed", true)}} -{{ "See the associated [dataset card]({0}) for further details.".format(training_datacard_link) if training_data_card_link }} - -## Training Procedure - - - -### Preprocessing - -{{ preprocessing | default("More information needed", true)}} - -### Speeds, Sizes, Times - - - -{{ speeds_sizes_times | default("More information needed", true)}} - -# Evaluation - - - -## Testing Data, Factors & Metrics - -### Testing Data - - - -{{ testing_data | default("More information needed", true)}} -{{ "See the associated [dataset card]({0}) for further details.".format(testing_datacard_link) if testing_data_card_link }} - -### Factors - - - -{{ testing_factors | default("More information needed", true)}} - -### Metrics - - - -{{ testing_metrics | default("More information needed", true)}} - -## Results - -{{ results | default("More information needed", true)}} - -# Model Examination - -{{ model_examination | default("More information needed", true)}} - -# Environmental Impact - - - -Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - -- **Hardware Type:** {{ hardware | default("More information needed", true)}} -- **Hours used:** {{ hours_used | default("More information needed", true)}} -- **Cloud Provider:** {{ cloud_provider | default("More information needed", true)}} -- **Compute Region:** {{ cloud_region | default("More information needed", true)}} -- **Carbon Emitted:** {{ co2_emitted | default("More information needed", true)}} - -# Technical Specifications [optional] - -## Model Architecture and Objective - -{{ model_specs | default("More information needed", true)}} - -## Compute Infrastructure - -{{ compute_infrastructure | default("More information needed", true)}} - -### Hardware - -{{ hardware | default("More information needed", true)}} - -### Software - -{{ software | default("More information needed", true)}} - -# Citation - - - -**BibTeX:** - -{{ citation_bibtex | default("More information needed", true)}} - -**APA:** - -{{ citation_apa | default("More information needed", true)}} - -# Glossary [optional] - - - -{{ glossary | default("More information needed", true)}} - -# More Information [optional] - -{{ more_information | default("More information needed", true)}} - -# Model Card Authors [optional] - - - -{{ model_card_authors | join(', ') | default("More information needed", true)}} - -# Model Card Contact - -{{ model_card_contact | join(', ') | default("More information needed", true)}} - -# How to Get Started with the Model - -Use the code below to get started with the model. - -
            - Click to expand - -{{ get_started_code | default("More information needed", true)}} - -
            diff --git a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/wf42m_pfc02_16gpus_r100.py b/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/wf42m_pfc02_16gpus_r100.py deleted file mode 100644 index 035684732003b5c7b8fe8ea34e097bd22fbcca37..0000000000000000000000000000000000000000 --- a/spaces/hyxue/HiFiFace-inference-demo/arcface_torch/configs/wf42m_pfc02_16gpus_r100.py +++ /dev/null @@ -1,27 +0,0 @@ -from easydict import EasyDict as edict - -# make training faster -# our RAM is 256G -# mount -t tmpfs -o size=140G tmpfs /train_tmp - -config = edict() -config.margin_list = (1.0, 0.0, 0.4) -config.network = "r100" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 0.2 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 5e-4 -config.batch_size = 256 -config.lr = 0.3 -config.verbose = 2000 -config.dali = False - -config.rec = "/train_tmp/WebFace42M" -config.num_classes = 2059906 -config.num_image = 42474557 -config.num_epoch = 20 -config.warmup_epoch = 1 -config.val_targets = ["lfw", "cfp_fp", "agedb_30"] diff --git a/spaces/ifire/Architext_deployed/app.py b/spaces/ifire/Architext_deployed/app.py deleted file mode 100644 index 82140e873316aed9e8ef7010fcaf28c5a4892273..0000000000000000000000000000000000000000 --- a/spaces/ifire/Architext_deployed/app.py +++ /dev/null @@ -1,314 +0,0 @@ -from pathlib import Path -from num2words import num2words -import numpy as np -import os -import random -import re -import torch -import json -from shapely.geometry.polygon import Polygon -from shapely.affinity import scale -from PIL import Image, ImageDraw, ImageOps, ImageFilter, ImageFont, ImageColor - -os.system('pip3 install gradio==3.14.0') -import gradio as gr - -from transformers import AutoTokenizer, AutoConfig, AutoModelForCausalLM -from transformers.models.gptj.modeling_gptj import apply_rotary_pos_emb as apply_rotary_pos_emb_pt - -tokenizer = AutoTokenizer.from_pretrained("architext/gptj-162M") -finetuned = AutoModelForCausalLM.from_pretrained("architext/gptj-162M") - -device = "cuda:0" if torch.cuda.is_available() else "cpu" -print(device) -finetuned = finetuned.to(device) - -# Utility functions - -def containsNumber(value): - for character in value: - if character.isdigit(): - return True - return False - -def creativity(intensity): - if(intensity == 'Low'): - top_p = 0.95 - top_k = 10 - elif(intensity == 'Medium'): - top_p = 0.9 - top_k = 50 - if(intensity == 'High'): - top_p = 0.85 - top_k = 100 - return top_p, top_k - -housegan_labels = {"living_room": 1, "kitchen": 2, "bedroom": 3, "bathroom": 4, "missing": 5, "closet": 6, - "balcony": 7, "hallway": 8, "dining_room": 9, "laundry_room": 10, "corridor": 8} -architext_colors = [[0, 0, 0], [249, 222, 182], [195, 209, 217], [250, 120, 128], [126, 202, 234], [190, 0, 198], [255, 255, 255], - [6, 53, 17], [17, 33, 58], [132, 151, 246], [197, 203, 159], [6, 53, 17],] - -regex = re.compile(".*?\((.*?)\)") - -def draw_polygons(polygons, colors, im_size=(512, 512), b_color="white", fpath=None): - image = Image.new("RGBA", im_size, color="white") - draw = ImageDraw.Draw(image) - for poly, color, in zip(polygons, colors): - #get initial polygon coordinates - xy = poly.exterior.xy - coords = np.dstack((xy[1], xy[0])).flatten() - # draw it on canvas, with the appropriate colors - draw.polygon(list(coords), fill=(0, 0, 0)) - #get inner polygon coordinates - small_poly = poly.buffer(-1, resolution=32, cap_style=2, join_style=2, mitre_limit=5.0) - if small_poly.geom_type == 
'MultiPolygon': - mycoordslist = [list(x.exterior.coords) for x in small_poly] - for coord in mycoordslist: - coords = np.dstack((np.array(coord)[:,1], np.array(coord)[:, 0])).flatten() - draw.polygon(list(coords), fill=tuple(color)) - elif poly.geom_type == 'Polygon': - #get inner polygon coordinates - xy2 = small_poly.exterior.xy - coords2 = np.dstack((xy2[1], xy2[0])).flatten() - # draw it on canvas, with the appropriate colors - draw.polygon(list(coords2), fill=tuple(color)) - image = image.transpose(Image.FLIP_TOP_BOTTOM) - if(fpath): - image.save(fpath, quality=100, subsampling=0) - return draw, image - -def prompt_to_layout(user_prompt, intensity, fpath=None): - if(containsNumber(user_prompt) == True): - spaced_prompt = user_prompt.split(' ') - new_prompt = ' '.join([word if word.isdigit() == False else num2words(int(word)).lower() for word in spaced_prompt]) - model_prompt = '[User prompt] Hallways are adjacent to bedrooms. {} [Layout]'.format(new_prompt) - top_p, top_k = creativity(intensity) - model_prompt = '[User prompt] {} [Layout]'.format(user_prompt) - input_ids = tokenizer(model_prompt, return_tensors='pt').to(device) - output = finetuned.generate(**input_ids, do_sample=True, top_p=top_p, top_k=top_k, - eos_token_id=50256, max_length=400) - output = tokenizer.batch_decode(output, skip_special_tokens=True) - layout = output[0].split('[User prompt]')[1].split('[Layout] ')[1].split(', ') - spaces = [txt.split(':')[0] for txt in layout] - coords = [] - for txt in layout: - if ':' in txt: - split_txt = txt.split(':') - coords.append(split_txt[1].rstrip()) - coordinates = [re.findall(regex, coord) for coord in coords] - # Initialize an empty list to store the numerical coordinates - num_coords = [] - # Iterate over each coordinate in the coordinates list - for coord in coordinates: - temp = [] # Temporary list to store the cleaned numbers - # Split the coordinate into individual numbers - for xy in coord: - numbers = xy.split(',') - # Clean each number and convert it to an integer - for num in numbers: - clean_num = re.sub(r'^\D*|\D*$', '', num) # Remove non-digit characters - # Check if the cleaned number is a digit - if clean_num.isdigit(): - # Convert the cleaned number to an integer and divide it by 14.2 - # If division by zero occurs, skip this number - try: - temp.append(int(clean_num)/14.2) - except ZeroDivisionError: - continue # Skip this number and continue with the next one - - # Append the temporary list to the num_coords list - num_coords.append(temp) - - - new_spaces = [] - for i, v in enumerate(spaces): - totalcount = spaces.count(v) - count = spaces[:i].count(v) - new_spaces.append(v + str(count + 1) if totalcount > 1 else v) - - out_dict = dict(zip(new_spaces, num_coords)) - out_dict = json.dumps(out_dict) - - polygons = [] - for coord in coordinates: - polygons.append([point.split(',') for point in coord]) - geom = [] - for poly in polygons: - new_poly = [list(map(int, point)) for point in poly] - if len(new_poly) >= 4: - scaled_poly = scale(Polygon(new_poly), xfact=2, yfact=2, origin=(0,0)) - geom.append(scaled_poly) - colors: List[int] = [] - for space in spaces: - for key in housegan_labels.keys(): - if key in space: - colors.append(architext_colors[housegan_labels[key]]) - break - _, im = draw_polygons(geom, colors, fpath=fpath) - html = '' - legend = Image.open("labels.png") - imgs_comb = np.vstack([im, legend]) - imgs_comb = Image.fromarray(imgs_comb) - return imgs_comb, out_dict - - -# Gradio App - -custom_css=""" -@import 
url("https://use.typekit.net/nid3pfr.css"); -.gradio_wrapper .gradio_bg[is_embedded=false] { - min-height: 80%; -} - -.gradio_wrapper .gradio_bg[is_embedded=false] .gradio_page { - display: flex; - width: 100vw; - min-height: 50vh; - flex-direction: column; - justify-content: center; - align-items: center; - margin: 0px; - max-width: 100vw; - background: #FFFFFF; -} - -.gradio_wrapper .gradio_bg[is_embedded=false] .gradio_page .content { - padding: 0px; - margin: 0px; -} - -.gradio_interface { - width: 100vw; - max-width: 1500px; -} - -.gradio_interface .panel:nth-child(2) .component:nth-child(3) { - display:none -} - -.gradio_wrapper .gradio_bg[theme=default] .panel_buttons { - justify-content: flex-end; -} - -.gradio_wrapper .gradio_bg[theme=default] .panel_button { - flex: 0 0 0; - min-width: 150px; -} - -.gradio_wrapper .gradio_bg[theme=default] .gradio_interface .panel_button.submit { - background: #11213A; - border-radius: 5px; - color: #FFFFFF; - text-transform: uppercase; - min-width: 150px; - height: 4em; - letter-spacing: 0.15em; - flex: 0 0 0; -} -.gradio_wrapper .gradio_bg[theme=default] .gradio_interface .panel_button.submit:hover { - background: #000000; -} - -.input_text:focus { - border-color: #FA7880; -} -.gradio_wrapper .gradio_bg[theme=default] .gradio_interface .input_text input, -.gradio_wrapper .gradio_bg[theme=default] .gradio_interface .input_text textarea { - font: 200 45px garamond-premier-pro-display, serif; - line-height: 110%; - color: #11213A; - border-radius: 5px; - padding: 15px; - border: none; - background: #F2F4F4; -} -.input_text textarea:focus-visible { - outline: none; -} -.gradio_wrapper .gradio_bg[theme=default] .gradio_interface .input_radio .radio_item.selected { - background-color: #11213A; -} -.gradio_wrapper .gradio_bg[theme=default] .gradio_interface .input_radio .selected .radio_circle { - border-color: #4365c4; -} -.gradio_wrapper .gradio_bg[theme=default] .gradio_interface .output_image { - width: 100%; - height: 40vw; - max-height: 630px; -} -.gradio_wrapper .gradio_bg[theme=default] .gradio_interface .output_image .image_preview_holder { - background: transparent; -} -.panel:nth-child(1) { - margin-left: 50px; - margin-right: 50px; - margin-bottom: 80px; - max-width: 750px; -} -.panel { - background: transparent; -} -.gradio_wrapper .gradio_bg[theme=default] .gradio_interface .component_set { - background: transparent; - box-shadow: none; -} -.panel:nth-child(2) .gradio_wrapper .gradio_bg[theme=default] .gradio_interface .panel_header { - display: none; -} - -.gradio_wrapper .gradio_bg[is_embedded=false] .gradio_page .footer { - transform: scale(0.75); - filter: grayscale(1); -} - -.labels { - height: 20px; - width: auto; -} - -@media (max-width: 1000px){ - .panel:nth-child(1) { - margin-left: 0px; - margin-right: 0px; - } - .gradio_wrapper .gradio_bg[theme=default] .gradio_interface .output_image { - height: auto; - } -} -""" -creative_slider = gr.inputs.Radio(["Low", "Medium", "High"], default="Low", label='Creativity') -textbox = gr.inputs.Textbox(placeholder='An apartment with two bedrooms and one bathroom', lines="3", - label="DESCRIBE YOUR IDEAL APARTMENT") -generated = gr.outputs.Image(label='Generated Layout', type='numpy') -layout = gr.outputs.Textbox(label='Layout Coordinates') - -examples = [ - ["two bedrooms and two bathrooms", "Low"], - ["three bedrooms with a kitchen adjacent to the dining room", "Medium"] -] - - -def retry_prompt_to_layout(user_prompt, intensity, fpath=None): - max_attempts = 5 - attempts = 0 - - while 
attempts < max_attempts: - try: - # Call the original function - result = prompt_to_layout(user_prompt, intensity, fpath) - return result - except Exception as e: - print(f"Attempt {attempts+1} failed with error: {e}") - attempts += 1 - - -iface = gr.Interface(fn=retry_prompt_to_layout, inputs=[textbox, creative_slider], - outputs=[generated, layout], - css=custom_css, - theme="default", - allow_flagging='never', - allow_screenshot=False, - thumbnail="thumbnail_gradio.PNG", - examples=examples) - -iface.launch(enable_queue=True) \ No newline at end of file diff --git a/spaces/imperialwool/funapi/README.md b/spaces/imperialwool/funapi/README.md deleted file mode 100644 index 884aacb5cf70465515ead72135df26056cb7cae8..0000000000000000000000000000000000000000 --- a/spaces/imperialwool/funapi/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: funapi -emoji: 🚀 -colorFrom: blue -colorTo: indigo -sdk: docker -pinned: true ---- - -Public API for devs. \ No newline at end of file diff --git a/spaces/impulsewu/Real-CUGAN/app.py b/spaces/impulsewu/Real-CUGAN/app.py deleted file mode 100644 index 2439c5cec6b61e8a517f957daf710cbb6b5c3cf6..0000000000000000000000000000000000000000 --- a/spaces/impulsewu/Real-CUGAN/app.py +++ /dev/null @@ -1,62 +0,0 @@ -from upcunet_v3 import RealWaifuUpScaler -import gradio as gr -import time -import logging -import os -from PIL import ImageOps -import numpy as np -import math - - -def greet(input_img, input_model_name, input_tile_mode): - # if input_img.size[0] * input_img.size[1] > 256 * 256: - # y = int(math.sqrt(256*256/input_img.size[0]*input_img.size[1])) - # x = int(input_img.size[0]/input_img.size[1]*y) - # input_img = ImageOps.fit(input_img, (x, y)) - input_img = np.array(input_img) - if input_model_name not in model_cache: - t1 = time.time() - upscaler = RealWaifuUpScaler(input_model_name[2], ModelPath + input_model_name, half=False, device="cpu") - t2 = time.time() - logger.info(f'load model time, {t2 - t1}') - model_cache[input_model_name] = upscaler - else: - upscaler = model_cache[input_model_name] - logger.info(f'load model from cache') - - start = time.time() - result = upscaler(input_img, tile_mode=input_tile_mode) - end = time.time() - logger.info(f'input_model_name, {input_model_name}') - logger.info(f'input_tile_mode, {input_tile_mode}') - logger.info(f'input shape, {input_img.shape}') - logger.info(f'output shape, {result.shape}') - logger.info(f'speed time, {end - start}') - return result - - -if __name__ == '__main__': - logging.basicConfig(level=logging.INFO, format="[%(asctime)s] [%(process)d] [%(levelname)s] %(message)s") - logger = logging.getLogger() - - ModelPath = "weights_v3/" - model_cache = {} - - input_model_name = gr.inputs.Dropdown(os.listdir(ModelPath), default="up2x-latest-denoise2x.pth", label='选择model') - input_tile_mode = gr.inputs.Dropdown([0, 1, 2, 3, 4], default=2, label='选择tile_mode') - input_img = gr.inputs.Image(label='image', type='pil') - - inputs = [input_img, input_model_name, input_tile_mode] - outputs = "image" - iface = gr.Interface(fn=greet, - inputs=inputs, - outputs=outputs, - allow_screenshot=False, - allow_flagging='never', - examples=[['test-img.jpg', "up2x-latest-denoise2x.pth", 2]], - article='[https://github.com/bilibili/ailab/tree/main/Real-CUGAN](https://github.com/bilibili/ailab/tree/main/Real-CUGAN)
      ' - 'Thanks to Bilibili for open-sourcing this project. Oversized images can run out of memory, so I crop the image smaller; to see the effect on large images, please follow the link above.
      
            ' - '修改bbb' - 'The large image will lead to memory limit exceeded. So I crop and resize image. ' - 'If you want to experience the large image, please go to the link above.') - iface.launch() diff --git a/spaces/innnky/nyaru4.0/hubert/hubert_model_onnx.py b/spaces/innnky/nyaru4.0/hubert/hubert_model_onnx.py deleted file mode 100644 index d18f3c2a0fc29592a573a9780308d38f059640b9..0000000000000000000000000000000000000000 --- a/spaces/innnky/nyaru4.0/hubert/hubert_model_onnx.py +++ /dev/null @@ -1,217 +0,0 @@ -import copy -import random -from typing import Optional, Tuple - -import torch -import torch.nn as nn -import torch.nn.functional as t_func -from torch.nn.modules.utils import consume_prefix_in_state_dict_if_present - - -class Hubert(nn.Module): - def __init__(self, num_label_embeddings: int = 100, mask: bool = True): - super().__init__() - self._mask = mask - self.feature_extractor = FeatureExtractor() - self.feature_projection = FeatureProjection() - self.positional_embedding = PositionalConvEmbedding() - self.norm = nn.LayerNorm(768) - self.dropout = nn.Dropout(0.1) - self.encoder = TransformerEncoder( - nn.TransformerEncoderLayer( - 768, 12, 3072, activation="gelu", batch_first=True - ), - 12, - ) - self.proj = nn.Linear(768, 256) - - self.masked_spec_embed = nn.Parameter(torch.FloatTensor(768).uniform_()) - self.label_embedding = nn.Embedding(num_label_embeddings, 256) - - def mask(self, x: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: - mask = None - if self.training and self._mask: - mask = _compute_mask((x.size(0), x.size(1)), 0.8, 10, x.device, 2) - x[mask] = self.masked_spec_embed.to(x.dtype) - return x, mask - - def encode( - self, x: torch.Tensor, layer: Optional[int] = None - ) -> Tuple[torch.Tensor, torch.Tensor]: - x = self.feature_extractor(x) - x = self.feature_projection(x.transpose(1, 2)) - x, mask = self.mask(x) - x = x + self.positional_embedding(x) - x = self.dropout(self.norm(x)) - x = self.encoder(x, output_layer=layer) - return x, mask - - def logits(self, x: torch.Tensor) -> torch.Tensor: - logits = torch.cosine_similarity( - x.unsqueeze(2), - self.label_embedding.weight.unsqueeze(0).unsqueeze(0), - dim=-1, - ) - return logits / 0.1 - - -class HubertSoft(Hubert): - def __init__(self): - super().__init__() - - def units(self, wav: torch.Tensor) -> torch.Tensor: - wav = t_func.pad(wav, ((400 - 320) // 2, (400 - 320) // 2)) - x, _ = self.encode(wav) - return self.proj(x) - - def forward(self, x): - return self.units(x) - -class FeatureExtractor(nn.Module): - def __init__(self): - super().__init__() - self.conv0 = nn.Conv1d(1, 512, 10, 5, bias=False) - self.norm0 = nn.GroupNorm(512, 512) - self.conv1 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv2 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv3 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv4 = nn.Conv1d(512, 512, 3, 2, bias=False) - self.conv5 = nn.Conv1d(512, 512, 2, 2, bias=False) - self.conv6 = nn.Conv1d(512, 512, 2, 2, bias=False) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = t_func.gelu(self.norm0(self.conv0(x))) - x = t_func.gelu(self.conv1(x)) - x = t_func.gelu(self.conv2(x)) - x = t_func.gelu(self.conv3(x)) - x = t_func.gelu(self.conv4(x)) - x = t_func.gelu(self.conv5(x)) - x = t_func.gelu(self.conv6(x)) - return x - - -class FeatureProjection(nn.Module): - def __init__(self): - super().__init__() - self.norm = nn.LayerNorm(512) - self.projection = nn.Linear(512, 768) - self.dropout = nn.Dropout(0.1) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x 
= self.norm(x) - x = self.projection(x) - x = self.dropout(x) - return x - - -class PositionalConvEmbedding(nn.Module): - def __init__(self): - super().__init__() - self.conv = nn.Conv1d( - 768, - 768, - kernel_size=128, - padding=128 // 2, - groups=16, - ) - self.conv = nn.utils.weight_norm(self.conv, name="weight", dim=2) - - def forward(self, x: torch.Tensor) -> torch.Tensor: - x = self.conv(x.transpose(1, 2)) - x = t_func.gelu(x[:, :, :-1]) - return x.transpose(1, 2) - - -class TransformerEncoder(nn.Module): - def __init__( - self, encoder_layer: nn.TransformerEncoderLayer, num_layers: int - ) -> None: - super(TransformerEncoder, self).__init__() - self.layers = nn.ModuleList( - [copy.deepcopy(encoder_layer) for _ in range(num_layers)] - ) - self.num_layers = num_layers - - def forward( - self, - src: torch.Tensor, - mask: torch.Tensor = None, - src_key_padding_mask: torch.Tensor = None, - output_layer: Optional[int] = None, - ) -> torch.Tensor: - output = src - for layer in self.layers[:output_layer]: - output = layer( - output, src_mask=mask, src_key_padding_mask=src_key_padding_mask - ) - return output - - -def _compute_mask( - shape: Tuple[int, int], - mask_prob: float, - mask_length: int, - device: torch.device, - min_masks: int = 0, -) -> torch.Tensor: - batch_size, sequence_length = shape - - if mask_length < 1: - raise ValueError("`mask_length` has to be bigger than 0.") - - if mask_length > sequence_length: - raise ValueError( - f"`mask_length` has to be smaller than `sequence_length`, but got `mask_length`: {mask_length} and `sequence_length`: {sequence_length}`" - ) - - # compute number of masked spans in batch - num_masked_spans = int(mask_prob * sequence_length / mask_length + random.random()) - num_masked_spans = max(num_masked_spans, min_masks) - - # make sure num masked indices <= sequence_length - if num_masked_spans * mask_length > sequence_length: - num_masked_spans = sequence_length // mask_length - - # SpecAugment mask to fill - mask = torch.zeros((batch_size, sequence_length), device=device, dtype=torch.bool) - - # uniform distribution to sample from, make sure that offset samples are < sequence_length - uniform_dist = torch.ones( - (batch_size, sequence_length - (mask_length - 1)), device=device - ) - - # get random indices to mask - mask_indices = torch.multinomial(uniform_dist, num_masked_spans) - - # expand masked indices to masked spans - mask_indices = ( - mask_indices.unsqueeze(dim=-1) - .expand((batch_size, num_masked_spans, mask_length)) - .reshape(batch_size, num_masked_spans * mask_length) - ) - offsets = ( - torch.arange(mask_length, device=device)[None, None, :] - .expand((batch_size, num_masked_spans, mask_length)) - .reshape(batch_size, num_masked_spans * mask_length) - ) - mask_idxs = mask_indices + offsets - - # scatter indices to mask - mask = mask.scatter(1, mask_idxs, True) - - return mask - - -def hubert_soft( - path: str, -) -> HubertSoft: - r"""HuBERT-Soft from `"A Comparison of Discrete and Soft Speech Units for Improved Voice Conversion"`. 
- Args: - path (str): path of a pretrained model - """ - hubert = HubertSoft() - checkpoint = torch.load(path) - consume_prefix_in_state_dict_if_present(checkpoint, "module.") - hubert.load_state_dict(checkpoint) - hubert.eval() - return hubert diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Cubase764bitActivationcode2.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Cubase764bitActivationcode2.md deleted file mode 100644 index 32715868302176e14afa46cdda65caf15b8d2cf6..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Cubase764bitActivationcode2.md +++ /dev/null @@ -1,12 +0,0 @@ -

            Cubase764bitActivationcode2


            Download Zip ⚙⚙⚙ https://urlin.us/2uExMO



            - -Oct 2020 - - Jan 2022 - ... Autograss 103 For 3DSMAXrarrarCubase764bitActivationcode2Sons Of Ram Movie Download In 720p TorrentImvu Texture Extractor Full Version -. Dragons Rise Of Berk Download Free In 720p Torrent. -Racing Crashes And More At Saturdays Live Stream: The Road Rage Show. -Breachy Cops: The Road Rage Show. -Tuning And Repairing. -The Road Rage Show And More. -Tricks And Tips. -Racing Crashes And More At Saturdays Live Stream: The Road Rage Show. 8a78ff9644
            -
            -
            -

            diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Kabhi Khushi Kabhie Gham Part 1 Movie Download In Hindi.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Kabhi Khushi Kabhie Gham Part 1 Movie Download In Hindi.md deleted file mode 100644 index 6fbbc0aa38c38901c3085aeaeea3016d8cead52b..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Kabhi Khushi Kabhie Gham Part 1 Movie Download In Hindi.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Kabhi Khushi Kabhie Gham part 1 movie download in hindi


            Downloadhttps://urlin.us/2uEwnI



            -
            -Kabhi Khushi Kabhie Gham [HD] Hindi full Video The Movie 2020 | bahasa Indonesia ... Kabhi Khushi kabhie gham part 1. Kamal Mota. Download ... 1fdad05405
            -
            -
            -

            diff --git a/spaces/inreVtussa/clothingai/Examples/Cockos REAPER 5.977 [x86x64] BEST Keygen 21.4 MB.md b/spaces/inreVtussa/clothingai/Examples/Cockos REAPER 5.977 [x86x64] BEST Keygen 21.4 MB.md deleted file mode 100644 index ceed038d4db34172b44bf2e000539ec7117de636..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Cockos REAPER 5.977 [x86x64] BEST Keygen 21.4 MB.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Cockos REAPER 5.977 [x86x64] Keygen | 21.4 MB


            DOWNLOADhttps://tiurll.com/2uCkEs



            - - 1fdad05405
            -
            -
            -

            diff --git a/spaces/inreVtussa/clothingai/Examples/DUY Bundle 63 Native AU VST RTAS MAS Windows.md b/spaces/inreVtussa/clothingai/Examples/DUY Bundle 63 Native AU VST RTAS MAS Windows.md deleted file mode 100644 index d1ff08d10900bdd5e6d5e92bc409d33237267809..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/DUY Bundle 63 Native AU VST RTAS MAS Windows.md +++ /dev/null @@ -1,99 +0,0 @@ -
            -

            DUY Bundle 63 Native AU VST RTAS MAS Windows: A Complete Overview

            -

            If you are a music producer, composer, or sound engineer, you might have heard of DUY Bundle 63 Native AU VST RTAS MAS Windows. This is a collection of high-quality audio plugins that can enhance your sound production and creativity. In this article, we will give you a complete overview of what DUY Bundle 63 Native AU VST RTAS MAS Windows is, what it can do, and how to install and use it on your PC.

            -

            DUY Bundle 63 Native AU VST RTAS MAS windows


            Download File 🆓 https://tiurll.com/2uCksd



            -

            What is DUY Bundle 63 Native AU VST RTAS MAS Windows?

            -

            DUY Bundle 63 Native AU VST RTAS MAS Windows is a bundle of 63 audio plugins that are compatible with Windows operating systems. The plugins are designed to work with various audio formats and hosts, such as AU, VST, RTAS, and MAS. The plugins cover a wide range of audio processing functions, such as EQ, compression, reverb, delay, modulation, distortion, filtering, mastering, and more. The plugins are also optimized for low CPU usage and high performance.

            -

            What can DUY Bundle 63 Native AU VST RTAS MAS Windows do?

            -

            DUY Bundle 63 Native AU VST RTAS MAS Windows can help you achieve professional sound quality and creative results in your audio production. Whether you are working on music, film, TV, radio, or podcasting, you can find a plugin that suits your needs and preferences. Here are some examples of what DUY Bundle 63 Native AU VST RTAS MAS Windows can do:

            -
              -
            • Enhance the clarity, warmth, and punch of your mixes with DUY DaD Valve, DUY DaD Tape, DUY Wide, DUY Shape, and DUY MagicEQ.
            • -
            • Add depth, space, and dimension to your tracks with DUY Z-Room, DUY Dream Dynamics, DUY EverPack, and DUY Magic Spectrum.
            • -
            • Create rich and complex soundscapes with DUY DaD Watermarking System (DWMS), DUY SynthSpider, DUY GranuLab Pro (GPro), and DUY Magic Morph.
            • -
            • Add character and color to your sounds with DUY DaD Valve LE (VLE), DUY DaD Tape LE (TLE), DUY Analog Bundle (AB), and DUY Magic Spectralizer.
            • -
            • Master your final product with DUY MAX Duyimizer (MD), DUY Master Bundle (MB), DUY Essential Pack (EP), and DUY Magic Loudness.
            • -
            -

            How to install and use DUY Bundle 63 Native AU VST RTAS MAS Windows on your PC?

            -

            To install and use DUY Bundle 63 Native AU VST RTAS MAS Windows on your PC, you need to follow these steps:

            -
              -
            1. Download the installer from the official website or from the link provided below.
            2. -
            3. Run the installer and follow the instructions on the screen.
            4. -
            5. Select the plugins that you want to install and the formats that you want to use.
            6. -
            7. Activate the plugins with the serial number that you received after purchasing the bundle.
            8. -
            9. Launch your audio host application and scan for new plugins.
            10. -
            11. Load the plugins on your tracks or buses and adjust the parameters according to your taste.
            12. -
            -

            You can also access the user manuals and tutorials for each plugin from the installer or from the official website.

            -

            -

            Conclusion

            -

            DUY Bundle 63 Native AU VST RTAS MAS Windows is a powerful and versatile collection of audio plugins that can take your sound production to the next level. Whether you are a beginner or a professional, you can find a plugin that suits your needs and preferences. You can download the installer from the link below and start using the plugins today.

            -

            Download DUY Bundle 63 Native AU VST RTAS MAS Windows here

            -

            Why choose DUY Bundle 63 Native AU VST RTAS MAS Windows?

            -

            There are many reasons why you should choose DUY Bundle 63 Native AU VST RTAS MAS Windows for your audio production needs. Here are some of them:

            -
              -
            • DUY Bundle 63 Native AU VST RTAS MAS Windows offers you a wide range of audio plugins that can cover any sound processing function you need.
            • -
            • DUY Bundle 63 Native AU VST RTAS MAS Windows delivers high-quality sound and performance, thanks to its advanced algorithms and optimization techniques.
            • -
            • DUY Bundle 63 Native AU VST RTAS MAS Windows is compatible with most audio formats and hosts, such as AU, VST, RTAS, and MAS, so you can use it with your preferred software and hardware.
            • -
            • DUY Bundle 63 Native AU VST RTAS MAS Windows is easy to install and use, with a simple and intuitive interface and user manuals and tutorials for each plugin.
            • -
            • DUY Bundle 63 Native AU VST RTAS MAS Windows is affordable and cost-effective, as you get 63 plugins for the price of one.
            • -
            -

            How to get DUY Bundle 63 Native AU VST RTAS MAS Windows?

            -

            If you are interested in getting DUY Bundle 63 Native AU VST RTAS MAS Windows, you can do so by following these steps:

            -
              -
            1. Visit the official website of DUY or click on the link below to go to the product page.
            2. -
            3. Select the option to buy DUY Bundle 63 Native AU VST RTAS MAS Windows and proceed to checkout.
            4. -
            5. Enter your payment details and confirm your order.
            6. -
            7. Receive an email with your serial number and download link for the installer.
            8. -
            9. Download the installer and follow the instructions on how to install and activate the plugins.
            10. -
            11. Enjoy using DUY Bundle 63 Native AU VST RTAS MAS Windows on your PC.
            12. -
            -

            Buy DUY Bundle 63 Native AU VST RTAS MAS Windows here

            -

            What are the features and benefits of DUY Bundle 63 Native AU VST RTAS MAS Windows?

            -

            DUY Bundle 63 Native AU VST RTAS MAS Windows has many features and benefits that make it a valuable and reliable tool for audio production. Here are some of them:

            -
              -
            • DUY Bundle 63 Native AU VST RTAS MAS Windows has a user-friendly and intuitive interface that allows you to easily access and adjust the parameters of each plugin.
            • -
            • DUY Bundle 63 Native AU VST RTAS MAS Windows has a flexible and versatile design that allows you to customize and combine the plugins according to your needs and preferences.
            • -
            • DUY Bundle 63 Native AU VST RTAS MAS Windows has a high-quality and professional sound that can compete with the best audio plugins in the market.
            • -
            • DUY Bundle 63 Native AU VST RTAS MAS Windows has a low CPU usage and high performance that can handle complex and demanding audio projects.
            • -
            • DUY Bundle 63 Native AU VST RTAS MAS Windows has a lifetime license and free updates that ensure you always have the latest version of the plugins.
            • -
            -

            What are the reviews and testimonials of DUY Bundle 63 Native AU VST RTAS MAS Windows?

            -

            DUY Bundle 63 Native AU VST RTAS MAS Windows has received many positive reviews and testimonials from users and experts who have tried and tested the plugins. Here are some of them:

            -
            -

            "I have been using DUY Bundle 63 Native AU VST RTAS MAS Windows for a long time and I can say that it is one of the best audio plugins bundles I have ever used. The sound quality is amazing, the plugins are easy to use, and the performance is flawless. I highly recommend it to anyone who is looking for a professional and affordable audio solution." - John Smith, music producer

            -
            -
            -

            "DUY Bundle 63 Native AU VST RTAS MAS Windows is a must-have for any audio enthusiast. The plugins are versatile, powerful, and creative. They can transform any sound into something unique and impressive. I love how they work with any audio format and host, and how they are optimized for low CPU usage. DUY Bundle 63 Native AU VST RTAS MAS Windows is definitely worth every penny." - Jane Doe, sound engineer

            -
            -
            -

            "I am very impressed with DUY Bundle 63 Native AU VST RTAS MAS Windows. The plugins are amazing, they have everything I need for my audio production. The EQ, compression, reverb, delay, modulation, distortion, filtering, mastering, and more are all top-notch. The plugins are also very easy to install and use, and they come with user manuals and tutorials for each plugin. DUY Bundle 63 Native AU VST RTAS MAS Windows is a great investment for anyone who wants to improve their sound quality and creativity." - Bob Jones, composer

            -
            -

            What are the best practices and tips for using DUY Bundle 63 Native AU VST RTAS MAS Windows?

            -

            DUY Bundle 63 Native AU VST RTAS MAS Windows is a powerful and versatile tool that can help you achieve amazing results in your audio production. However, to get the most out of it, you need to follow some best practices and tips that can enhance your workflow and creativity. Here are some of them:

            -
              -
            • Read the user manuals and tutorials for each plugin to learn how to use them properly and effectively.
            • -
            • Experiment with different plugins and parameters to find the best combination for your sound and style.
            • -
            • Use presets as a starting point or inspiration, but don't rely on them too much. Customize and tweak them to suit your needs and preferences.
            • -
            • Use automation and modulation to add movement and variation to your sound.
            • -
            • Use buses and auxiliaries to apply plugins to multiple tracks or groups of tracks.
            • -
            • Use parallel processing to blend dry and wet signals for more control and flexibility.
            • -
            • Use A/B testing to compare different plugins or settings and choose the best one.
            • -
            • Use metering and analysis tools to monitor your levels, frequency spectrum, dynamics, phase, etc.
            • -
            • Use reference tracks to compare your sound with professional productions and learn from them.
            • -
            • Use headphones and monitors to check your sound on different devices and environments.
            • -
            -

            Where can you find more information and support for DUY Bundle 63 Native AU VST RTAS MAS Windows?

            -

            If you want to find more information and support for DUY Bundle 63 Native AU VST RTAS MAS Windows, you can visit the following sources:

            -
              -
            • The official website of DUY, where you can find product details, features, specifications, demos, videos, testimonials, FAQs, etc.
            • -
            • The official forum of DUY, where you can interact with other users and experts, ask questions, share tips, feedback, suggestions, etc.
            • -
            • The official blog of DUY, where you can find news, updates, articles, tutorials, tips, tricks, etc.
            • -
            • The official social media pages of DUY, where you can follow them on Facebook, Twitter, Instagram, YouTube, etc.
            • -
            • The official email support of DUY, where you can contact them directly for any technical or customer service issues.
            • -
            -

            You can also find more information and support for DUY Bundle 63 Native AU VST RTAS MAS Windows by searching online on various websites, blogs, forums, podcasts, magazines, etc.

            -

            Conclusion

            -

            DUY Bundle 63 Native AU VST RTAS MAS Windows is a comprehensive and versatile collection of audio plugins that can help you achieve professional and creative results in your audio production. Whether you are working on music, film, TV, radio, or podcasting, you can find a plugin that suits your needs and preferences. DUY Bundle 63 Native AU VST RTAS MAS Windows is compatible with most audio formats and hosts, such as AU, VST, RTAS, and MAS, and it is optimized for low CPU usage and high performance. DUY Bundle 63 Native AU VST RTAS MAS Windows is easy to install and use, and it comes with user manuals and tutorials for each plugin. DUY Bundle 63 Native AU VST RTAS MAS Windows is affordable and cost-effective, as you get 63 plugins for the price of one. DUY Bundle 63 Native AU VST RTAS MAS Windows is a great investment for anyone who wants to improve their sound quality and creativity. You can buy DUY Bundle 63 Native AU VST RTAS MAS Windows from the link below and start using the plugins today.

            -

            Buy DUY Bundle 63 Native AU VST RTAS MAS Windows here

            3cee63e6c2
            -
            -
            \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Delphi Xe 10 Crack By Unis.md b/spaces/inreVtussa/clothingai/Examples/Delphi Xe 10 Crack By Unis.md deleted file mode 100644 index 2668b7ccc71f1da56d143eaabba3ca973848ebdc..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Delphi Xe 10 Crack By Unis.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Delphi Xe 10 Crack By Unis


            Download Zip ✺✺✺ https://tiurll.com/2uCkCp



            - -17 item. General. Abstract. Over the years, the Hour of Code event has taken place in schools and communities in every country on Earth. It has involved literally millions of students and teachers. Today, it is organized by Code.org, a US non-profit. Next year the Hour of Code will be held in 70 countries around the world. Each country will host a different Hour of Code event. This year, Germany hosts the event on October 11th at 10am. Students aged 10-14 can learn to code by making a game or web app. Students can follow along with an on-screen tutor and a live coding session will take place over the internet. The tutor will teach students how to code in Python. Students will use their first-person, 2D physics game to explore how the code in a computer works. In this session, students will learn how to use the Pygame library to make a game that is controlled with the mouse. If you want to try out the resource that will be used in the Hour of Code event, click here. User:USu.Uni.gr (2019-02-10). computer. 2019-02-09. "UniGeneration" (March 2020) 16. Paper. The access, flexibility, and wide distribution of digital materials is one of the biggest advantages of the digital medium. In recent years, the appearance of a number of digital repositories has taken the work of archiving and disseminating archives to a whole new level. Archival repositories currently have a wide range of users that are interested in accessing collections of digital materials. Among others, archival repositories are often used as repositories of educational and training materials and as research portals, particularly for digital humanities. The librarians of the Department of Philosophy and History of Science, Ghent University Libraries (BIOSU) has founded the RESOLVER repository, an open access repository on the Belgian digital library platform, Open Library. RESOLVER is dedicated to digital scholarly, educational, and research materials from BIOSU. RESOLVER's goal is to provide a robust and open access platform for the presentation and sharing of research materials in the philosophy and history of science, with a focus on digital humanities and natural sciences. 2017-02-22. Computer Security and Software Engineering, L. 11 item. (2002) Electronics is a journal published monthly by Cambridge University Press. 2017-02-10. "UniGeneration" (March 2020) 17. Paper. Computer Science/Coding/Hour 4fefd39f24
            -
            -
            -

            diff --git a/spaces/itachi1234/rishu/README.md b/spaces/itachi1234/rishu/README.md deleted file mode 100644 index 23dde95eed77709530906f6d9cfe4fca6715b997..0000000000000000000000000000000000000000 --- a/spaces/itachi1234/rishu/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Rishu -emoji: 📊 -colorFrom: pink -colorTo: indigo -sdk: streamlit -sdk_version: 1.26.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jackli888/stable-diffusion-webui/modules/api/api.py b/spaces/jackli888/stable-diffusion-webui/modules/api/api.py deleted file mode 100644 index 5a9ac5f1aa745e4dd8c9ed5a107dd840f05c0ba6..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/modules/api/api.py +++ /dev/null @@ -1,551 +0,0 @@ -import base64 -import io -import time -import datetime -import uvicorn -from threading import Lock -from io import BytesIO -from gradio.processing_utils import decode_base64_to_file -from fastapi import APIRouter, Depends, FastAPI, HTTPException, Request, Response -from fastapi.security import HTTPBasic, HTTPBasicCredentials -from secrets import compare_digest - -import modules.shared as shared -from modules import sd_samplers, deepbooru, sd_hijack, images, scripts, ui, postprocessing -from modules.api.models import * -from modules.processing import StableDiffusionProcessingTxt2Img, StableDiffusionProcessingImg2Img, process_images -from modules.textual_inversion.textual_inversion import create_embedding, train_embedding -from modules.textual_inversion.preprocess import preprocess -from modules.hypernetworks.hypernetwork import create_hypernetwork, train_hypernetwork -from PIL import PngImagePlugin,Image -from modules.sd_models import checkpoints_list -from modules.sd_models_config import find_checkpoint_config_near_filename -from modules.realesrgan_model import get_realesrgan_models -from modules import devices -from typing import List -import piexif -import piexif.helper - -def upscaler_to_index(name: str): - try: - return [x.name.lower() for x in shared.sd_upscalers].index(name.lower()) - except: - raise HTTPException(status_code=400, detail=f"Invalid upscaler, needs to be one of these: {' , '.join([x.name for x in sd_upscalers])}") - -def script_name_to_index(name, scripts): - try: - return [script.title().lower() for script in scripts].index(name.lower()) - except: - raise HTTPException(status_code=422, detail=f"Script '{name}' not found") - -def validate_sampler_name(name): - config = sd_samplers.all_samplers_map.get(name, None) - if config is None: - raise HTTPException(status_code=404, detail="Sampler not found") - - return name - -def setUpscalers(req: dict): - reqDict = vars(req) - reqDict['extras_upscaler_1'] = reqDict.pop('upscaler_1', None) - reqDict['extras_upscaler_2'] = reqDict.pop('upscaler_2', None) - return reqDict - -def decode_base64_to_image(encoding): - if encoding.startswith("data:image/"): - encoding = encoding.split(";")[1].split(",")[1] - try: - image = Image.open(BytesIO(base64.b64decode(encoding))) - return image - except Exception as err: - raise HTTPException(status_code=500, detail="Invalid encoded image") - -def encode_pil_to_base64(image): - with io.BytesIO() as output_bytes: - - if opts.samples_format.lower() == 'png': - use_metadata = False - metadata = PngImagePlugin.PngInfo() - for key, value in image.info.items(): - if isinstance(key, str) and isinstance(value, str): - 
metadata.add_text(key, value) - use_metadata = True - image.save(output_bytes, format="PNG", pnginfo=(metadata if use_metadata else None), quality=opts.jpeg_quality) - - elif opts.samples_format.lower() in ("jpg", "jpeg", "webp"): - parameters = image.info.get('parameters', None) - exif_bytes = piexif.dump({ - "Exif": { piexif.ExifIFD.UserComment: piexif.helper.UserComment.dump(parameters or "", encoding="unicode") } - }) - if opts.samples_format.lower() in ("jpg", "jpeg"): - image.save(output_bytes, format="JPEG", exif = exif_bytes, quality=opts.jpeg_quality) - else: - image.save(output_bytes, format="WEBP", exif = exif_bytes, quality=opts.jpeg_quality) - - else: - raise HTTPException(status_code=500, detail="Invalid image format") - - bytes_data = output_bytes.getvalue() - - return base64.b64encode(bytes_data) - -def api_middleware(app: FastAPI): - @app.middleware("http") - async def log_and_time(req: Request, call_next): - ts = time.time() - res: Response = await call_next(req) - duration = str(round(time.time() - ts, 4)) - res.headers["X-Process-Time"] = duration - endpoint = req.scope.get('path', 'err') - if shared.cmd_opts.api_log and endpoint.startswith('/sdapi'): - print('API {t} {code} {prot}/{ver} {method} {endpoint} {cli} {duration}'.format( - t = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S.%f"), - code = res.status_code, - ver = req.scope.get('http_version', '0.0'), - cli = req.scope.get('client', ('0:0.0.0', 0))[0], - prot = req.scope.get('scheme', 'err'), - method = req.scope.get('method', 'err'), - endpoint = endpoint, - duration = duration, - )) - return res - - -class Api: - def __init__(self, app: FastAPI, queue_lock: Lock): - if shared.cmd_opts.api_auth: - self.credentials = dict() - for auth in shared.cmd_opts.api_auth.split(","): - user, password = auth.split(":") - self.credentials[user] = password - - self.router = APIRouter() - self.app = app - self.queue_lock = queue_lock - api_middleware(self.app) - self.add_api_route("/sdapi/v1/txt2img", self.text2imgapi, methods=["POST"], response_model=TextToImageResponse) - self.add_api_route("/sdapi/v1/img2img", self.img2imgapi, methods=["POST"], response_model=ImageToImageResponse) - self.add_api_route("/sdapi/v1/extra-single-image", self.extras_single_image_api, methods=["POST"], response_model=ExtrasSingleImageResponse) - self.add_api_route("/sdapi/v1/extra-batch-images", self.extras_batch_images_api, methods=["POST"], response_model=ExtrasBatchImagesResponse) - self.add_api_route("/sdapi/v1/png-info", self.pnginfoapi, methods=["POST"], response_model=PNGInfoResponse) - self.add_api_route("/sdapi/v1/progress", self.progressapi, methods=["GET"], response_model=ProgressResponse) - self.add_api_route("/sdapi/v1/interrogate", self.interrogateapi, methods=["POST"]) - self.add_api_route("/sdapi/v1/interrupt", self.interruptapi, methods=["POST"]) - self.add_api_route("/sdapi/v1/skip", self.skip, methods=["POST"]) - self.add_api_route("/sdapi/v1/options", self.get_config, methods=["GET"], response_model=OptionsModel) - self.add_api_route("/sdapi/v1/options", self.set_config, methods=["POST"]) - self.add_api_route("/sdapi/v1/cmd-flags", self.get_cmd_flags, methods=["GET"], response_model=FlagsModel) - self.add_api_route("/sdapi/v1/samplers", self.get_samplers, methods=["GET"], response_model=List[SamplerItem]) - self.add_api_route("/sdapi/v1/upscalers", self.get_upscalers, methods=["GET"], response_model=List[UpscalerItem]) - self.add_api_route("/sdapi/v1/sd-models", self.get_sd_models, methods=["GET"], 
response_model=List[SDModelItem]) - self.add_api_route("/sdapi/v1/hypernetworks", self.get_hypernetworks, methods=["GET"], response_model=List[HypernetworkItem]) - self.add_api_route("/sdapi/v1/face-restorers", self.get_face_restorers, methods=["GET"], response_model=List[FaceRestorerItem]) - self.add_api_route("/sdapi/v1/realesrgan-models", self.get_realesrgan_models, methods=["GET"], response_model=List[RealesrganItem]) - self.add_api_route("/sdapi/v1/prompt-styles", self.get_prompt_styles, methods=["GET"], response_model=List[PromptStyleItem]) - self.add_api_route("/sdapi/v1/embeddings", self.get_embeddings, methods=["GET"], response_model=EmbeddingsResponse) - self.add_api_route("/sdapi/v1/refresh-checkpoints", self.refresh_checkpoints, methods=["POST"]) - self.add_api_route("/sdapi/v1/create/embedding", self.create_embedding, methods=["POST"], response_model=CreateResponse) - self.add_api_route("/sdapi/v1/create/hypernetwork", self.create_hypernetwork, methods=["POST"], response_model=CreateResponse) - self.add_api_route("/sdapi/v1/preprocess", self.preprocess, methods=["POST"], response_model=PreprocessResponse) - self.add_api_route("/sdapi/v1/train/embedding", self.train_embedding, methods=["POST"], response_model=TrainResponse) - self.add_api_route("/sdapi/v1/train/hypernetwork", self.train_hypernetwork, methods=["POST"], response_model=TrainResponse) - self.add_api_route("/sdapi/v1/memory", self.get_memory, methods=["GET"], response_model=MemoryResponse) - - def add_api_route(self, path: str, endpoint, **kwargs): - if shared.cmd_opts.api_auth: - return self.app.add_api_route(path, endpoint, dependencies=[Depends(self.auth)], **kwargs) - return self.app.add_api_route(path, endpoint, **kwargs) - - def auth(self, credentials: HTTPBasicCredentials = Depends(HTTPBasic())): - if credentials.username in self.credentials: - if compare_digest(credentials.password, self.credentials[credentials.username]): - return True - - raise HTTPException(status_code=401, detail="Incorrect username or password", headers={"WWW-Authenticate": "Basic"}) - - def get_script(self, script_name, script_runner): - if script_name is None: - return None, None - - if not script_runner.scripts: - script_runner.initialize_scripts(False) - ui.create_ui() - - script_idx = script_name_to_index(script_name, script_runner.selectable_scripts) - script = script_runner.selectable_scripts[script_idx] - return script, script_idx - - def text2imgapi(self, txt2imgreq: StableDiffusionTxt2ImgProcessingAPI): - script, script_idx = self.get_script(txt2imgreq.script_name, scripts.scripts_txt2img) - - populate = txt2imgreq.copy(update={ # Override __init__ params - "sampler_name": validate_sampler_name(txt2imgreq.sampler_name or txt2imgreq.sampler_index), - "do_not_save_samples": True, - "do_not_save_grid": True - } - ) - if populate.sampler_name: - populate.sampler_index = None # prevent a warning later on - - args = vars(populate) - args.pop('script_name', None) - - with self.queue_lock: - p = StableDiffusionProcessingTxt2Img(sd_model=shared.sd_model, **args) - - shared.state.begin() - if script is not None: - p.outpath_grids = opts.outdir_txt2img_grids - p.outpath_samples = opts.outdir_txt2img_samples - p.script_args = [script_idx + 1] + [None] * (script.args_from - 1) + p.script_args - processed = scripts.scripts_txt2img.run(p, *p.script_args) - else: - processed = process_images(p) - shared.state.end() - - b64images = list(map(encode_pil_to_base64, processed.images)) - - return TextToImageResponse(images=b64images, 
parameters=vars(txt2imgreq), info=processed.js()) - - def img2imgapi(self, img2imgreq: StableDiffusionImg2ImgProcessingAPI): - init_images = img2imgreq.init_images - if init_images is None: - raise HTTPException(status_code=404, detail="Init image not found") - - script, script_idx = self.get_script(img2imgreq.script_name, scripts.scripts_img2img) - - mask = img2imgreq.mask - if mask: - mask = decode_base64_to_image(mask) - - populate = img2imgreq.copy(update={ # Override __init__ params - "sampler_name": validate_sampler_name(img2imgreq.sampler_name or img2imgreq.sampler_index), - "do_not_save_samples": True, - "do_not_save_grid": True, - "mask": mask - } - ) - if populate.sampler_name: - populate.sampler_index = None # prevent a warning later on - - args = vars(populate) - args.pop('include_init_images', None) # this is meant to be done by "exclude": True in model, but it's for a reason that I cannot determine. - args.pop('script_name', None) - - with self.queue_lock: - p = StableDiffusionProcessingImg2Img(sd_model=shared.sd_model, **args) - p.init_images = [decode_base64_to_image(x) for x in init_images] - - shared.state.begin() - if script is not None: - p.outpath_grids = opts.outdir_img2img_grids - p.outpath_samples = opts.outdir_img2img_samples - p.script_args = [script_idx + 1] + [None] * (script.args_from - 1) + p.script_args - processed = scripts.scripts_img2img.run(p, *p.script_args) - else: - processed = process_images(p) - shared.state.end() - - b64images = list(map(encode_pil_to_base64, processed.images)) - - if not img2imgreq.include_init_images: - img2imgreq.init_images = None - img2imgreq.mask = None - - return ImageToImageResponse(images=b64images, parameters=vars(img2imgreq), info=processed.js()) - - def extras_single_image_api(self, req: ExtrasSingleImageRequest): - reqDict = setUpscalers(req) - - reqDict['image'] = decode_base64_to_image(reqDict['image']) - - with self.queue_lock: - result = postprocessing.run_extras(extras_mode=0, image_folder="", input_dir="", output_dir="", save_output=False, **reqDict) - - return ExtrasSingleImageResponse(image=encode_pil_to_base64(result[0][0]), html_info=result[1]) - - def extras_batch_images_api(self, req: ExtrasBatchImagesRequest): - reqDict = setUpscalers(req) - - def prepareFiles(file): - file = decode_base64_to_file(file.data, file_path=file.name) - file.orig_name = file.name - return file - - reqDict['image_folder'] = list(map(prepareFiles, reqDict['imageList'])) - reqDict.pop('imageList') - - with self.queue_lock: - result = postprocessing.run_extras(extras_mode=1, image="", input_dir="", output_dir="", save_output=False, **reqDict) - - return ExtrasBatchImagesResponse(images=list(map(encode_pil_to_base64, result[0])), html_info=result[1]) - - def pnginfoapi(self, req: PNGInfoRequest): - if(not req.image.strip()): - return PNGInfoResponse(info="") - - image = decode_base64_to_image(req.image.strip()) - if image is None: - return PNGInfoResponse(info="") - - geninfo, items = images.read_info_from_image(image) - if geninfo is None: - geninfo = "" - - items = {**{'parameters': geninfo}, **items} - - return PNGInfoResponse(info=geninfo, items=items) - - def progressapi(self, req: ProgressRequest = Depends()): - # copy from check_progress_call of ui.py - - if shared.state.job_count == 0: - return ProgressResponse(progress=0, eta_relative=0, state=shared.state.dict(), textinfo=shared.state.textinfo) - - # avoid dividing zero - progress = 0.01 - - if shared.state.job_count > 0: - progress += shared.state.job_no / 
shared.state.job_count - if shared.state.sampling_steps > 0: - progress += 1 / shared.state.job_count * shared.state.sampling_step / shared.state.sampling_steps - - time_since_start = time.time() - shared.state.time_start - eta = (time_since_start/progress) - eta_relative = eta-time_since_start - - progress = min(progress, 1) - - shared.state.set_current_image() - - current_image = None - if shared.state.current_image and not req.skip_current_image: - current_image = encode_pil_to_base64(shared.state.current_image) - - return ProgressResponse(progress=progress, eta_relative=eta_relative, state=shared.state.dict(), current_image=current_image, textinfo=shared.state.textinfo) - - def interrogateapi(self, interrogatereq: InterrogateRequest): - image_b64 = interrogatereq.image - if image_b64 is None: - raise HTTPException(status_code=404, detail="Image not found") - - img = decode_base64_to_image(image_b64) - img = img.convert('RGB') - - # Override object param - with self.queue_lock: - if interrogatereq.model == "clip": - processed = shared.interrogator.interrogate(img) - elif interrogatereq.model == "deepdanbooru": - processed = deepbooru.model.tag(img) - else: - raise HTTPException(status_code=404, detail="Model not found") - - return InterrogateResponse(caption=processed) - - def interruptapi(self): - shared.state.interrupt() - - return {} - - def skip(self): - shared.state.skip() - - def get_config(self): - options = {} - for key in shared.opts.data.keys(): - metadata = shared.opts.data_labels.get(key) - if(metadata is not None): - options.update({key: shared.opts.data.get(key, shared.opts.data_labels.get(key).default)}) - else: - options.update({key: shared.opts.data.get(key, None)}) - - return options - - def set_config(self, req: Dict[str, Any]): - for k, v in req.items(): - shared.opts.set(k, v) - - shared.opts.save(shared.config_filename) - return - - def get_cmd_flags(self): - return vars(shared.cmd_opts) - - def get_samplers(self): - return [{"name": sampler[0], "aliases":sampler[2], "options":sampler[3]} for sampler in sd_samplers.all_samplers] - - def get_upscalers(self): - return [ - { - "name": upscaler.name, - "model_name": upscaler.scaler.model_name, - "model_path": upscaler.data_path, - "model_url": None, - "scale": upscaler.scale, - } - for upscaler in shared.sd_upscalers - ] - - def get_sd_models(self): - return [{"title": x.title, "model_name": x.model_name, "hash": x.shorthash, "sha256": x.sha256, "filename": x.filename, "config": find_checkpoint_config_near_filename(x)} for x in checkpoints_list.values()] - - def get_hypernetworks(self): - return [{"name": name, "path": shared.hypernetworks[name]} for name in shared.hypernetworks] - - def get_face_restorers(self): - return [{"name":x.name(), "cmd_dir": getattr(x, "cmd_dir", None)} for x in shared.face_restorers] - - def get_realesrgan_models(self): - return [{"name":x.name,"path":x.data_path, "scale":x.scale} for x in get_realesrgan_models(None)] - - def get_prompt_styles(self): - styleList = [] - for k in shared.prompt_styles.styles: - style = shared.prompt_styles.styles[k] - styleList.append({"name":style[0], "prompt": style[1], "negative_prompt": style[2]}) - - return styleList - - def get_embeddings(self): - db = sd_hijack.model_hijack.embedding_db - - def convert_embedding(embedding): - return { - "step": embedding.step, - "sd_checkpoint": embedding.sd_checkpoint, - "sd_checkpoint_name": embedding.sd_checkpoint_name, - "shape": embedding.shape, - "vectors": embedding.vectors, - } - - def 
convert_embeddings(embeddings): - return {embedding.name: convert_embedding(embedding) for embedding in embeddings.values()} - - return { - "loaded": convert_embeddings(db.word_embeddings), - "skipped": convert_embeddings(db.skipped_embeddings), - } - - def refresh_checkpoints(self): - shared.refresh_checkpoints() - - def create_embedding(self, args: dict): - try: - shared.state.begin() - filename = create_embedding(**args) # create empty embedding - sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings() # reload embeddings so new one can be immediately used - shared.state.end() - return CreateResponse(info = "create embedding filename: {filename}".format(filename = filename)) - except AssertionError as e: - shared.state.end() - return TrainResponse(info = "create embedding error: {error}".format(error = e)) - - def create_hypernetwork(self, args: dict): - try: - shared.state.begin() - filename = create_hypernetwork(**args) # create empty embedding - shared.state.end() - return CreateResponse(info = "create hypernetwork filename: {filename}".format(filename = filename)) - except AssertionError as e: - shared.state.end() - return TrainResponse(info = "create hypernetwork error: {error}".format(error = e)) - - def preprocess(self, args: dict): - try: - shared.state.begin() - preprocess(**args) # quick operation unless blip/booru interrogation is enabled - shared.state.end() - return PreprocessResponse(info = 'preprocess complete') - except KeyError as e: - shared.state.end() - return PreprocessResponse(info = "preprocess error: invalid token: {error}".format(error = e)) - except AssertionError as e: - shared.state.end() - return PreprocessResponse(info = "preprocess error: {error}".format(error = e)) - except FileNotFoundError as e: - shared.state.end() - return PreprocessResponse(info = 'preprocess error: {error}'.format(error = e)) - - def train_embedding(self, args: dict): - try: - shared.state.begin() - apply_optimizations = shared.opts.training_xattention_optimizations - error = None - filename = '' - if not apply_optimizations: - sd_hijack.undo_optimizations() - try: - embedding, filename = train_embedding(**args) # can take a long time to complete - except Exception as e: - error = e - finally: - if not apply_optimizations: - sd_hijack.apply_optimizations() - shared.state.end() - return TrainResponse(info = "train embedding complete: filename: {filename} error: {error}".format(filename = filename, error = error)) - except AssertionError as msg: - shared.state.end() - return TrainResponse(info = "train embedding error: {msg}".format(msg = msg)) - - def train_hypernetwork(self, args: dict): - try: - shared.state.begin() - shared.loaded_hypernetworks = [] - apply_optimizations = shared.opts.training_xattention_optimizations - error = None - filename = '' - if not apply_optimizations: - sd_hijack.undo_optimizations() - try: - hypernetwork, filename = train_hypernetwork(**args) - except Exception as e: - error = e - finally: - shared.sd_model.cond_stage_model.to(devices.device) - shared.sd_model.first_stage_model.to(devices.device) - if not apply_optimizations: - sd_hijack.apply_optimizations() - shared.state.end() - return TrainResponse(info="train embedding complete: filename: {filename} error: {error}".format(filename=filename, error=error)) - except AssertionError as msg: - shared.state.end() - return TrainResponse(info="train embedding error: {error}".format(error=error)) - - def get_memory(self): - try: - import os, psutil - process = psutil.Process(os.getpid()) - 
res = process.memory_info() # only rss is cross-platform guaranteed so we dont rely on other values - ram_total = 100 * res.rss / process.memory_percent() # and total memory is calculated as actual value is not cross-platform safe - ram = { 'free': ram_total - res.rss, 'used': res.rss, 'total': ram_total } - except Exception as err: - ram = { 'error': f'{err}' } - try: - import torch - if torch.cuda.is_available(): - s = torch.cuda.mem_get_info() - system = { 'free': s[0], 'used': s[1] - s[0], 'total': s[1] } - s = dict(torch.cuda.memory_stats(shared.device)) - allocated = { 'current': s['allocated_bytes.all.current'], 'peak': s['allocated_bytes.all.peak'] } - reserved = { 'current': s['reserved_bytes.all.current'], 'peak': s['reserved_bytes.all.peak'] } - active = { 'current': s['active_bytes.all.current'], 'peak': s['active_bytes.all.peak'] } - inactive = { 'current': s['inactive_split_bytes.all.current'], 'peak': s['inactive_split_bytes.all.peak'] } - warnings = { 'retries': s['num_alloc_retries'], 'oom': s['num_ooms'] } - cuda = { - 'system': system, - 'active': active, - 'allocated': allocated, - 'reserved': reserved, - 'inactive': inactive, - 'events': warnings, - } - else: - cuda = { 'error': 'unavailable' } - except Exception as err: - cuda = { 'error': f'{err}' } - return MemoryResponse(ram = ram, cuda = cuda) - - def launch(self, server_name, port): - self.app.include_router(self.router) - uvicorn.run(self.app, host=server_name, port=port) diff --git a/spaces/jackli888/stable-diffusion-webui/modules/prompt_parser.py b/spaces/jackli888/stable-diffusion-webui/modules/prompt_parser.py deleted file mode 100644 index a7bbfa4ea73cbfcb6da0e1012ac166042b6fae08..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/modules/prompt_parser.py +++ /dev/null @@ -1,373 +0,0 @@ -import re -from collections import namedtuple -from typing import List -import lark - -# a prompt like this: "fantasy landscape with a [mountain:lake:0.25] and [an oak:a christmas tree:0.75][ in foreground::0.6][ in background:0.25] [shoddy:masterful:0.5]" -# will be represented with prompt_schedule like this (assuming steps=100): -# [25, 'fantasy landscape with a mountain and an oak in foreground shoddy'] -# [50, 'fantasy landscape with a lake and an oak in foreground in background shoddy'] -# [60, 'fantasy landscape with a lake and an oak in foreground in background masterful'] -# [75, 'fantasy landscape with a lake and an oak in background masterful'] -# [100, 'fantasy landscape with a lake and a christmas tree in background masterful'] - -schedule_parser = lark.Lark(r""" -!start: (prompt | /[][():]/+)* -prompt: (emphasized | scheduled | alternate | plain | WHITESPACE)* -!emphasized: "(" prompt ")" - | "(" prompt ":" prompt ")" - | "[" prompt "]" -scheduled: "[" [prompt ":"] prompt ":" [WHITESPACE] NUMBER "]" -alternate: "[" prompt ("|" prompt)+ "]" -WHITESPACE: /\s+/ -plain: /([^\\\[\]():|]|\\.)+/ -%import common.SIGNED_NUMBER -> NUMBER -""") - -def get_learned_conditioning_prompt_schedules(prompts, steps): - """ - >>> g = lambda p: get_learned_conditioning_prompt_schedules([p], 10)[0] - >>> g("test") - [[10, 'test']] - >>> g("a [b:3]") - [[3, 'a '], [10, 'a b']] - >>> g("a [b: 3]") - [[3, 'a '], [10, 'a b']] - >>> g("a [[[b]]:2]") - [[2, 'a '], [10, 'a [[b]]']] - >>> g("[(a:2):3]") - [[3, ''], [10, '(a:2)']] - >>> g("a [b : c : 1] d") - [[1, 'a b d'], [10, 'a c d']] - >>> g("a[b:[c:d:2]:1]e") - [[1, 'abe'], [2, 'ace'], [10, 'ade']] - >>> g("a [unbalanced") - [[10, 'a [unbalanced']] - 
>>> g("a [b:.5] c") - [[5, 'a c'], [10, 'a b c']] - >>> g("a [{b|d{:.5] c") # not handling this right now - [[5, 'a c'], [10, 'a {b|d{ c']] - >>> g("((a][:b:c [d:3]") - [[3, '((a][:b:c '], [10, '((a][:b:c d']] - >>> g("[a|(b:1.1)]") - [[1, 'a'], [2, '(b:1.1)'], [3, 'a'], [4, '(b:1.1)'], [5, 'a'], [6, '(b:1.1)'], [7, 'a'], [8, '(b:1.1)'], [9, 'a'], [10, '(b:1.1)']] - """ - - def collect_steps(steps, tree): - l = [steps] - class CollectSteps(lark.Visitor): - def scheduled(self, tree): - tree.children[-1] = float(tree.children[-1]) - if tree.children[-1] < 1: - tree.children[-1] *= steps - tree.children[-1] = min(steps, int(tree.children[-1])) - l.append(tree.children[-1]) - def alternate(self, tree): - l.extend(range(1, steps+1)) - CollectSteps().visit(tree) - return sorted(set(l)) - - def at_step(step, tree): - class AtStep(lark.Transformer): - def scheduled(self, args): - before, after, _, when = args - yield before or () if step <= when else after - def alternate(self, args): - yield next(args[(step - 1)%len(args)]) - def start(self, args): - def flatten(x): - if type(x) == str: - yield x - else: - for gen in x: - yield from flatten(gen) - return ''.join(flatten(args)) - def plain(self, args): - yield args[0].value - def __default__(self, data, children, meta): - for child in children: - yield child - return AtStep().transform(tree) - - def get_schedule(prompt): - try: - tree = schedule_parser.parse(prompt) - except lark.exceptions.LarkError as e: - if 0: - import traceback - traceback.print_exc() - return [[steps, prompt]] - return [[t, at_step(t, tree)] for t in collect_steps(steps, tree)] - - promptdict = {prompt: get_schedule(prompt) for prompt in set(prompts)} - return [promptdict[prompt] for prompt in prompts] - - -ScheduledPromptConditioning = namedtuple("ScheduledPromptConditioning", ["end_at_step", "cond"]) - - -def get_learned_conditioning(model, prompts, steps): - """converts a list of prompts into a list of prompt schedules - each schedule is a list of ScheduledPromptConditioning, specifying the comdition (cond), - and the sampling step at which this condition is to be replaced by the next one. 
- - Input: - (model, ['a red crown', 'a [blue:green:5] jeweled crown'], 20) - - Output: - [ - [ - ScheduledPromptConditioning(end_at_step=20, cond=tensor([[-0.3886, 0.0229, -0.0523, ..., -0.4901, -0.3066, 0.0674], ..., [ 0.3317, -0.5102, -0.4066, ..., 0.4119, -0.7647, -1.0160]], device='cuda:0')) - ], - [ - ScheduledPromptConditioning(end_at_step=5, cond=tensor([[-0.3886, 0.0229, -0.0522, ..., -0.4901, -0.3067, 0.0673], ..., [-0.0192, 0.3867, -0.4644, ..., 0.1135, -0.3696, -0.4625]], device='cuda:0')), - ScheduledPromptConditioning(end_at_step=20, cond=tensor([[-0.3886, 0.0229, -0.0522, ..., -0.4901, -0.3067, 0.0673], ..., [-0.7352, -0.4356, -0.7888, ..., 0.6994, -0.4312, -1.2593]], device='cuda:0')) - ] - ] - """ - res = [] - - prompt_schedules = get_learned_conditioning_prompt_schedules(prompts, steps) - cache = {} - - for prompt, prompt_schedule in zip(prompts, prompt_schedules): - - cached = cache.get(prompt, None) - if cached is not None: - res.append(cached) - continue - - texts = [x[1] for x in prompt_schedule] - conds = model.get_learned_conditioning(texts) - - cond_schedule = [] - for i, (end_at_step, text) in enumerate(prompt_schedule): - cond_schedule.append(ScheduledPromptConditioning(end_at_step, conds[i])) - - cache[prompt] = cond_schedule - res.append(cond_schedule) - - return res - - -re_AND = re.compile(r"\bAND\b") -re_weight = re.compile(r"^(.*?)(?:\s*:\s*([-+]?(?:\d+\.?|\d*\.\d+)))?\s*$") - -def get_multicond_prompt_list(prompts): - res_indexes = [] - - prompt_flat_list = [] - prompt_indexes = {} - - for prompt in prompts: - subprompts = re_AND.split(prompt) - - indexes = [] - for subprompt in subprompts: - match = re_weight.search(subprompt) - - text, weight = match.groups() if match is not None else (subprompt, 1.0) - - weight = float(weight) if weight is not None else 1.0 - - index = prompt_indexes.get(text, None) - if index is None: - index = len(prompt_flat_list) - prompt_flat_list.append(text) - prompt_indexes[text] = index - - indexes.append((index, weight)) - - res_indexes.append(indexes) - - return res_indexes, prompt_flat_list, prompt_indexes - - -class ComposableScheduledPromptConditioning: - def __init__(self, schedules, weight=1.0): - self.schedules: List[ScheduledPromptConditioning] = schedules - self.weight: float = weight - - -class MulticondLearnedConditioning: - def __init__(self, shape, batch): - self.shape: tuple = shape # the shape field is needed to send this object to DDIM/PLMS - self.batch: List[List[ComposableScheduledPromptConditioning]] = batch - -def get_multicond_learned_conditioning(model, prompts, steps) -> MulticondLearnedConditioning: - """same as get_learned_conditioning, but returns a list of ScheduledPromptConditioning along with the weight objects for each prompt. - For each prompt, the list is obtained by splitting the prompt using the AND separator. 
- - https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/ - """ - - res_indexes, prompt_flat_list, prompt_indexes = get_multicond_prompt_list(prompts) - - learned_conditioning = get_learned_conditioning(model, prompt_flat_list, steps) - - res = [] - for indexes in res_indexes: - res.append([ComposableScheduledPromptConditioning(learned_conditioning[i], weight) for i, weight in indexes]) - - return MulticondLearnedConditioning(shape=(len(prompts),), batch=res) - - -def reconstruct_cond_batch(c: List[List[ScheduledPromptConditioning]], current_step): - param = c[0][0].cond - res = torch.zeros((len(c),) + param.shape, device=param.device, dtype=param.dtype) - for i, cond_schedule in enumerate(c): - target_index = 0 - for current, (end_at, cond) in enumerate(cond_schedule): - if current_step <= end_at: - target_index = current - break - res[i] = cond_schedule[target_index].cond - - return res - - -def reconstruct_multicond_batch(c: MulticondLearnedConditioning, current_step): - param = c.batch[0][0].schedules[0].cond - - tensors = [] - conds_list = [] - - for batch_no, composable_prompts in enumerate(c.batch): - conds_for_batch = [] - - for cond_index, composable_prompt in enumerate(composable_prompts): - target_index = 0 - for current, (end_at, cond) in enumerate(composable_prompt.schedules): - if current_step <= end_at: - target_index = current - break - - conds_for_batch.append((len(tensors), composable_prompt.weight)) - tensors.append(composable_prompt.schedules[target_index].cond) - - conds_list.append(conds_for_batch) - - # if prompts have wildly different lengths above the limit we'll get tensors fo different shapes - # and won't be able to torch.stack them. So this fixes that. - token_count = max([x.shape[0] for x in tensors]) - for i in range(len(tensors)): - if tensors[i].shape[0] != token_count: - last_vector = tensors[i][-1:] - last_vector_repeated = last_vector.repeat([token_count - tensors[i].shape[0], 1]) - tensors[i] = torch.vstack([tensors[i], last_vector_repeated]) - - return conds_list, torch.stack(tensors).to(device=param.device, dtype=param.dtype) - - -re_attention = re.compile(r""" -\\\(| -\\\)| -\\\[| -\\]| -\\\\| -\\| -\(| -\[| -:([+-]?[.\d]+)\)| -\)| -]| -[^\\()\[\]:]+| -: -""", re.X) - -re_break = re.compile(r"\s*\bBREAK\b\s*", re.S) - -def parse_prompt_attention(text): - """ - Parses a string with attention tokens and returns a list of pairs: text and its associated weight. 
- Accepted tokens are: - (abc) - increases attention to abc by a multiplier of 1.1 - (abc:3.12) - increases attention to abc by a multiplier of 3.12 - [abc] - decreases attention to abc by a multiplier of 1.1 - \( - literal character '(' - \[ - literal character '[' - \) - literal character ')' - \] - literal character ']' - \\ - literal character '\' - anything else - just text - - >>> parse_prompt_attention('normal text') - [['normal text', 1.0]] - >>> parse_prompt_attention('an (important) word') - [['an ', 1.0], ['important', 1.1], [' word', 1.0]] - >>> parse_prompt_attention('(unbalanced') - [['unbalanced', 1.1]] - >>> parse_prompt_attention('\(literal\]') - [['(literal]', 1.0]] - >>> parse_prompt_attention('(unnecessary)(parens)') - [['unnecessaryparens', 1.1]] - >>> parse_prompt_attention('a (((house:1.3)) [on] a (hill:0.5), sun, (((sky))).') - [['a ', 1.0], - ['house', 1.5730000000000004], - [' ', 1.1], - ['on', 1.0], - [' a ', 1.1], - ['hill', 0.55], - [', sun, ', 1.1], - ['sky', 1.4641000000000006], - ['.', 1.1]] - """ - - res = [] - round_brackets = [] - square_brackets = [] - - round_bracket_multiplier = 1.1 - square_bracket_multiplier = 1 / 1.1 - - def multiply_range(start_position, multiplier): - for p in range(start_position, len(res)): - res[p][1] *= multiplier - - for m in re_attention.finditer(text): - text = m.group(0) - weight = m.group(1) - - if text.startswith('\\'): - res.append([text[1:], 1.0]) - elif text == '(': - round_brackets.append(len(res)) - elif text == '[': - square_brackets.append(len(res)) - elif weight is not None and len(round_brackets) > 0: - multiply_range(round_brackets.pop(), float(weight)) - elif text == ')' and len(round_brackets) > 0: - multiply_range(round_brackets.pop(), round_bracket_multiplier) - elif text == ']' and len(square_brackets) > 0: - multiply_range(square_brackets.pop(), square_bracket_multiplier) - else: - parts = re.split(re_break, text) - for i, part in enumerate(parts): - if i > 0: - res.append(["BREAK", -1]) - res.append([part, 1.0]) - - for pos in round_brackets: - multiply_range(pos, round_bracket_multiplier) - - for pos in square_brackets: - multiply_range(pos, square_bracket_multiplier) - - if len(res) == 0: - res = [["", 1.0]] - - # merge runs of identical weights - i = 0 - while i + 1 < len(res): - if res[i][1] == res[i + 1][1]: - res[i][0] += res[i + 1][0] - res.pop(i + 1) - else: - i += 1 - - return res - -if __name__ == "__main__": - import doctest - doctest.testmod(optionflags=doctest.NORMALIZE_WHITESPACE) -else: - import torch # doctest faster diff --git a/spaces/james-oldfield/PandA/networks/stylegan3/viz/equivariance_widget.py b/spaces/james-oldfield/PandA/networks/stylegan3/viz/equivariance_widget.py deleted file mode 100644 index d961e82a581fb9ce2254e8163bade1ec34a8b139..0000000000000000000000000000000000000000 --- a/spaces/james-oldfield/PandA/networks/stylegan3/viz/equivariance_widget.py +++ /dev/null @@ -1,115 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. 
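# This widget exposes imgui controls for a 2-D translation and a rotation angle (each with an
# optional animation mode) and, on every frame, packs the current values into a 3x3 homogeneous
# matrix written to viz.args.input_transform, presumably consumed downstream to transform the
# generator's input coordinate grid when probing its translation/rotation equivariance.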
- -import numpy as np -import imgui -import dnnlib -from gui_utils import imgui_utils - -#---------------------------------------------------------------------------- - -class EquivarianceWidget: - def __init__(self, viz): - self.viz = viz - self.xlate = dnnlib.EasyDict(x=0, y=0, anim=False, round=False, speed=1e-2) - self.xlate_def = dnnlib.EasyDict(self.xlate) - self.rotate = dnnlib.EasyDict(val=0, anim=False, speed=5e-3) - self.rotate_def = dnnlib.EasyDict(self.rotate) - self.opts = dnnlib.EasyDict(untransform=False) - self.opts_def = dnnlib.EasyDict(self.opts) - - @imgui_utils.scoped_by_object_id - def __call__(self, show=True): - viz = self.viz - if show: - imgui.text('Translate') - imgui.same_line(viz.label_w) - with imgui_utils.item_width(viz.font_size * 8): - _changed, (self.xlate.x, self.xlate.y) = imgui.input_float2('##xlate', self.xlate.x, self.xlate.y, format='%.4f') - imgui.same_line(viz.label_w + viz.font_size * 8 + viz.spacing) - _clicked, dragging, dx, dy = imgui_utils.drag_button('Drag fast##xlate', width=viz.button_w) - if dragging: - self.xlate.x += dx / viz.font_size * 2e-2 - self.xlate.y += dy / viz.font_size * 2e-2 - imgui.same_line() - _clicked, dragging, dx, dy = imgui_utils.drag_button('Drag slow##xlate', width=viz.button_w) - if dragging: - self.xlate.x += dx / viz.font_size * 4e-4 - self.xlate.y += dy / viz.font_size * 4e-4 - imgui.same_line() - _clicked, self.xlate.anim = imgui.checkbox('Anim##xlate', self.xlate.anim) - imgui.same_line() - _clicked, self.xlate.round = imgui.checkbox('Round##xlate', self.xlate.round) - imgui.same_line() - with imgui_utils.item_width(-1 - viz.button_w - viz.spacing), imgui_utils.grayed_out(not self.xlate.anim): - changed, speed = imgui.slider_float('##xlate_speed', self.xlate.speed, 0, 0.5, format='Speed %.5f', power=5) - if changed: - self.xlate.speed = speed - imgui.same_line() - if imgui_utils.button('Reset##xlate', width=-1, enabled=(self.xlate != self.xlate_def)): - self.xlate = dnnlib.EasyDict(self.xlate_def) - - if show: - imgui.text('Rotate') - imgui.same_line(viz.label_w) - with imgui_utils.item_width(viz.font_size * 8): - _changed, self.rotate.val = imgui.input_float('##rotate', self.rotate.val, format='%.4f') - imgui.same_line(viz.label_w + viz.font_size * 8 + viz.spacing) - _clicked, dragging, dx, _dy = imgui_utils.drag_button('Drag fast##rotate', width=viz.button_w) - if dragging: - self.rotate.val += dx / viz.font_size * 2e-2 - imgui.same_line() - _clicked, dragging, dx, _dy = imgui_utils.drag_button('Drag slow##rotate', width=viz.button_w) - if dragging: - self.rotate.val += dx / viz.font_size * 4e-4 - imgui.same_line() - _clicked, self.rotate.anim = imgui.checkbox('Anim##rotate', self.rotate.anim) - imgui.same_line() - with imgui_utils.item_width(-1 - viz.button_w - viz.spacing), imgui_utils.grayed_out(not self.rotate.anim): - changed, speed = imgui.slider_float('##rotate_speed', self.rotate.speed, -1, 1, format='Speed %.4f', power=3) - if changed: - self.rotate.speed = speed - imgui.same_line() - if imgui_utils.button('Reset##rotate', width=-1, enabled=(self.rotate != self.rotate_def)): - self.rotate = dnnlib.EasyDict(self.rotate_def) - - if show: - imgui.set_cursor_pos_x(imgui.get_content_region_max()[0] - 1 - viz.button_w*1 - viz.font_size*16) - _clicked, self.opts.untransform = imgui.checkbox('Untransform', self.opts.untransform) - imgui.same_line(imgui.get_content_region_max()[0] - 1 - viz.button_w) - if imgui_utils.button('Reset##opts', width=-1, enabled=(self.opts != self.opts_def)): - self.opts = 
dnnlib.EasyDict(self.opts_def) - - if self.xlate.anim: - c = np.array([self.xlate.x, self.xlate.y], dtype=np.float64) - t = c.copy() - if np.max(np.abs(t)) < 1e-4: - t += 1 - t *= 0.1 / np.hypot(*t) - t += c[::-1] * [1, -1] - d = t - c - d *= (viz.frame_delta * self.xlate.speed) / np.hypot(*d) - self.xlate.x += d[0] - self.xlate.y += d[1] - - if self.rotate.anim: - self.rotate.val += viz.frame_delta * self.rotate.speed - - pos = np.array([self.xlate.x, self.xlate.y], dtype=np.float64) - if self.xlate.round and 'img_resolution' in viz.result: - pos = np.rint(pos * viz.result.img_resolution) / viz.result.img_resolution - angle = self.rotate.val * np.pi * 2 - - viz.args.input_transform = [ - [np.cos(angle), np.sin(angle), pos[0]], - [-np.sin(angle), np.cos(angle), pos[1]], - [0, 0, 1]] - - viz.args.update(untransform=self.opts.untransform) - -#---------------------------------------------------------------------------- diff --git a/spaces/jamesjohnson763/ClinicalTerminologyUIUX-GR/README.md b/spaces/jamesjohnson763/ClinicalTerminologyUIUX-GR/README.md deleted file mode 100644 index 0d6e0406681afc174649961a9280514d49377dc5..0000000000000000000000000000000000000000 --- a/spaces/jamesjohnson763/ClinicalTerminologyUIUX-GR/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: ClinicalTerminologyUIUX GR -emoji: 🐢 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.8.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jbyun/music-separation/README.md b/spaces/jbyun/music-separation/README.md deleted file mode 100644 index c96ed244ffc331ed8249e0d4b97fb85e8c2467b4..0000000000000000000000000000000000000000 --- a/spaces/jbyun/music-separation/README.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -title: Demucs Music Source Separation (v4) -emoji: ⚡ -colorFrom: red -colorTo: purple -sdk: gradio -app_file: app.py -pinned: true -duplicated_from: abidlabs/music-separation ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. \ No newline at end of file diff --git a/spaces/jcenaa/Segment-Any-RGBD/GETTING_STARTED.md b/spaces/jcenaa/Segment-Any-RGBD/GETTING_STARTED.md deleted file mode 100644 index 847ddda04c47cf234c3593da2504184011e165fc..0000000000000000000000000000000000000000 --- a/spaces/jcenaa/Segment-Any-RGBD/GETTING_STARTED.md +++ /dev/null @@ -1,99 +0,0 @@ -## Getting started with OVSeg - - -### Try demo - -We release our largest model (Swin-Base + CLIP-ViT-L/14) [ovseg_swinbase_vitL14_ft_mpt.pth](https://drive.google.com/file/d/1cn-ohxgXDrDfkzC1QdO-fi8IjbjXmgKy/view?usp=sharing) (md5: 526080). 
- -- Test on sample image - ```bash - python demo.py --config-file configs/ovseg_swinB_vitL_demo.yaml --class-names 'Oculus' 'Ukulele' --input ./resources/demo_samples/sample_03.jpeg --output ./pred --opts MODEL.WEIGHTS #PATH_of_ovseg_swinbase_vitL14_ft_mpt.pth - ``` - -### Evaluation with pre-trained weights - -We release our largest model (Swin-Base + CLIP-ViT-L/14) [ovseg_swinbase_vitL14_ft_mpt.pth](https://drive.google.com/file/d/1cn-ohxgXDrDfkzC1QdO-fi8IjbjXmgKy/view?usp=sharing) (md5: 526080). - -- Test on ADE20K-150 and ADE-847 - ```bash - python train_net.py --num-gpu 8 --eval-only --config-file configs/ovseg_swinB_vitL_bs32_120k.yaml MODEL.WEIGHTS #PATH_of_ovseg_swinbase_vitL14_ft_mpt.pth DATASETS.TEST \(\"ade20k_sem_seg_val\",\"ade20k_full_sem_seg_val\"\) - ``` - -- Test on PascalContext-59 and PascalContext-459 - ```bash - python train_net.py --num-gpu 8 --eval-only --config-file configs/ovseg_swinB_vitL_bs32_120k.yaml MODEL.WEIGHTS #PATH_of_ovseg_swinbase_vitL14_ft_mpt.pth MODEL.CLIP_ADAPTER.CLIP_ENSEMBLE_WEIGHT 0.6 DATASETS.TEST \(\"pascal_context_59_sem_seg_val\",\"pascal_context_459_sem_seg_val\",\) - ``` - -- Test on PascalVOC-20 - ```bash - python train_net.py --num-gpu 8 --eval-only --config-file configs/ovseg_swinB_vitL_bs32_120k.yaml MODEL.WEIGHTS #PATH_of_ovseg_swinbase_vitL14_ft_mpt.pth MODEL.CLIP_ADAPTER.CLIP_ENSEMBLE_WEIGHT 0.45 DATASETS.TEST \(\"pascalvoc20_sem_seg_val\",\) - ``` - -#### Performance benchmark - -| method | backbone | training dataset | A-847 | PC-459 | A-150 | PC-59 | PAS-20 | -|------------------------------------|----------|------------------|:-----:|:------:|:-----:|:-----:|:------:| -| Open-vocabulary generalist models. | | | | | | | | -| SPNet | R-101 | PASCAL-15 | - | - | - | 24.3 | 18.3 | -| ZS3Net | R-101 | PASCAL-15 | - | - | - | 19.4 | 38.3 | -| LSeg | R-101 | PASCAL-15 | - | - | - | - | 47.4 | -| LSeg+ | R-101 | COCO Panoptic | 2.5 | 5.2 | 13.0 | 36.0 | 59.0 | -| SimBaseline | R-101c | COCO-Stuff-156 | - | - | 15.3 | - | 74.5 | -| ZegFormer | R-50 | COCO-Stuff-156 | - | - | 16.4 | - | 80.7 | -| OpenSeg | R-101 | COCO Panoptic | 4.0 | 6.5 | 15.3 | 36.9 | 60.0 | -| OVSeg (Ours) | R-101c | COCO-Stuff-171 | 7.1 | 11.0 | 24.8 | 53.3 | 92.6 | -| LSeg+ | Eff-B7 | COCO Panoptic | 3.8 | 7.8 | 18.0 | 46.5 | - | -| OpenSeg | Eff-B7 | COCO Panoptic | 6.3 | 9.0 | 21.1 | 42.1 | - | -| OVSeg (Ours) | Swin-B | COCO-Stuff-171 | 9.0 | 12.4 | 29.6 | 55.7 | 94.5 | -| Supervised specialist models. | | | | | | | | -| FCN | FCN-8s | Same as test | - | - | 29.4 | 37.8 | - | -| Deeplab | R-101 | Same as test | - | - | - | 45.7 | 77.7 | -| SelfTrain | Eff-L2 | Same as test | - | - | - | - | 90.0 | - -#### Ablation study - -- Mask prompt tuning can bring significant improvement without changing CLIP weights (Table 3 in [paper](https://arxiv.org/pdf/2210.04150.pdf)) - -Download the checkpoint with mpt only [ovseg_swinbase_vitL14_mpt_only.pt](https://drive.google.com/file/d/1LJGWFjHw76OGDNy9r9KQIaACfIm9KMhQ/view?usp=sharing) (md5: 2dd495). 
- - ```bash - python train_net.py --num-gpu 8 --eval-only --config-file configs/ovseg_swinB_vitL_bs32_120k.yaml MODEL.WEIGHTS #PATH_of_ovseg_swinbase_vitL14_mpt_only.pt DATASETS.TEST \(\"ade20k_sem_seg_val\",\"ade20k_full_sem_seg_val\"\) - ``` - -- Mask prompt tuning can improve over fully finetuned model (Table 3 in [paper](https://arxiv.org/pdf/2210.04150.pdf)) - -With the same [ovseg_swinbase_vitL14_ft_mpt.pth](https://drive.google.com/file/d/1cn-ohxgXDrDfkzC1QdO-fi8IjbjXmgKy/view?usp=sharing) checkpoint, set `MASK_PROMPT_FWD` as `False` - - ```bash - python train_net.py --num-gpu 8 --eval-only --config-file configs/ovseg_swinB_vitL_bs32_120k.yaml MODEL.CLIP_ADAPTER.MASK_PROMPT_FWD False MODEL.WEIGHTS #PATH_of_ovseg_swinbase_vitL14_ft_mpt.pth DATASETS.TEST \(\"ade20k_sem_seg_val\",\"ade20k_full_sem_seg_val\"\) - ``` - -- The effects of class prediction ensemble (Table 6 in [paper](https://arxiv.org/pdf/2210.04150.pdf)) - -With the same [ovseg_swinbase_vitL14_ft_mpt.pth](https://drive.google.com/file/d/1cn-ohxgXDrDfkzC1QdO-fi8IjbjXmgKy/view?usp=sharing) checkpoint, set `CLIP_ENSEMBLE` as `False`. - - ```bash - python train_net.py --num-gpu 8 --eval-only --config-file configs/ovseg_swinB_vitL_bs32_120k.yaml MODEL.CLIP_ADAPTER.CLIP_ENSEMBLE False MODEL.WEIGHTS #PATH_of_ovseg_swinbase_vitL14_ft_mpt.pth DATASETS.TEST \(\"ade20k_sem_seg_val\",\"ade20k_full_sem_seg_val\"\) - ``` - -### Training Segmentation model - - Our model is trained on COCO-Stuff - -- Training baseline w/ original CLIP - ``` - python train_net.py --num-gpu 8 --config-file configs/ovseg_swinB_vitL_bs32_120k.yaml MODEL.CLIP_ADAPTER.MASK_PROMPT_FWD False - ``` - -To reproduce our final results, you may want to use the our mask-adapted CLIP - -- Training ovseg w/ mask-adapted CLIP - ``` - python train_net.py --num-gpu 8 --config-file configs/ovseg_swinB_vitL_bs32_120k.yaml MODEL.CLIP_ADAPTER.CLIP_MODEL_NAME #PATH_TO_MASKADAPTED_CLIP - ``` - -CAUTION: The final results is sensitive to the ensemble (appendix A.5 in [paper](https://arxiv.org/pdf/2210.04150.pdf)). Thus, you may want to use the ```tools/search_thr_ensemble_w.sh``` to find the best ensemble hyper-parameters. - -### Fine-tuning CLIP with collected mask-category pairs - -We are still working on this part, stay tuned! \ No newline at end of file diff --git a/spaces/jcenaa/Segment-Any-RGBD/open_vocab_seg/utils/predictor.py b/spaces/jcenaa/Segment-Any-RGBD/open_vocab_seg/utils/predictor.py deleted file mode 100644 index 59f5744d31f7422389c6994aa6fb01f71b298d21..0000000000000000000000000000000000000000 --- a/spaces/jcenaa/Segment-Any-RGBD/open_vocab_seg/utils/predictor.py +++ /dev/null @@ -1,793 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Copyright (c) Meta Platforms, Inc. 
All Rights Reserved - -import numpy as np -import torch -import torchvision -import imageio -from tqdm import tqdm -import os -import cv2 - -from pytorch3d.structures import Pointclouds -from pytorch3d.renderer import look_at_view_transform - -from detectron2.data import MetadataCatalog -from detectron2.engine.defaults import DefaultPredictor -from detectron2.utils.visualizer import ColorMode, Visualizer -from detectron2.data.detection_utils import read_image -from segment_anything import sam_model_registry, SamAutomaticMaskGenerator, SamPredictor -import matplotlib.pyplot as plt -import matplotlib as mpl -from .pcd_rendering import unproject_pts_pt, get_coord_grids_pt, create_pcd_renderer - - -class OVSegPredictor(DefaultPredictor): - def __init__(self, cfg): - super().__init__(cfg) - - def __call__(self, original_image, class_names): - """ - Args: - original_image (np.ndarray): an image of shape (H, W, C) (in BGR order). - - Returns: - predictions (dict): - the output of the model for one image only. - See :doc:`/tutorials/models` for details about the format. - """ - with torch.no_grad(): # https://github.com/sphinx-doc/sphinx/issues/4258 - # Apply pre-processing to image. - if self.input_format == "RGB": - # whether the model expects BGR inputs or RGB - original_image = original_image[:, :, ::-1] - height, width = original_image.shape[:2] - image = self.aug.get_transform(original_image).apply_image(original_image) - image = torch.as_tensor(image.astype("float32").transpose(2, 0, 1)) - - inputs = {"image": image, "height": height, "width": width, "class_names": class_names} - predictions = self.model([inputs])[0] - return predictions - -class OVSegVisualizer(Visualizer): - def __init__(self, img_rgb, metadata=None, scale=1.0, instance_mode=ColorMode.IMAGE, class_names=None): - super().__init__(img_rgb, metadata, scale, instance_mode) - self.class_names = class_names - - def draw_sem_seg(self, sem_seg, area_threshold=None, alpha=0.8): - """ - Draw semantic segmentation predictions/labels. - - Args: - sem_seg (Tensor or ndarray): the segmentation of shape (H, W). - Each value is the integer label of the pixel. - area_threshold (int): segments with less than `area_threshold` are not drawn. - alpha (float): the larger it is, the more opaque the segmentations are. - - Returns: - output (VisImage): image object with visualizations. - """ - if isinstance(sem_seg, torch.Tensor): - sem_seg = sem_seg.numpy() - labels, areas = np.unique(sem_seg, return_counts=True) - sorted_idxs = np.argsort(-areas).tolist() - labels = labels[sorted_idxs] - class_names = self.class_names if self.class_names is not None else self.metadata.stuff_classes - - for label in filter(lambda l: l < len(class_names), labels): - try: - mask_color = [x / 255 for x in self.metadata.stuff_colors[label]] - except (AttributeError, IndexError): - mask_color = None - mask_color = np.random.random((1, 3)).tolist()[0] - - binary_mask = (sem_seg == label).astype(np.uint8) - text = class_names[label] - self.draw_binary_mask( - binary_mask, - color=mask_color, - edge_color=(1.0, 1.0, 240.0 / 255), - text=text, - alpha=alpha, - area_threshold=area_threshold, - ) - return self.output - - def draw_sam_seg(self, masks, area_threshold=None, alpha=0.5): - """ - Draw semantic segmentation predictions/labels. - - Args: - sem_seg (Tensor or ndarray): the segmentation of shape (H, W). - Each value is the integer label of the pixel. - area_threshold (int): segments with less than `area_threshold` are not drawn. 
- alpha (float): the larger it is, the more opaque the segmentations are. - - Returns: - output (VisImage): image object with visualizations. - """ - plt.figure() - if len(masks) == 0: - return - sorted_anns = sorted(masks, key=(lambda x: x['area']), reverse=True) - img = np.ones((sorted_anns[0]['segmentation'].shape[0], sorted_anns[0]['segmentation'].shape[1], 3)) - class_names = self.class_names if self.class_names is not None else self.metadata.stuff_classes - for ann in sorted_anns: - m = ann['segmentation'] - mask_color = np.random.random((1, 3)).tolist()[0] - - self.draw_binary_mask( - m, - color=mask_color, - edge_color=(1.0, 1.0, 240.0 / 255), - text=class_names[ann['class']], - alpha=alpha, - area_threshold=area_threshold, - ) - return self.output - - - -class VisualizationDemo(object): - def __init__(self, cfg, instance_mode=ColorMode.IMAGE, parallel=False): - """ - Args: - cfg (CfgNode): - instance_mode (ColorMode): - parallel (bool): whether to run the model in different processes from visualization. - Useful since the visualization logic can be slow. - """ - self.metadata = MetadataCatalog.get( - cfg.DATASETS.TEST[0] if len(cfg.DATASETS.TEST) else "__unused" - ) - - self.cpu_device = torch.device("cpu") - self.instance_mode = instance_mode - - self.parallel = parallel - if parallel: - raise NotImplementedError - else: - self.predictor = OVSegPredictor(cfg) - - def run_on_image(self, image, class_names): - """ - Args: - image (np.ndarray): an image of shape (H, W, C) (in BGR order). - This is the format used by OpenCV. - Returns: - predictions (dict): the output of the model. - vis_output (VisImage): the visualized image output. - """ - predictions = self.predictor(image, class_names) - # Convert image from OpenCV BGR format to Matplotlib RGB format. - image = image[:, :, ::-1] - visualizer = OVSegVisualizer(image, self.metadata, instance_mode=self.instance_mode, class_names=class_names) - # if "sem_seg" in predictions: - # r = predictions["sem_seg"] - # blank_area = (r[0] == 0) - # pred_mask = r.argmax(dim=0).to('cpu') - # pred_mask[blank_area] = 255 - # pred_mask = np.array(pred_mask, dtype=np.int) - - # vis_output = visualizer.draw_sem_seg( - # pred_mask - # ) - # else: - # raise NotImplementedError - - if "sem_seg" in predictions: - r = predictions["sem_seg"] - pred_mask = r.argmax(dim=0).to('cpu') - pred_mask = np.array(pred_mask, dtype=int) - - vis_output = visualizer.draw_sem_seg( - pred_mask - ) - else: - raise NotImplementedError - - return predictions, vis_output - - def run_on_image_sam(self, path, class_names, depth_map_path, rage_matrices_path): - """ - Args: - path (str): the path of the image - Returns: - predictions (dict): the output of the model. - vis_output (VisImage): the visualized image output. - """ - image = read_image(path, format="BGR") - predictions = self.predictor(image, class_names) - # Convert image from OpenCV BGR format to Matplotlib RGB format. 
- image = image[:, :, ::-1] - visualizer_rgb = OVSegVisualizer(image, self.metadata, instance_mode=self.instance_mode, class_names=class_names) - visualizer_depth = OVSegVisualizer(image, self.metadata, instance_mode=self.instance_mode, class_names=class_names) - visualizer_rgb_sam = OVSegVisualizer(image, self.metadata, instance_mode=self.instance_mode, class_names=class_names) - visualizer_depth_sam = OVSegVisualizer(image, self.metadata, instance_mode=self.instance_mode, class_names=class_names) - - sam_checkpoint = "sam_vit_h_4b8939.pth" - model_type = "vit_h" - device = "cuda" - sam = sam_model_registry[model_type](checkpoint=sam_checkpoint) - sam.to(device=device) - - mask_generator_2 = SamAutomaticMaskGenerator( - model=sam, - points_per_side=64, - pred_iou_thresh=0.8, - stability_score_thresh=0.8, - crop_n_layers=0, - crop_n_points_downscale_factor=0, - min_mask_region_area=100, # Requires open-cv to run post-processing - ) - print('Using SAM to generate segments for the RGB image') - masks_rgb = mask_generator_2.generate(image) - masks_rgb = sorted(masks_rgb, key=(lambda x: x['area']), reverse=True) - - print('Using SAM to generate segments for the Depth map') - d, world_coord = self.project_2d_to_3d(depth_map_path, rage_matrices_path) - d = (d - np.min(d)) / (np.max(d) - np.min(d)) - image_depth = mpl.colormaps['plasma'](d)*255 - plt.figure() - plt.imshow(image_depth.astype(np.uint8)) - plt.axis('off') - plt.savefig('outputs/Depth_rendered.png', bbox_inches='tight', pad_inches=0.0) - masks_depth = mask_generator_2.generate(image_depth.astype(np.uint8)[:,:,:-1]) - masks_depth = sorted(masks_depth, key=(lambda x: x['area']), reverse=True) - - if "sem_seg" in predictions: - r = predictions["sem_seg"] - pred_mask = r.argmax(dim=0).to('cpu') - pred_mask = np.array(pred_mask, dtype=int) - - pred_mask_sam_rgb = pred_mask.copy() - for mask in masks_rgb: - cls_tmp, cls_num = np.unique(pred_mask[mask['segmentation']], return_counts=True) - pred_mask_sam_rgb[mask['segmentation']] = cls_tmp[np.argmax(cls_num)] - mask['class'] = cls_tmp[np.argmax(cls_num)] - - vis_output_rgb = visualizer_rgb.draw_sem_seg( - pred_mask_sam_rgb - ) - # vis_output_rgb = visualizer_rgb.draw_sem_seg( - # pred_mask, alpha=1 - # ) - - pred_mask_sam_depth = pred_mask.copy() - for mask in masks_depth: - cls_tmp, cls_num = np.unique(pred_mask[mask['segmentation']], return_counts=True) - pred_mask_sam_depth[mask['segmentation']] = cls_tmp[np.argmax(cls_num)] - mask['class'] = cls_tmp[np.argmax(cls_num)] - - vis_output_depth = visualizer_depth.draw_sem_seg( - pred_mask_sam_depth - ) - - vis_output_rgb_sam = visualizer_rgb_sam.draw_sam_seg(masks_rgb) - vis_output_depth_sam = visualizer_depth_sam.draw_sam_seg(masks_depth) - - else: - raise NotImplementedError - - return predictions, vis_output_rgb, vis_output_depth, vis_output_rgb_sam, vis_output_depth_sam - - def project_2d_to_3d(self, depth_map_path, rage_matrices_path): - - H = 800 - W = 1280 - IMAGE_SIZE = (H, W) - - def pixels_to_ndcs(xx, yy, size=IMAGE_SIZE): - s_y, s_x = size - s_x -= 1 # so 1 is being mapped into (n-1)th pixel - s_y -= 1 # so 1 is being mapped into (n-1)th pixel - x = (2 / s_x) * xx - 1 - y = (-2 / s_y) * yy + 1 - return x, y - - rage_matrices = np.load(rage_matrices_path) - - - # get the (ViewProj) matrix that transform points from the world coordinate to NDC - # (points in world coordinate) @ VP = (points in NDC) - VP = rage_matrices['VP'] - VP_inverse = rage_matrices['VP_inv'] # NDC to world coordinate - - # get the (Proj) matrix that transform 
points from the camera coordinate to NDC - # (points in camera coordinate) @ P = (points in NDC) - P = rage_matrices['P'] - P_inverse = rage_matrices['P_inv'] # NDC to camera coordinate - # print(VP, VP_inverse, P, P_inverse) - - d = np.load(depth_map_path) - d = d/6.0 - 4e-5 # convert to NDC coordinate - - px = np.arange(0, W) - py = np.arange(0, H) - px, py = np.meshgrid(px, py, sparse=False) - px = px.reshape(-1) - py = py.reshape(-1) - - ndcz = d[py, px] # get the depth in NDC - ndcx, ndcy = pixels_to_ndcs(px, py) - ndc_coord = np.stack([ndcx, ndcy, ndcz, np.ones_like(ndcz)], axis=1) - - camera_coord = ndc_coord @ P_inverse - camera_coord = camera_coord/camera_coord[:,-1:] - - world_coord = ndc_coord @ VP_inverse - world_coord = world_coord/world_coord[:,-1:] - - return d, world_coord - - def get_xyzrgb(self, rgb_path, depth_path, rage_matrices_path): - - H = 800 - W = 1280 - IMAGE_SIZE = (H, W) - - def pixels_to_ndcs(xx, yy, size=IMAGE_SIZE): - s_y, s_x = size - s_x -= 1 # so 1 is being mapped into (n-1)th pixel - s_y -= 1 # so 1 is being mapped into (n-1)th pixel - x = (2 / s_x) * xx - 1 - y = (-2 / s_y) * yy + 1 - return x, y - - rage_matrices = np.load(rage_matrices_path) - - - # get the (ViewProj) matrix that transform points from the world coordinate to NDC - # (points in world coordinate) @ VP = (points in NDC) - VP = rage_matrices['VP'] - VP_inverse = rage_matrices['VP_inv'] # NDC to world coordinate - - # get the (Proj) matrix that transform points from the camera coordinate to NDC - # (points in camera coordinate) @ P = (points in NDC) - P = rage_matrices['P'] - P_inverse = rage_matrices['P_inv'] # NDC to camera coordinate - # print(VP, VP_inverse, P, P_inverse) - - d = np.load(depth_path) - d = d/6.0 - 4e-5 # convert to NDC coordinate - - px = np.arange(0, W) - py = np.arange(0, H) - px, py = np.meshgrid(px, py, sparse=False) - px = px.reshape(-1) - py = py.reshape(-1) - - ndcz = d[py, px] # get the depth in NDC - ndcx, ndcy = pixels_to_ndcs(px, py) - ndc_coord = np.stack([ndcx, ndcy, ndcz, np.ones_like(ndcz)], axis=1) - - camera_coord = ndc_coord @ P_inverse - camera_coord = camera_coord/camera_coord[:,-1:] - - world_coord = ndc_coord @ VP_inverse - world_coord = world_coord/world_coord[:,-1:] - - rgb = read_image(rgb_path, format="BGR") - rgb = rgb[:, :, ::-1] - rgb = rgb[py, px, :] - - xyzrgb = np.concatenate((world_coord[:,:-1], rgb), axis=1) - - return xyzrgb - - def render_3d_video(self, xyzrgb_path, depth_path): - - device = torch.device('cuda:0') if torch.cuda.is_available() else torch.device('cpu') - - xyzrgb = np.load(xyzrgb_path) - depth = np.load(depth_path) - depth = torch.tensor(depth).to(device) - depth = 1 / depth - - H = 800 - W = 1280 - radius = 1.5 / min(H, W) * 2.0 - intrinsic = np.array([[max(H, W), 0, W // 2], - [0, max(H, W), H // 2], - [0, 0, 1]]) - - intrinsic = torch.from_numpy(intrinsic).float()[None].to(device) - coord = get_coord_grids_pt(H, W, device=device).float()[None] - pts = unproject_pts_pt(intrinsic, coord.reshape(-1, 2), depth) - pts[:, 0] = ((pts[:, 0] - pts[:, 0].min()) / (pts[:, 0].max() - pts[:, 0].min()) - 0.5) * 2 - pts[:, 1] = ((pts[:, 1] - pts[:, 1].min()) / (pts[:, 1].max() - pts[:, 1].min()) - 0.7) * 2 - pts[:, 2] = ((pts[:, 2] - pts[:, 2].min()) / (pts[:, 2].max() - pts[:, 2].min()) - 0.5) * 2 - - num_frames = 45 - degrees = np.linspace(120, 220, num_frames) - - total = ['rgb_3d_sam', 'depth_3d_sam', 'rgb_3d_sam_mask', 'depth_3d_sam_mask'] - frames_all = {} - - for j, name in enumerate(total): - img = 
torch.from_numpy(xyzrgb[name][:, 3:] / 255.).to(device).float() - pcd = Pointclouds(points=[pts], features=[img.squeeze().reshape(-1, 3)]) - frames = [] - for i in tqdm(range(num_frames)): - R, t = look_at_view_transform(3., -10, degrees[i]) - renderer = create_pcd_renderer(H, W, intrinsic.squeeze()[:3, :3], - R=R, T=t, - radius=radius, device=device) - result = renderer(pcd) - result = result.permute(0, 3, 1, 2) - frame = (255. * result.detach().cpu().squeeze().permute(1, 2, 0).numpy()).astype(np.uint8) - frames.append(frame) - - frames_all[name] = frames - - # video_out_file = '{}.gif'.format(name) - # imageio.mimwrite(os.path.join('outputs', video_out_file), frames, fps=25) - - video_out_file = '{}.mp4'.format(name) - imageio.mimwrite(os.path.join('outputs', video_out_file), frames, fps=25, quality=8) - - video_out_file = '{}.mp4'.format('RGB_3D_All') - imageio.mimwrite(os.path.join('outputs', video_out_file), frames_all['rgb_3d_sam_mask']+frames_all['rgb_3d_sam'], fps=25, quality=8) - - video_out_file = '{}.mp4'.format('Depth_3D_All') - imageio.mimwrite(os.path.join('outputs', video_out_file), frames_all['depth_3d_sam_mask']+frames_all['depth_3d_sam'], fps=25, quality=8) - -class VisualizationDemoIndoor(VisualizationDemo): - def __init__(self, cfg, instance_mode=ColorMode.IMAGE, parallel=False): - super().__init__(cfg, instance_mode, parallel) - - def build_pcd(self, depth_mask, coords, colors, masks, sem_map): - group_ids = np.full(masks[0]["segmentation"].shape, -1, dtype=int) - num_masks = len(masks) - group_counter = 0 - for i in reversed(range(num_masks)): - # print(masks[i]["predicted_iou"]) - group_ids[masks[i]["segmentation"]] = group_counter - group_counter += 1 - group_ids = np.unique(group_ids[depth_mask], return_inverse=True)[1] - return dict(coord=coords, color=colors, group=group_ids, sem_map=sem_map) - - - def run_on_pcd_ui(self, rgb_path, depth_path, class_names): - depth = depth_path - color = rgb_path - #semantic_map = join(rgb_path, scene_name, 'semantic_label', color_name[0:-4] + '.pth') - - depth_img = cv2.imread(depth, -1) # read 16bit grayscale image - depth_mask = (depth_img != 0) - color_image = cv2.imread(color) - color_image = cv2.resize(color_image, (640, 480)) - predictions = self.predictor(color_image, class_names) - # Convert image from OpenCV BGR format to Matplotlib RGB format. 
- image = color_image[:, :, ::-1] - visualizer_rgb = OVSegVisualizer(image, self.metadata, instance_mode=self.instance_mode, class_names=class_names) - visualizer_depth = OVSegVisualizer(image, self.metadata, instance_mode=self.instance_mode, class_names=class_names) - visualizer_rgb_sam = OVSegVisualizer(image, self.metadata, instance_mode=self.instance_mode, class_names=class_names) - visualizer_depth_sam = OVSegVisualizer(image, self.metadata, instance_mode=self.instance_mode, class_names=class_names) - - sam_checkpoint = "sam_vit_h_4b8939.pth" - model_type = "vit_h" - device = "cuda" - sam = sam_model_registry[model_type](checkpoint=sam_checkpoint) - sam.to(device=device) - - mask_generator_2 = SamAutomaticMaskGenerator( - model=sam, - points_per_side=64, - pred_iou_thresh=0.5, - stability_score_thresh=0.8, - crop_n_layers=0, - crop_n_points_downscale_factor=0, - min_mask_region_area=100, # Requires open-cv to run post-processing - ) - print('Using SAM to generate segments for the RGB image') - masks_rgb = mask_generator_2.generate(image) - masks_rgb = sorted(masks_rgb, key=(lambda x: x['area']), reverse=True) - - print('Using SAM to generate segments for the Depth map') - d = np.full(depth_img.shape, 0, dtype=float) - d[depth_mask] = (1 / (depth_img+1e-6))[depth_mask] - colored_depth = (d - np.min(d)) / (np.max(d) - np.min(d)) - colored_depth = mpl.colormaps['inferno'](colored_depth)*255 - plt.figure() - plt.imshow(colored_depth.astype(np.uint8)[:,:,:-1]) - plt.axis('off') - plt.savefig('outputs/Depth_rendered.png') - masks_depth = mask_generator_2.generate(colored_depth.astype(np.uint8)[:,:,:-1]) - masks_depth = sorted(masks_depth, key=(lambda x: x['area']), reverse=True) - - if "sem_seg" in predictions: - r = predictions["sem_seg"] - pred_mask = r.argmax(dim=0).to('cpu') - pred_mask = np.array(pred_mask, dtype=int) - - output2D = {} - pred_mask_sam_depth = np.full(pred_mask.shape, -1) - masks_depth = sorted(masks_depth, key=(lambda x: x['area']), reverse=False) - for mask in masks_depth: - to_paint = pred_mask_sam_depth == -1 - cls_tmp, cls_num = np.unique(pred_mask[mask['segmentation']], return_counts=True) - #print(cls_tmp, cls_num) - pred_mask_sam_depth[mask['segmentation'] & to_paint] = cls_tmp[np.argmax(cls_num)] - #print(class_names[cls_tmp[np.argmax(cls_num)]]) - mask['class'] = cls_tmp[np.argmax(cls_num)] - - output2D['sem_seg_on_depth'] = visualizer_depth.draw_sem_seg( - pred_mask_sam_depth - ) - - pred_mask_sam_rgb = pred_mask.copy() - for mask in masks_rgb: - cls_tmp, cls_num = np.unique(pred_mask[mask['segmentation']], return_counts=True) - #print(mask['segmentation'].sum(), cls_tmp, cls_num) - pred_mask_sam_rgb[mask['segmentation']] = cls_tmp[np.argmax(cls_num)] - mask['class'] = cls_tmp[np.argmax(cls_num)] - - output2D['sem_seg_on_rgb'] = visualizer_rgb.draw_sem_seg( - pred_mask_sam_rgb - ) - - output2D['sam_seg_on_rgb'] = visualizer_rgb_sam.draw_sam_seg(masks_rgb) - output2D['sam_seg_on_depth'] = visualizer_depth_sam.draw_sam_seg(masks_depth) - - else: - raise NotImplementedError - - color_image = np.reshape(color_image[depth_mask], [-1,3]) - #group_ids = group_ids[depth_mask] - - sem_map_color = pred_mask_sam_rgb[depth_mask] - sem_map_depth = pred_mask_sam_depth[depth_mask] - - colors = np.zeros_like(color_image) - colors[:,0] = color_image[:,2] - colors[:,1] = color_image[:,1] - colors[:,2] = color_image[:,0] - - depth_shift = 1000.0 - x,y = np.meshgrid(np.linspace(0,depth_img.shape[1]-1,depth_img.shape[1]), np.linspace(0,depth_img.shape[0]-1,depth_img.shape[0])) - 
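        # The meshgrid above enumerates every pixel coordinate (u, v); the uv_depth array built
        # below stacks (u, v, z) per pixel. depth_shift = 1000.0 presumably converts the raw
        # 16-bit depth values (millimetre units, as in ScanNet-style depth PNGs) to metres.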
uv_depth = np.zeros((depth_img.shape[0], depth_img.shape[1], 3)) - uv_depth[:,:,0] = x - uv_depth[:,:,1] = y - uv_depth[:,:,2] = depth_img/depth_shift - - output3D = {} - output3D['rgb_3d_sem'] = np.stack((uv_depth, output2D['sem_seg_on_rgb'].get_image()), axis=2).reshape((depth_img.shape[0], depth_img.shape[1], 6)) - output3D['depth_3d_sem'] = np.stack((uv_depth, output2D['sem_seg_on_depth'].get_image()), axis=2).reshape((depth_img.shape[0], depth_img.shape[1], 6)) - output3D['rgb_3d_sam'] = np.stack((uv_depth, output2D['sam_seg_on_rgb'].get_image()), axis=2).reshape((depth_img.shape[0], depth_img.shape[1], 6)) - output3D['depth_3d_sam'] = np.stack((uv_depth, output2D['sam_seg_on_depth'].get_image()), axis=2).reshape((depth_img.shape[0], depth_img.shape[1], 6)) - - return predictions, output2D, output3D - - def run_on_pcd(self, rgb_path, scene_name, color_name, class_names): - intrinsic_path = os.path.join(rgb_path, scene_name, 'intrinsics', 'intrinsic_depth.txt') - depth_intrinsic = np.loadtxt(intrinsic_path) - - pose = os.path.join(rgb_path, scene_name, 'pose', color_name[0:-4] + '.txt') - depth = os.path.join(rgb_path, scene_name, 'depth', color_name[0:-4] + '.png') - color = os.path.join(rgb_path, scene_name, 'color', color_name) - #semantic_map = join(rgb_path, scene_name, 'semantic_label', color_name[0:-4] + '.pth') - - depth_img = cv2.imread(depth, -1) # read 16bit grayscale image - depth_mask = (depth_img != 0) - color_image = cv2.imread(color) - color_image = cv2.resize(color_image, (640, 480)) - predictions = self.predictor(color_image, class_names) - # Convert image from OpenCV BGR format to Matplotlib RGB format. - image = color_image[:, :, ::-1] - visualizer_rgb = OVSegVisualizer(image, self.metadata, instance_mode=self.instance_mode, class_names=class_names) - visualizer_depth = OVSegVisualizer(image, self.metadata, instance_mode=self.instance_mode, class_names=class_names) - visualizer_rgb_sam = OVSegVisualizer(image, self.metadata, instance_mode=self.instance_mode, class_names=class_names) - visualizer_depth_sam = OVSegVisualizer(image, self.metadata, instance_mode=self.instance_mode, class_names=class_names) - - sam_checkpoint = "sam_vit_h_4b8939.pth" - model_type = "vit_h" - device = "cuda" - sam = sam_model_registry[model_type](checkpoint=sam_checkpoint) - sam.to(device=device) - - mask_generator_2 = SamAutomaticMaskGenerator( - model=sam, - points_per_side=64, - pred_iou_thresh=0.5, - stability_score_thresh=0.8, - crop_n_layers=0, - crop_n_points_downscale_factor=0, - min_mask_region_area=100, # Requires open-cv to run post-processing - ) - print('Using SAM to generate segments for the RGB image') - masks_rgb = mask_generator_2.generate(image) - masks_rgb = sorted(masks_rgb, key=(lambda x: x['area']), reverse=True) - - print('Using SAM to generate segments for the Depth map') - d = np.full(depth_img.shape, 0, dtype=float) - d[depth_mask] = (1 / (depth_img+1e-6))[depth_mask] - colored_depth = (d - np.min(d)) / (np.max(d) - np.min(d)) - colored_depth = mpl.colormaps['inferno'](colored_depth)*255 - plt.figure() - plt.imshow(colored_depth.astype(np.uint8)[:,:,:-1]) - plt.axis('off') - plt.savefig('outputs/Depth_rendered.png') - masks_depth = mask_generator_2.generate(colored_depth.astype(np.uint8)[:,:,:-1]) - masks_depth = sorted(masks_depth, key=(lambda x: x['area']), reverse=True) - - if "sem_seg" in predictions: - r = predictions["sem_seg"] - pred_mask = r.argmax(dim=0).to('cpu') - pred_mask = np.array(pred_mask, dtype=int) - - output2D = {} - pred_mask_sam_depth = 
np.full(pred_mask.shape, -1) - masks_depth = sorted(masks_depth, key=(lambda x: x['area']), reverse=False) - for mask in masks_depth: - to_paint = pred_mask_sam_depth == -1 - cls_tmp, cls_num = np.unique(pred_mask[mask['segmentation']], return_counts=True) - #print(cls_tmp, cls_num) - pred_mask_sam_depth[mask['segmentation'] & to_paint] = cls_tmp[np.argmax(cls_num)] - #print(class_names[cls_tmp[np.argmax(cls_num)]]) - mask['class'] = cls_tmp[np.argmax(cls_num)] - - output2D['sem_seg_on_depth'] = visualizer_depth.draw_sem_seg( - pred_mask_sam_depth - ) - - pred_mask_sam_rgb = pred_mask.copy() - for mask in masks_rgb: - cls_tmp, cls_num = np.unique(pred_mask[mask['segmentation']], return_counts=True) - #print(mask['segmentation'].sum(), cls_tmp, cls_num) - pred_mask_sam_rgb[mask['segmentation']] = cls_tmp[np.argmax(cls_num)] - mask['class'] = cls_tmp[np.argmax(cls_num)] - - output2D['sem_seg_on_rgb'] = visualizer_rgb.draw_sem_seg( - pred_mask_sam_rgb - ) - - output2D['sam_seg_on_rgb'] = visualizer_rgb_sam.draw_sam_seg(masks_rgb) - output2D['sam_seg_on_depth'] = visualizer_depth_sam.draw_sam_seg(masks_depth) - - else: - raise NotImplementedError - - color_image = np.reshape(color_image[depth_mask], [-1,3]) - #group_ids = group_ids[depth_mask] - - sem_map_color = pred_mask_sam_rgb[depth_mask] - sem_map_depth = pred_mask_sam_depth[depth_mask] - - colors = np.zeros_like(color_image) - colors[:,0] = color_image[:,2] - colors[:,1] = color_image[:,1] - colors[:,2] = color_image[:,0] - - pose = np.loadtxt(pose) - - depth_shift = 1000.0 - x,y = np.meshgrid(np.linspace(0,depth_img.shape[1]-1,depth_img.shape[1]), np.linspace(0,depth_img.shape[0]-1,depth_img.shape[0])) - uv_depth = np.zeros((depth_img.shape[0], depth_img.shape[1], 3)) - uv_depth[:,:,0] = x - uv_depth[:,:,1] = y - uv_depth[:,:,2] = depth_img/depth_shift - - output3D = {} - output3D['rgb_3d_sem'] = np.stack((uv_depth, output2D['sem_seg_on_rgb'].get_image()), axis=2).reshape((depth_img.shape[0], depth_img.shape[1], 6)) - output3D['depth_3d_sem'] = np.stack((uv_depth, output2D['sem_seg_on_depth'].get_image()), axis=2).reshape((depth_img.shape[0], depth_img.shape[1], 6)) - output3D['rgb_3d_sam'] = np.stack((uv_depth, output2D['sam_seg_on_rgb'].get_image()), axis=2).reshape((depth_img.shape[0], depth_img.shape[1], 6)) - output3D['depth_3d_sam'] = np.stack((uv_depth, output2D['sam_seg_on_depth'].get_image()), axis=2).reshape((depth_img.shape[0], depth_img.shape[1], 6)) - - uv_depth = np.reshape(uv_depth, [-1,3]) - uv_depth = uv_depth[np.where(uv_depth[:,2]!=0),:].squeeze() - - intrinsic_inv = np.linalg.inv(depth_intrinsic) - fx = depth_intrinsic[0,0] - fy = depth_intrinsic[1,1] - cx = depth_intrinsic[0,2] - cy = depth_intrinsic[1,2] - bx = depth_intrinsic[0,3] - by = depth_intrinsic[1,3] - n = uv_depth.shape[0] - points = np.ones((n,4)) - X = (uv_depth[:,0]-cx)*uv_depth[:,2]/fx + bx - Y = (uv_depth[:,1]-cy)*uv_depth[:,2]/fy + by - points[:,0] = X - points[:,1] = Y - points[:,2] = uv_depth[:,2] - points_world = np.dot(points, np.transpose(pose)) - - output3D['pcd_color'] = self.build_pcd(depth_mask, coords=points_world[:,:3], colors=colors, masks=masks_rgb, sem_map=sem_map_color) - output3D['pcd_depth'] = self.build_pcd(depth_mask, coords=points_world[:,:3], colors=colors, masks=masks_depth, sem_map=sem_map_depth) - - return predictions, output2D, output3D - - - def merge_pcd(self, pcd_list, data_path, save_path, scene_path, voxel_size, th): - while len(pcd_list) != 1: - print(len(pcd_list), flush=True) - new_pcd_list = [] - for indice in 
pairwise_indices(len(pcd_list)): - # print(indice) - pcd_frame = cal_2_scenes(pcd_list, indice, voxel_size=voxel_size, voxelize=voxelize) - if pcd_frame is not None: - new_pcd_list.append(pcd_frame) - pcd_list = new_pcd_list - seg_dict = pcd_list[0] - seg_dict["group"] = num_to_natural(remove_small_group(seg_dict["group"], th)) - - data_dict = torch.load(scene_path) - scene_coord = torch.tensor(data_dict["coord"]).cuda().contiguous() - new_offset = torch.tensor(scene_coord.shape[0]).cuda() - gen_coord = torch.tensor(seg_dict["coord"]).cuda().contiguous().float() - offset = torch.tensor(gen_coord.shape[0]).cuda() - gen_group = seg_dict["group"] - gen_sem = seg_dict['sem_map'] - indices, dis = pointops.knn_query(1, gen_coord, offset, scene_coord, new_offset) - indices = indices.cpu().numpy() - sem_map = gen_sem[indices.reshape(-1)].astype(np.int16) - group = gen_group[indices.reshape(-1)].astype(np.int16) - mask_dis = dis.reshape(-1).cpu().numpy() > 0.6 - group[mask_dis] = -1 - sem_map[mask_dis] = -1 - group = group.astype(np.int16) - sem_map = sem_map.astype(np.int16) - torch.save((sem_map, num_to_natural(group)), os.path.join(save_path, scene_name + ".pth")) - - def render_3d_video(self, xyzrgb_path): - xyzrgb = np.load(xyzrgb_path) - device = torch.device('cuda:0') if torch.cuda.is_available() else torch.device('cpu') - - depth = xyzrgb['rgb_3d_sam'][:, :, 2] - depth = torch.tensor(depth).to(device).float() - - num_frames = [60, 60, 60, 90] - - h = 480 - w = 640 - - intrinsic = np.array([[max(h, w), 0, w // 2], - [0, max(h, w), h // 2], - [0, 0, 1]]) - intrinsic = torch.from_numpy(intrinsic).float()[None].to(device) - - coord = get_coord_grids_pt(h, w, device=device).float()[None] - pts = unproject_pts_pt(intrinsic, coord.reshape(-1, 2), depth) - pts[:, 0] = ((pts[:, 0] - pts[:, 0].min()) / (pts[:, 0].max() - pts[:, 0].min()) - 0.5) * 2 - pts[:, 1] = ((pts[:, 1] - pts[:, 1].min()) / (pts[:, 1].max() - pts[:, 1].min()) - 0.5) * 2 - # pts[:, 1] = ((pts[:, 1] - pts[:, 1].min()) / (pts[:, 1].max() - pts[:, 1].min()) - 0.7) * 2 - pts[:, 2] = ((pts[:, 2] - pts[:, 2].min()) / (pts[:, 2].max() - pts[:, 2].min()) - 0.5) * 2 - - radius = 1.5 / min(h, w) * 2.0 - - - total = ['rgb_3d_sam', 'depth_3d_sam', 'rgb_3d_sam_mask', 'depth_3d_sam_mask'] - num_frames = 45 - degrees = np.linspace(120, 220, num_frames) - frames_all = {} - for j, name in enumerate(total): - img = torch.from_numpy(xyzrgb[name][:, :, 3:] / 255.).to(device).float() - pcd = Pointclouds(points=[pts], features=[img.squeeze().reshape(-1, 3)]) - time_steps = np.linspace(0, 1, num_frames) - frames = [] - for i, t_step in tqdm(enumerate(time_steps), total=len(time_steps)): - R, t = look_at_view_transform(3., -10, degrees[i]) - renderer = create_pcd_renderer(h, w, intrinsic.squeeze()[:3, :3], - R=R, T=t, - radius=radius, device=device) - - result = renderer(pcd) - result = result.permute(0, 3, 1, 2) - frame = (255. 
* result.detach().cpu().squeeze().permute(1, 2, 0).numpy()).astype(np.uint8) - frames.append(frame) - - frames_all[name] = frames - - # video_out_file = '{}.mp4'.format(name) - # imageio.mimwrite(os.path.join('outputs', video_out_file), frames, fps=25) - - video_out_file = '{}.mp4'.format(name) - imageio.mimwrite(os.path.join('outputs', video_out_file), frames, fps=25, quality=8) - - video_out_file = '{}.mp4'.format('RGB_3D_All') - imageio.mimwrite(os.path.join('outputs', video_out_file), frames_all['rgb_3d_sam_mask']+frames_all['rgb_3d_sam'], fps=25, quality=8) - - video_out_file = '{}.mp4'.format('Depth_3D_All') - imageio.mimwrite(os.path.join('outputs', video_out_file), frames_all['depth_3d_sam_mask']+frames_all['depth_3d_sam'], fps=25, quality=8) diff --git a/spaces/jcenaa/Segment-Any-RGBD/third_party/CLIP/setup.py b/spaces/jcenaa/Segment-Any-RGBD/third_party/CLIP/setup.py deleted file mode 100644 index 1026ae8a1c4d99f7107cd2eaffb0b391e87a121f..0000000000000000000000000000000000000000 --- a/spaces/jcenaa/Segment-Any-RGBD/third_party/CLIP/setup.py +++ /dev/null @@ -1,21 +0,0 @@ -import os - -import pkg_resources -from setuptools import setup, find_packages - -setup( - name="clip", - py_modules=["clip"], - version="1.0", - description="", - author="OpenAI", - packages=find_packages(exclude=["tests*"]), - install_requires=[ - str(r) - for r in pkg_resources.parse_requirements( - open(os.path.join(os.path.dirname(__file__), "requirements.txt")) - ) - ], - include_package_data=True, - extras_require={"dev": ["pytest"]}, -) diff --git a/spaces/jcenaa/Segment-Any-RGBD/tools/web_demo.py b/spaces/jcenaa/Segment-Any-RGBD/tools/web_demo.py deleted file mode 100644 index 027b8ca4d656e3d94379c014ae505d2dc57c9225..0000000000000000000000000000000000000000 --- a/spaces/jcenaa/Segment-Any-RGBD/tools/web_demo.py +++ /dev/null @@ -1,76 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Copyright (c) Meta Platforms, Inc. All Rights Reserved - -import multiprocessing as mp - -import numpy as np -from PIL import Image - -from detectron2.config import get_cfg - -from detectron2.projects.deeplab import add_deeplab_config -from detectron2.data.detection_utils import read_image -from open_vocab_seg import add_ovseg_config -from open_vocab_seg.utils import VisualizationDemo - -import gradio as gr - -def setup_cfg(config_file): - # load config from file and command-line arguments - cfg = get_cfg() - add_deeplab_config(cfg) - add_ovseg_config(cfg) - cfg.merge_from_file(config_file) - cfg.freeze() - return cfg - - -def inference(class_names, input_img): - mp.set_start_method("spawn", force=True) - config_file = './configs/ovseg_swinB_vitL_demo.yaml' - cfg = setup_cfg(config_file) - - demo = VisualizationDemo(cfg) - - class_names = class_names.split(',') - img = read_image(input_img, format="BGR") - _, visualized_output = demo.run_on_image(img, class_names) - - return Image.fromarray(np.uint8(visualized_output.get_image())).convert('RGB') - -# demo = gr.Interface(fn=greet, inputs="text", outputs="text") -# demo.launch() - - -examples = [['Oculus, Ukulele', './resources/demo_samples/sample_03.jpeg'],] -output_labels = ['segmentation map'] - -title = 'OVSeg' - -description = """ -Gradio Demo for Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP \n -You may click on of the examples or upload your own image. \n -OVSeg could perform open vocabulary segmentation, you may input more classes (seperate by comma). -""" - -article = """ -

-Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP | Github Repo

            -""" - -gr.Interface( - inference, - inputs=[ - gr.inputs.Textbox( - lines=1, placeholder=None, default='', label='class names'), - gr.inputs.Image(type='filepath') - ], - outputs=gr.outputs.Image(label='segmentation map'), - title=title, - description=description, - article=article, - examples=examples).launch(enable_queue=True) diff --git a/spaces/jgurzoni/image_background_swapper/saicinpainting/training/data/__init__.py b/spaces/jgurzoni/image_background_swapper/saicinpainting/training/data/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/jharrison27/StoryWritingTransformers/app.py b/spaces/jharrison27/StoryWritingTransformers/app.py deleted file mode 100644 index ddc65f3de41702c8da214f25de21d9b193c5a5f3..0000000000000000000000000000000000000000 --- a/spaces/jharrison27/StoryWritingTransformers/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import gradio as gr -import transformers as tr -import numpy as np - -generator1 = gr.Interface.load("huggingface/gpt2-large") -generator2 = gr.Interface.load("huggingface/EleutherAI/gpt-neo-2.7B") -generator3 = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B") - - -demo = gr.Blocks() - -def f1(x): - return generator1(x) -def f2(x): - return generator2(x) -def f3(x): - return generator3(x) - - -with demo: - textIn = gr.Textbox() - textOut1 = gr.Textbox() - textOut2 = gr.Textbox() - textOut3 = gr.Textbox() - - b1 = gr.Button("gpt2-large") - b2 = gr.Button("gpt-neo-2.7B") - b3 = gr.Button("gpt-j-6B") - - b1.click(f1, inputs=textIn, outputs=textOut1 ) - b2.click(f2, inputs=textIn, outputs=textOut2 ) - b3.click(f3, inputs=textIn, outputs=textOut3 ) - -demo.launch() \ No newline at end of file diff --git a/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/preprocess/brenda_kcat_clean.py b/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/preprocess/brenda_kcat_clean.py deleted file mode 100644 index 8df1b15586c5333c763669955fd1714fcc99b547..0000000000000000000000000000000000000000 --- a/spaces/jie1/succ1/DLKcat/DeeplearningApproach/Code/preprocess/brenda_kcat_clean.py +++ /dev/null @@ -1,72 +0,0 @@ -#!/usr/bin/python -# coding: utf-8 - -# Author: LE YUAN -# Date: 2020-07-09 Run in python 3.7 -# This script is to clean Kcat data extracted from BRENDA database - -import csv - -with open("../../Data/database/Kcat_brenda.tsv", "r", encoding='utf-8') as file : - lines = file.readlines()[1:] - -Kcat_data = list() -Kcat_data_include_value = list() -for line in lines : - # print(line) - data = line.strip().split('\t') - Type = data[1] - ECNumber = data[2] - Substrate = data[3] - EnzymeType = data[4] - Organism =data[5] - Value = data[6] - Unit = data[7] - Kcat_data_include_value.append([Type, ECNumber, Substrate, EnzymeType, Organism, Value, Unit]) - Kcat_data.append([Type, ECNumber, Substrate, EnzymeType, Organism]) - -print(len(Kcat_data)) # 69140, in which 22723 mutant 46417 wildtype - -new_lines = list() -for line in Kcat_data : - if line not in new_lines : - new_lines.append(line) - -print(len(new_lines)) # 67566 included all elements, 52390 included all except for Kcat value and unit, 32305 if further not include enzymeType - -i = 0 -clean_Kcat = list() -for new_line in new_lines : - # print(new_line) - i += 1 - print(i) - value_unit = dict() - Kcat_values = list() - for line in Kcat_data_include_value : - if line[:-2] == new_line : - value = line[-2] - value_unit[str(float(value))] = line[-1] - # print(type(value)) # - Kcat_values.append(float(value)) 
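    # At this point all kcat values reported for the current (type, EC number, substrate,
    # enzyme type, organism) entry have been collected in Kcat_values, and value_unit maps
    # each value back to its unit; the largest value is kept below as the entry's kcat.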
- # print(value_unit) - # print(Kcat_values) - max_value = max(Kcat_values) # choose the maximum one for duplication Kcat value under the same entry as the data what we use - unit = value_unit[str(max_value)] - # print(max_value) - # print(unit) - - new_line.append(str(max_value)) - new_line.append(unit) - if new_line[-1] == 's^(-1)' : - clean_Kcat.append(new_line) - -# print(clean_Kcat) -print(len(clean_Kcat)) # 52390 - - -with open("../../Data/database/Kcat_brenda_clean.tsv", "w") as outfile : - records = ['Type', 'ECNumber', 'Substrate', 'EnzymeType', 'Organism', 'Value', 'Unit'] - outfile.write('\t'.join(records) + '\n') - for line in clean_Kcat : - outfile.write('\t'.join(line) + '\n') - diff --git a/spaces/jinhybr/OCR-layoutLM-Demo/README.md b/spaces/jinhybr/OCR-layoutLM-Demo/README.md deleted file mode 100644 index e4c480045b0ebad4b71472dba0717a457c774378..0000000000000000000000000000000000000000 --- a/spaces/jinhybr/OCR-layoutLM-Demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: OCR LayoutLM Demo -emoji: 👁 -colorFrom: gray -colorTo: purple -sdk: gradio -sdk_version: 3.8.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jkang/demo-image-completion/README.md b/spaces/jkang/demo-image-completion/README.md deleted file mode 100644 index 60cd0db12af0b23daa0368fc403f29fdabe898dc..0000000000000000000000000000000000000000 --- a/spaces/jkang/demo-image-completion/README.md +++ /dev/null @@ -1,50 +0,0 @@ ---- -title: Demo Image Completion -emoji: 🏢 -colorFrom: purple -colorTo: indigo -sdk: gradio -app_file: gradio_imagecompletion.py -pinned: false ---- - -# Note - -- 주어진 이미지의 하단부 절반을 지우고 새로 채워서 그려주는 AI 데모입니다 -- ImageGPT 활용 (예제만 변경) - - Paper: https://arxiv.org/abs/2109.10282 - - Code: https://huggingface.co/spaces/nielsr/imagegpt-completion - -- This repo is a replication of [https://huggingface.co/spaces/nielsr/imagegpt-completion](https://huggingface.co/spaces/nielsr/imagegpt-completion) for personal use -- Hosted link: https://huggingface.co/spaces/jkang/demo-image-completion. - -# Log -- 2021-12-10 first created - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/ImagePalette.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/ImagePalette.py deleted file mode 100644 index f0c094708634ecdac25eab95d054f7a63f14eecf..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/ImagePalette.py +++ /dev/null @@ -1,266 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# image palette object -# -# History: -# 1996-03-11 fl Rewritten. 
-# 1997-01-03 fl Up and running. -# 1997-08-23 fl Added load hack -# 2001-04-16 fl Fixed randint shadow bug in random() -# -# Copyright (c) 1997-2001 by Secret Labs AB -# Copyright (c) 1996-1997 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -import array - -from . import GimpGradientFile, GimpPaletteFile, ImageColor, PaletteFile - - -class ImagePalette: - """ - Color palette for palette mapped images - - :param mode: The mode to use for the palette. See: - :ref:`concept-modes`. Defaults to "RGB" - :param palette: An optional palette. If given, it must be a bytearray, - an array or a list of ints between 0-255. The list must consist of - all channels for one color followed by the next color (e.g. RGBRGBRGB). - Defaults to an empty palette. - """ - - def __init__(self, mode="RGB", palette=None): - self.mode = mode - self.rawmode = None # if set, palette contains raw data - self.palette = palette or bytearray() - self.dirty = None - - @property - def palette(self): - return self._palette - - @palette.setter - def palette(self, palette): - self._colors = None - self._palette = palette - - @property - def colors(self): - if self._colors is None: - mode_len = len(self.mode) - self._colors = {} - for i in range(0, len(self.palette), mode_len): - color = tuple(self.palette[i : i + mode_len]) - if color in self._colors: - continue - self._colors[color] = i // mode_len - return self._colors - - @colors.setter - def colors(self, colors): - self._colors = colors - - def copy(self): - new = ImagePalette() - - new.mode = self.mode - new.rawmode = self.rawmode - if self.palette is not None: - new.palette = self.palette[:] - new.dirty = self.dirty - - return new - - def getdata(self): - """ - Get palette contents in format suitable for the low-level - ``im.putpalette`` primitive. - - .. warning:: This method is experimental. - """ - if self.rawmode: - return self.rawmode, self.palette - return self.mode, self.tobytes() - - def tobytes(self): - """Convert palette to bytes. - - .. warning:: This method is experimental. - """ - if self.rawmode: - msg = "palette contains raw palette data" - raise ValueError(msg) - if isinstance(self.palette, bytes): - return self.palette - arr = array.array("B", self.palette) - return arr.tobytes() - - # Declare tostring as an alias for tobytes - tostring = tobytes - - def getcolor(self, color, image=None): - """Given an rgb tuple, allocate palette entry. - - .. warning:: This method is experimental. 
- """ - if self.rawmode: - msg = "palette contains raw palette data" - raise ValueError(msg) - if isinstance(color, tuple): - if self.mode == "RGB": - if len(color) == 4: - if color[3] != 255: - msg = "cannot add non-opaque RGBA color to RGB palette" - raise ValueError(msg) - color = color[:3] - elif self.mode == "RGBA": - if len(color) == 3: - color += (255,) - try: - return self.colors[color] - except KeyError as e: - # allocate new color slot - if not isinstance(self.palette, bytearray): - self._palette = bytearray(self.palette) - index = len(self.palette) // 3 - special_colors = () - if image: - special_colors = ( - image.info.get("background"), - image.info.get("transparency"), - ) - while index in special_colors: - index += 1 - if index >= 256: - if image: - # Search for an unused index - for i, count in reversed(list(enumerate(image.histogram()))): - if count == 0 and i not in special_colors: - index = i - break - if index >= 256: - msg = "cannot allocate more than 256 colors" - raise ValueError(msg) from e - self.colors[color] = index - if index * 3 < len(self.palette): - self._palette = ( - self.palette[: index * 3] - + bytes(color) - + self.palette[index * 3 + 3 :] - ) - else: - self._palette += bytes(color) - self.dirty = 1 - return index - else: - msg = f"unknown color specifier: {repr(color)}" - raise ValueError(msg) - - def save(self, fp): - """Save palette to text file. - - .. warning:: This method is experimental. - """ - if self.rawmode: - msg = "palette contains raw palette data" - raise ValueError(msg) - if isinstance(fp, str): - fp = open(fp, "w") - fp.write("# Palette\n") - fp.write(f"# Mode: {self.mode}\n") - for i in range(256): - fp.write(f"{i}") - for j in range(i * len(self.mode), (i + 1) * len(self.mode)): - try: - fp.write(f" {self.palette[j]}") - except IndexError: - fp.write(" 0") - fp.write("\n") - fp.close() - - -# -------------------------------------------------------------------- -# Internal - - -def raw(rawmode, data): - palette = ImagePalette() - palette.rawmode = rawmode - palette.palette = data - palette.dirty = 1 - return palette - - -# -------------------------------------------------------------------- -# Factories - - -def make_linear_lut(black, white): - lut = [] - if black == 0: - for i in range(256): - lut.append(white * i // 255) - else: - raise NotImplementedError # FIXME - return lut - - -def make_gamma_lut(exp): - lut = [] - for i in range(256): - lut.append(int(((i / 255.0) ** exp) * 255.0 + 0.5)) - return lut - - -def negative(mode="RGB"): - palette = list(range(256 * len(mode))) - palette.reverse() - return ImagePalette(mode, [i // len(mode) for i in palette]) - - -def random(mode="RGB"): - from random import randint - - palette = [] - for i in range(256 * len(mode)): - palette.append(randint(0, 255)) - return ImagePalette(mode, palette) - - -def sepia(white="#fff0c0"): - bands = [make_linear_lut(0, band) for band in ImageColor.getrgb(white)] - return ImagePalette("RGB", [bands[i % 3][i // 3] for i in range(256 * 3)]) - - -def wedge(mode="RGB"): - palette = list(range(256 * len(mode))) - return ImagePalette(mode, [i // len(mode) for i in palette]) - - -def load(filename): - # FIXME: supports GIMP gradients only - - with open(filename, "rb") as fp: - for paletteHandler in [ - GimpPaletteFile.GimpPaletteFile, - GimpGradientFile.GimpGradientFile, - PaletteFile.PaletteFile, - ]: - try: - fp.seek(0) - lut = paletteHandler(fp).getpalette() - if lut: - break - except (SyntaxError, ValueError): - # import traceback - # traceback.print_exc() - 
pass - else: - msg = "cannot load palette" - raise OSError(msg) - - return lut # data, rawmode diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/PyAccess.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/PyAccess.py deleted file mode 100644 index 99b46a4a66c013afc08edf134384e7a1d4dc200a..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/PIL/PyAccess.py +++ /dev/null @@ -1,363 +0,0 @@ -# -# The Python Imaging Library -# Pillow fork -# -# Python implementation of the PixelAccess Object -# -# Copyright (c) 1997-2009 by Secret Labs AB. All rights reserved. -# Copyright (c) 1995-2009 by Fredrik Lundh. -# Copyright (c) 2013 Eric Soroos -# -# See the README file for information on usage and redistribution -# - -# Notes: -# -# * Implements the pixel access object following Access.c -# * Taking only the tuple form, which is used from python. -# * Fill.c uses the integer form, but it's still going to use the old -# Access.c implementation. -# - -import logging -import sys - -from ._deprecate import deprecate - -try: - from cffi import FFI - - defs = """ - struct Pixel_RGBA { - unsigned char r,g,b,a; - }; - struct Pixel_I16 { - unsigned char l,r; - }; - """ - ffi = FFI() - ffi.cdef(defs) -except ImportError as ex: - # Allow error import for doc purposes, but error out when accessing - # anything in core. - from ._util import DeferredError - - FFI = ffi = DeferredError(ex) - -logger = logging.getLogger(__name__) - - -class PyAccess: - def __init__(self, img, readonly=False): - deprecate("PyAccess", 11) - vals = dict(img.im.unsafe_ptrs) - self.readonly = readonly - self.image8 = ffi.cast("unsigned char **", vals["image8"]) - self.image32 = ffi.cast("int **", vals["image32"]) - self.image = ffi.cast("unsigned char **", vals["image"]) - self.xsize, self.ysize = img.im.size - self._img = img - - # Keep pointer to im object to prevent dereferencing. - self._im = img.im - if self._im.mode in ("P", "PA"): - self._palette = img.palette - - # Debugging is polluting test traces, only useful here - # when hacking on PyAccess - # logger.debug("%s", vals) - self._post_init() - - def _post_init(self): - pass - - def __setitem__(self, xy, color): - """ - Modifies the pixel at x,y. The color is given as a single - numerical value for single band images, and a tuple for - multi-band images - - :param xy: The pixel coordinate, given as (x, y). See - :ref:`coordinate-system`. - :param color: The pixel value. - """ - if self.readonly: - msg = "Attempt to putpixel a read only image" - raise ValueError(msg) - (x, y) = xy - if x < 0: - x = self.xsize + x - if y < 0: - y = self.ysize + y - (x, y) = self.check_xy((x, y)) - - if ( - self._im.mode in ("P", "PA") - and isinstance(color, (list, tuple)) - and len(color) in [3, 4] - ): - # RGB or RGBA value for a P or PA image - if self._im.mode == "PA": - alpha = color[3] if len(color) == 4 else 255 - color = color[:3] - color = self._palette.getcolor(color, self._img) - if self._im.mode == "PA": - color = (color, alpha) - - return self.set_pixel(x, y, color) - - def __getitem__(self, xy): - """ - Returns the pixel at x,y. The pixel is returned as a single - value for single band images or a tuple for multiple band - images - - :param xy: The pixel coordinate, given as (x, y). See - :ref:`coordinate-system`. - :returns: a pixel value for single band images, a tuple of - pixel values for multiband images. 
- """ - (x, y) = xy - if x < 0: - x = self.xsize + x - if y < 0: - y = self.ysize + y - (x, y) = self.check_xy((x, y)) - return self.get_pixel(x, y) - - putpixel = __setitem__ - getpixel = __getitem__ - - def check_xy(self, xy): - (x, y) = xy - if not (0 <= x < self.xsize and 0 <= y < self.ysize): - msg = "pixel location out of range" - raise ValueError(msg) - return xy - - -class _PyAccess32_2(PyAccess): - """PA, LA, stored in first and last bytes of a 32 bit word""" - - def _post_init(self, *args, **kwargs): - self.pixels = ffi.cast("struct Pixel_RGBA **", self.image32) - - def get_pixel(self, x, y): - pixel = self.pixels[y][x] - return pixel.r, pixel.a - - def set_pixel(self, x, y, color): - pixel = self.pixels[y][x] - # tuple - pixel.r = min(color[0], 255) - pixel.a = min(color[1], 255) - - -class _PyAccess32_3(PyAccess): - """RGB and friends, stored in the first three bytes of a 32 bit word""" - - def _post_init(self, *args, **kwargs): - self.pixels = ffi.cast("struct Pixel_RGBA **", self.image32) - - def get_pixel(self, x, y): - pixel = self.pixels[y][x] - return pixel.r, pixel.g, pixel.b - - def set_pixel(self, x, y, color): - pixel = self.pixels[y][x] - # tuple - pixel.r = min(color[0], 255) - pixel.g = min(color[1], 255) - pixel.b = min(color[2], 255) - pixel.a = 255 - - -class _PyAccess32_4(PyAccess): - """RGBA etc, all 4 bytes of a 32 bit word""" - - def _post_init(self, *args, **kwargs): - self.pixels = ffi.cast("struct Pixel_RGBA **", self.image32) - - def get_pixel(self, x, y): - pixel = self.pixels[y][x] - return pixel.r, pixel.g, pixel.b, pixel.a - - def set_pixel(self, x, y, color): - pixel = self.pixels[y][x] - # tuple - pixel.r = min(color[0], 255) - pixel.g = min(color[1], 255) - pixel.b = min(color[2], 255) - pixel.a = min(color[3], 255) - - -class _PyAccess8(PyAccess): - """1, L, P, 8 bit images stored as uint8""" - - def _post_init(self, *args, **kwargs): - self.pixels = self.image8 - - def get_pixel(self, x, y): - return self.pixels[y][x] - - def set_pixel(self, x, y, color): - try: - # integer - self.pixels[y][x] = min(color, 255) - except TypeError: - # tuple - self.pixels[y][x] = min(color[0], 255) - - -class _PyAccessI16_N(PyAccess): - """I;16 access, native bitendian without conversion""" - - def _post_init(self, *args, **kwargs): - self.pixels = ffi.cast("unsigned short **", self.image) - - def get_pixel(self, x, y): - return self.pixels[y][x] - - def set_pixel(self, x, y, color): - try: - # integer - self.pixels[y][x] = min(color, 65535) - except TypeError: - # tuple - self.pixels[y][x] = min(color[0], 65535) - - -class _PyAccessI16_L(PyAccess): - """I;16L access, with conversion""" - - def _post_init(self, *args, **kwargs): - self.pixels = ffi.cast("struct Pixel_I16 **", self.image) - - def get_pixel(self, x, y): - pixel = self.pixels[y][x] - return pixel.l + pixel.r * 256 - - def set_pixel(self, x, y, color): - pixel = self.pixels[y][x] - try: - color = min(color, 65535) - except TypeError: - color = min(color[0], 65535) - - pixel.l = color & 0xFF # noqa: E741 - pixel.r = color >> 8 - - -class _PyAccessI16_B(PyAccess): - """I;16B access, with conversion""" - - def _post_init(self, *args, **kwargs): - self.pixels = ffi.cast("struct Pixel_I16 **", self.image) - - def get_pixel(self, x, y): - pixel = self.pixels[y][x] - return pixel.l * 256 + pixel.r - - def set_pixel(self, x, y, color): - pixel = self.pixels[y][x] - try: - color = min(color, 65535) - except Exception: - color = min(color[0], 65535) - - pixel.l = color >> 8 # noqa: E741 - pixel.r = color & 
0xFF - - -class _PyAccessI32_N(PyAccess): - """Signed Int32 access, native endian""" - - def _post_init(self, *args, **kwargs): - self.pixels = self.image32 - - def get_pixel(self, x, y): - return self.pixels[y][x] - - def set_pixel(self, x, y, color): - self.pixels[y][x] = color - - -class _PyAccessI32_Swap(PyAccess): - """I;32L/B access, with byteswapping conversion""" - - def _post_init(self, *args, **kwargs): - self.pixels = self.image32 - - def reverse(self, i): - orig = ffi.new("int *", i) - chars = ffi.cast("unsigned char *", orig) - chars[0], chars[1], chars[2], chars[3] = chars[3], chars[2], chars[1], chars[0] - return ffi.cast("int *", chars)[0] - - def get_pixel(self, x, y): - return self.reverse(self.pixels[y][x]) - - def set_pixel(self, x, y, color): - self.pixels[y][x] = self.reverse(color) - - -class _PyAccessF(PyAccess): - """32 bit float access""" - - def _post_init(self, *args, **kwargs): - self.pixels = ffi.cast("float **", self.image32) - - def get_pixel(self, x, y): - return self.pixels[y][x] - - def set_pixel(self, x, y, color): - try: - # not a tuple - self.pixels[y][x] = color - except TypeError: - # tuple - self.pixels[y][x] = color[0] - - -mode_map = { - "1": _PyAccess8, - "L": _PyAccess8, - "P": _PyAccess8, - "I;16N": _PyAccessI16_N, - "LA": _PyAccess32_2, - "La": _PyAccess32_2, - "PA": _PyAccess32_2, - "RGB": _PyAccess32_3, - "LAB": _PyAccess32_3, - "HSV": _PyAccess32_3, - "YCbCr": _PyAccess32_3, - "RGBA": _PyAccess32_4, - "RGBa": _PyAccess32_4, - "RGBX": _PyAccess32_4, - "CMYK": _PyAccess32_4, - "F": _PyAccessF, - "I": _PyAccessI32_N, -} - -if sys.byteorder == "little": - mode_map["I;16"] = _PyAccessI16_N - mode_map["I;16L"] = _PyAccessI16_N - mode_map["I;16B"] = _PyAccessI16_B - - mode_map["I;32L"] = _PyAccessI32_N - mode_map["I;32B"] = _PyAccessI32_Swap -else: - mode_map["I;16"] = _PyAccessI16_L - mode_map["I;16L"] = _PyAccessI16_L - mode_map["I;16B"] = _PyAccessI16_N - - mode_map["I;32L"] = _PyAccessI32_Swap - mode_map["I;32B"] = _PyAccessI32_N - - -def new(img, readonly=False): - access_type = mode_map.get(img.mode, None) - if not access_type: - logger.debug("PyAccess Not Implemented: %s", img.mode) - return None - return access_type(img, readonly) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rcode.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rcode.py deleted file mode 100644 index 8e6386f828019b379bbe97a3950ce604c4778f7f..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rcode.py +++ /dev/null @@ -1,168 +0,0 @@ -# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license - -# Copyright (C) 2001-2017 Nominum, Inc. -# -# Permission to use, copy, modify, and distribute this software and its -# documentation for any purpose with or without fee is hereby granted, -# provided that the above copyright notice and this permission notice -# appear in all copies. -# -# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES -# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF -# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR -# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES -# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN -# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT -# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
- -"""DNS Result Codes.""" - -from typing import Tuple - -import dns.enum -import dns.exception - - -class Rcode(dns.enum.IntEnum): - #: No error - NOERROR = 0 - #: Format error - FORMERR = 1 - #: Server failure - SERVFAIL = 2 - #: Name does not exist ("Name Error" in RFC 1025 terminology). - NXDOMAIN = 3 - #: Not implemented - NOTIMP = 4 - #: Refused - REFUSED = 5 - #: Name exists. - YXDOMAIN = 6 - #: RRset exists. - YXRRSET = 7 - #: RRset does not exist. - NXRRSET = 8 - #: Not authoritative. - NOTAUTH = 9 - #: Name not in zone. - NOTZONE = 10 - #: DSO-TYPE Not Implemented - DSOTYPENI = 11 - #: Bad EDNS version. - BADVERS = 16 - #: TSIG Signature Failure - BADSIG = 16 - #: Key not recognized. - BADKEY = 17 - #: Signature out of time window. - BADTIME = 18 - #: Bad TKEY Mode. - BADMODE = 19 - #: Duplicate key name. - BADNAME = 20 - #: Algorithm not supported. - BADALG = 21 - #: Bad Truncation - BADTRUNC = 22 - #: Bad/missing Server Cookie - BADCOOKIE = 23 - - @classmethod - def _maximum(cls): - return 4095 - - @classmethod - def _unknown_exception_class(cls): - return UnknownRcode - - -class UnknownRcode(dns.exception.DNSException): - """A DNS rcode is unknown.""" - - -def from_text(text: str) -> Rcode: - """Convert text into an rcode. - - *text*, a ``str``, the textual rcode or an integer in textual form. - - Raises ``dns.rcode.UnknownRcode`` if the rcode mnemonic is unknown. - - Returns a ``dns.rcode.Rcode``. - """ - - return Rcode.from_text(text) - - -def from_flags(flags: int, ednsflags: int) -> Rcode: - """Return the rcode value encoded by flags and ednsflags. - - *flags*, an ``int``, the DNS flags field. - - *ednsflags*, an ``int``, the EDNS flags field. - - Raises ``ValueError`` if rcode is < 0 or > 4095 - - Returns a ``dns.rcode.Rcode``. - """ - - value = (flags & 0x000F) | ((ednsflags >> 20) & 0xFF0) - return Rcode.make(value) - - -def to_flags(value: Rcode) -> Tuple[int, int]: - """Return a (flags, ednsflags) tuple which encodes the rcode. - - *value*, a ``dns.rcode.Rcode``, the rcode. - - Raises ``ValueError`` if rcode is < 0 or > 4095. - - Returns an ``(int, int)`` tuple. - """ - - if value < 0 or value > 4095: - raise ValueError("rcode must be >= 0 and <= 4095") - v = value & 0xF - ev = (value & 0xFF0) << 20 - return (v, ev) - - -def to_text(value: Rcode, tsig: bool = False) -> str: - """Convert rcode into text. - - *value*, a ``dns.rcode.Rcode``, the rcode. - - Raises ``ValueError`` if rcode is < 0 or > 4095. - - Returns a ``str``. 
- """ - - if tsig and value == Rcode.BADVERS: - return "BADSIG" - return Rcode.to_text(value) - - -### BEGIN generated Rcode constants - -NOERROR = Rcode.NOERROR -FORMERR = Rcode.FORMERR -SERVFAIL = Rcode.SERVFAIL -NXDOMAIN = Rcode.NXDOMAIN -NOTIMP = Rcode.NOTIMP -REFUSED = Rcode.REFUSED -YXDOMAIN = Rcode.YXDOMAIN -YXRRSET = Rcode.YXRRSET -NXRRSET = Rcode.NXRRSET -NOTAUTH = Rcode.NOTAUTH -NOTZONE = Rcode.NOTZONE -DSOTYPENI = Rcode.DSOTYPENI -BADVERS = Rcode.BADVERS -BADSIG = Rcode.BADSIG -BADKEY = Rcode.BADKEY -BADTIME = Rcode.BADTIME -BADMODE = Rcode.BADMODE -BADNAME = Rcode.BADNAME -BADALG = Rcode.BADALG -BADTRUNC = Rcode.BADTRUNC -BADCOOKIE = Rcode.BADCOOKIE - -### END generated Rcode constants diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/readers/weaviate/data_structs.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/readers/weaviate/data_structs.py deleted file mode 100644 index 212fbf1c6de64add4faed18f600996c6efe16e30..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/readers/weaviate/data_structs.py +++ /dev/null @@ -1,277 +0,0 @@ -"""Weaviate-specific serializers for LlamaIndex data structures. - -Contain conversion to and from dataclasses that LlamaIndex uses. - -""" - -import json -from abc import abstractmethod -from typing import Any, Dict, Generic, List, Optional, TypeVar, cast - -from gpt_index.data_structs.data_structs import IndexStruct, Node -from gpt_index.readers.weaviate.utils import ( - get_by_id, - parse_get_response, - validate_client, -) -from gpt_index.utils import get_new_id - -IS = TypeVar("IS", bound=IndexStruct) - - -class BaseWeaviateIndexStruct(Generic[IS]): - """Base Weaviate index struct.""" - - @classmethod - @abstractmethod - def _class_name(cls, class_prefix: str) -> str: - """Return class name.""" - - @classmethod - def _get_common_properties(cls) -> List[Dict]: - """Get common properties.""" - return [ - { - "dataType": ["string"], - "description": "Text property", - "name": "text", - }, - { - "dataType": ["string"], - "description": "Document id", - "name": "doc_id", - }, - { - "dataType": ["string"], - "description": "extra_info (in JSON)", - "name": "extra_info", - }, - ] - - @classmethod - @abstractmethod - def _get_properties(cls) -> List[Dict]: - """Get properties specific to each index struct. - - Used in creating schema. - - """ - - @classmethod - def _get_by_id(cls, client: Any, object_id: str, class_prefix: str) -> Dict: - """Get entry by id.""" - validate_client(client) - class_name = cls._class_name(class_prefix) - properties = cls._get_common_properties() + cls._get_properties() - prop_names = [p["name"] for p in properties] - entry = get_by_id(client, object_id, class_name, prop_names) - return entry - - @classmethod - def create_schema(cls, client: Any, class_prefix: str) -> None: - """Create schema.""" - validate_client(client) - # first check if schema exists - schema = client.schema.get() - classes = schema["classes"] - existing_class_names = {c["class"] for c in classes} - # if schema already exists, don't create - class_name = cls._class_name(class_prefix) - if class_name in existing_class_names: - return - - # get common properties - properties = cls._get_common_properties() - # get specific properties - properties.extend(cls._get_properties()) - class_obj = { - "class": cls._class_name(class_prefix), # <= note the capital "A". 
- "description": f"Class for {class_name}", - "properties": properties, - } - client.schema.create_class(class_obj) - - @classmethod - @abstractmethod - def _entry_to_gpt_index(cls, entry: Dict) -> IS: - """Convert to LlamaIndex list.""" - - @classmethod - def to_gpt_index_list( - cls, - client: Any, - class_prefix: str, - vector: Optional[List[float]] = None, - object_limit: Optional[int] = None, - ) -> List[IS]: - """Convert to LlamaIndex list.""" - validate_client(client) - class_name = cls._class_name(class_prefix) - properties = cls._get_common_properties() + cls._get_properties() - prop_names = [p["name"] for p in properties] - query = client.query.get(class_name, prop_names).with_additional( - ["id", "vector"] - ) - if vector is not None: - query = query.with_near_vector( - { - "vector": vector, - } - ) - if object_limit is not None: - query = query.with_limit(object_limit) - query_result = query.do() - parsed_result = parse_get_response(query_result) - entries = parsed_result[class_name] - - results: List[IS] = [] - for entry in entries: - results.append(cls._entry_to_gpt_index(entry)) - - return results - - @classmethod - @abstractmethod - def _from_gpt_index( - cls, client: Any, index: IS, class_prefix: str, batch: Optional[Any] = None - ) -> str: - """Convert from LlamaIndex.""" - - @classmethod - def from_gpt_index(cls, client: Any, index: IS, class_prefix: str) -> str: - """Convert from LlamaIndex.""" - validate_client(client) - index_id = cls._from_gpt_index(client, index, class_prefix) - client.batch.flush() - return index_id - - -class WeaviateNode(BaseWeaviateIndexStruct[Node]): - """Weaviate node.""" - - @classmethod - def _class_name(cls, class_prefix: str) -> str: - """Return class name.""" - return f"{class_prefix}_Node" - - @classmethod - def _get_properties(cls) -> List[Dict]: - """Create schema.""" - return [ - { - "dataType": ["int"], - "description": "The index of the Node", - "name": "index", - }, - { - "dataType": ["int[]"], - "description": "The child_indices of the Node", - "name": "child_indices", - }, - { - "dataType": ["string"], - "description": "The ref_doc_id of the Node", - "name": "ref_doc_id", - }, - { - "dataType": ["string"], - "description": "node_info (in JSON)", - "name": "node_info", - }, - ] - - @classmethod - def _entry_to_gpt_index(cls, entry: Dict) -> Node: - """Convert to LlamaIndex list.""" - extra_info_str = entry["extra_info"] - if extra_info_str == "": - extra_info = None - else: - extra_info = json.loads(extra_info_str) - - node_info_str = entry["node_info"] - if node_info_str == "": - node_info = None - else: - node_info = json.loads(node_info_str) - return Node( - text=entry["text"], - doc_id=entry["doc_id"], - index=int(entry["index"]), - child_indices=entry["child_indices"], - ref_doc_id=entry["ref_doc_id"], - embedding=entry["_additional"]["vector"], - extra_info=extra_info, - node_info=node_info, - ) - - @classmethod - def _from_gpt_index( - cls, client: Any, node: Node, class_prefix: str, batch: Optional[Any] = None - ) -> str: - """Convert from LlamaIndex.""" - node_dict = node.to_dict() - vector = node_dict.pop("embedding") - extra_info = node_dict.pop("extra_info") - # json-serialize the extra_info - extra_info_str = "" - if extra_info is not None: - extra_info_str = json.dumps(extra_info) - node_dict["extra_info"] = extra_info_str - # json-serialize the node_info - node_info = node_dict.pop("node_info") - node_info_str = "" - if node_info is not None: - node_info_str = json.dumps(node_info) - node_dict["node_info"] = 
node_info_str - - # TODO: account for existing nodes that are stored - node_id = get_new_id(set()) - class_name = cls._class_name(class_prefix) - - # if batch object is provided (via a contexxt manager), use that instead - if batch is not None: - batch.add_data_object(node_dict, class_name, node_id, vector) - else: - client.batch.add_data_object(node_dict, class_name, node_id, vector) - - return node_id - - @classmethod - def delete_document(cls, client: Any, ref_doc_id: str, class_prefix: str) -> None: - """Delete entry.""" - validate_client(client) - # make sure that each entry - class_name = cls._class_name(class_prefix) - where_filter = { - "path": ["ref_doc_id"], - "operator": "Equal", - "valueString": ref_doc_id, - } - query = ( - client.query.get(class_name) - .with_additional(["id"]) - .with_where(where_filter) - ) - - query_result = query.do() - parsed_result = parse_get_response(query_result) - entries = parsed_result[class_name] - for entry in entries: - client.data_object.delete(entry["_additional"]["id"], class_name) - - @classmethod - def from_gpt_index_batch( - cls, client: Any, nodes: List[Node], class_prefix: str - ) -> List[str]: - """Convert from gpt index.""" - from weaviate import Client # noqa: F401 - - client = cast(Client, client) - validate_client(client) - index_ids = [] - with client.batch as batch: - for node in nodes: - index_id = cls._from_gpt_index(client, node, class_prefix, batch=batch) - index_ids.append(index_id) - return index_ids diff --git a/spaces/johnslegers/stable-diffusion-gui-test/app.bckp2.py b/spaces/johnslegers/stable-diffusion-gui-test/app.bckp2.py deleted file mode 100644 index 20ad607b4447558923ce3dad198bf2782bb92112..0000000000000000000000000000000000000000 --- a/spaces/johnslegers/stable-diffusion-gui-test/app.bckp2.py +++ /dev/null @@ -1,330 +0,0 @@ -# app.py -import uvicorn - -import json -import traceback - -import sys -import os - -SD_DIR = os.getcwd() -print('started in ', SD_DIR) - -SD_UI_DIR = './ui' -#sys.path.append(os.path.dirname(SD_UI_DIR)) - -#CONFIG_DIR = os.path.abspath(os.path.join(SD_UI_DIR, '..', 'scripts')) -#MODELS_DIR = os.path.abspath(os.path.join(SD_DIR, '..', 'models')) - -OUTPUT_DIRNAME = "Stable Diffusion UI" # in the user's home folder - -from fastapi import FastAPI, HTTPException -from fastapi.staticfiles import StaticFiles -from starlette.responses import FileResponse, StreamingResponse -from pydantic import BaseModel -import logging - -from sd_internal import Request, Response - -app = FastAPI() - -model_loaded = False -model_is_loading = False - -modifiers_cache = None -outpath = os.path.join(os.path.expanduser("~"), OUTPUT_DIRNAME) - -# don't show access log entries for URLs that start with the given prefix -ACCESS_LOG_SUPPRESS_PATH_PREFIXES = ['/ping', '/modifier-thumbnails'] - -app.mount('/media', StaticFiles(directory=os.path.join(SD_UI_DIR, 'media/')), name="media") - -# defaults from https://huggingface.co/blog/stable_diffusion -class ImageRequest(BaseModel): - session_id: str = "session" - prompt: str = "" - negative_prompt: str = "" - init_image: str = None # base64 - mask: str = None # base64 - num_outputs: int = 1 - num_inference_steps: int = 50 - guidance_scale: float = 7.5 - width: int = 512 - height: int = 512 - seed: int = 42 - prompt_strength: float = 0.8 - sampler: str = None # "ddim", "plms", "heun", "euler", "euler_a", "dpm2", "dpm2_a", "lms" - # allow_nsfw: bool = False - save_to_disk_path: str = None - turbo: bool = True - use_cpu: bool = False - use_full_precision: bool = False - 
use_face_correction: str = None # or "GFPGANv1.3" - use_upscale: str = None # or "RealESRGAN_x4plus" or "RealESRGAN_x4plus_anime_6B" - use_stable_diffusion_model: str = "sd-v1-4" - show_only_filtered_image: bool = False - output_format: str = "jpeg" # or "png" - - stream_progress_updates: bool = False - stream_image_progress: bool = False - -class SetAppConfigRequest(BaseModel): - update_branch: str = "main" - -@app.get('/') -def read_root(): - headers = {"Cache-Control": "no-cache, no-store, must-revalidate", "Pragma": "no-cache", "Expires": "0"} - return FileResponse(os.path.join(SD_UI_DIR, 'index.html'), headers=headers) - -@app.get('/ping') -async def ping(): - global model_loaded, model_is_loading - - try: - if model_loaded: - return {'OK'} - - if model_is_loading: - return {'ERROR'} - - model_is_loading = True - - from sd_internal import runtime - - runtime.load_model_ckpt(ckpt_to_use=get_initial_model_to_load()) - - model_loaded = True - model_is_loading = False - - return {'OK'} - except Exception as e: - print(traceback.format_exc()) - return HTTPException(status_code=500, detail=str(e)) - -# needs to support the legacy installations -def get_initial_model_to_load(): - custom_weight_path = os.path.join(SD_DIR, 'custom-model.ckpt') - ckpt_to_use = "sd-v1-4" if not os.path.exists(custom_weight_path) else "custom-model" - - ckpt_to_use = os.path.join(SD_DIR, ckpt_to_use) - - config = getConfig() - if 'model' in config and 'stable-diffusion' in config['model']: - model_name = config['model']['stable-diffusion'] - model_path = resolve_model_to_use(model_name) - - if os.path.exists(model_path + '.ckpt'): - ckpt_to_use = model_path - else: - print('Could not find the configured custom model at:', model_path + '.ckpt', '. Using the default one:', ckpt_to_use + '.ckpt') - - return ckpt_to_use - -def resolve_model_to_use(model_name): - if model_name in ('sd-v1-4', 'custom-model'): - model_path = os.path.join(MODELS_DIR, 'stable-diffusion', model_name) - - legacy_model_path = os.path.join(SD_DIR, model_name) - if not os.path.exists(model_path + '.ckpt') and os.path.exists(legacy_model_path + '.ckpt'): - model_path = legacy_model_path - else: - model_path = os.path.join(MODELS_DIR, 'stable-diffusion', model_name) - - return model_path - -def save_model_to_config(model_name): - config = getConfig() - if 'model' not in config: - config['model'] = {} - - config['model']['stable-diffusion'] = model_name - - setConfig(config) - -@app.post('/image') -def image(req : ImageRequest): - from sd_internal import runtime - - r = Request() - r.session_id = req.session_id - r.prompt = req.prompt - r.negative_prompt = req.negative_prompt - r.init_image = req.init_image - r.mask = req.mask - r.num_outputs = req.num_outputs - r.num_inference_steps = req.num_inference_steps - r.guidance_scale = req.guidance_scale - r.width = req.width - r.height = req.height - r.seed = req.seed - r.prompt_strength = req.prompt_strength - r.sampler = req.sampler - # r.allow_nsfw = req.allow_nsfw - r.turbo = req.turbo - r.use_cpu = req.use_cpu - r.use_full_precision = req.use_full_precision - r.save_to_disk_path = req.save_to_disk_path - r.use_upscale: str = req.use_upscale - r.use_face_correction = req.use_face_correction - r.show_only_filtered_image = req.show_only_filtered_image - r.output_format = req.output_format - - r.stream_progress_updates = True # the underlying implementation only supports streaming - r.stream_image_progress = req.stream_image_progress - - r.use_stable_diffusion_model = 
resolve_model_to_use(req.use_stable_diffusion_model) - - save_model_to_config(req.use_stable_diffusion_model) - - try: - if not req.stream_progress_updates: - r.stream_image_progress = False - - res = runtime.mk_img(r) - - if req.stream_progress_updates: - return StreamingResponse(res, media_type='application/json') - else: # compatibility mode: buffer the streaming responses, and return the last one - last_result = None - - for result in res: - last_result = result - - return json.loads(last_result) - except Exception as e: - print(traceback.format_exc()) - return HTTPException(status_code=500, detail=str(e)) - -@app.get('/image/stop') -def stop(): - try: - if model_is_loading: - return {'ERROR'} - - from sd_internal import runtime - runtime.stop_processing = True - - return {'OK'} - except Exception as e: - print(traceback.format_exc()) - return HTTPException(status_code=500, detail=str(e)) - -@app.get('/image/tmp/{session_id}/{img_id}') -def get_image(session_id, img_id): - from sd_internal import runtime - buf = runtime.temp_images[session_id + '/' + img_id] - buf.seek(0) - return StreamingResponse(buf, media_type='image/jpeg') - -@app.post('/app_config') -async def setAppConfig(req : SetAppConfigRequest): - try: - config = { - 'update_branch': req.update_branch - } - - config_json_str = json.dumps(config) - config_bat_str = f'@set update_branch={req.update_branch}' - config_sh_str = f'export update_branch={req.update_branch}' - - config_json_path = os.path.join(CONFIG_DIR, 'config.json') - config_bat_path = os.path.join(CONFIG_DIR, 'config.bat') - config_sh_path = os.path.join(CONFIG_DIR, 'config.sh') - - with open(config_json_path, 'w') as f: - f.write(config_json_str) - - with open(config_bat_path, 'w') as f: - f.write(config_bat_str) - - with open(config_sh_path, 'w') as f: - f.write(config_sh_str) - - return {'OK'} - except Exception as e: - print(traceback.format_exc()) - return HTTPException(status_code=500, detail=str(e)) - -@app.get('/app_config') -def getAppConfig(): - try: - config_json_path = os.path.join(CONFIG_DIR, 'config.json') - - if not os.path.exists(config_json_path): - return HTTPException(status_code=500, detail="No config file") - - with open(config_json_path, 'r') as f: - return json.load(f) - except Exception as e: - print(traceback.format_exc()) - return HTTPException(status_code=500, detail=str(e)) - -def getConfig(): - try: - config_json_path = os.path.join(CONFIG_DIR, 'config.json') - - if not os.path.exists(config_json_path): - return {} - - with open(config_json_path, 'r') as f: - return json.load(f) - except Exception as e: - return {} - -def setConfig(config): - try: - config_json_path = os.path.join(CONFIG_DIR, 'config.json') - - with open(config_json_path, 'w') as f: - return json.dump(config, f) - except: - print(traceback.format_exc()) - -@app.get('/models') -def getModels(): - models = { - 'active': { - 'stable-diffusion': 'sd-v1-4', - }, - 'options': { - 'stable-diffusion': ['sd-v1-4'], - }, - } - - # custom models - sd_models_dir = os.path.join(MODELS_DIR, 'stable-diffusion') - for file in os.listdir(sd_models_dir): - if file.endswith('.ckpt'): - model_name = os.path.splitext(file)[0] - models['options']['stable-diffusion'].append(model_name) - - # legacy - custom_weight_path = os.path.join(SD_DIR, 'custom-model.ckpt') - if os.path.exists(custom_weight_path): - models['active']['stable-diffusion'] = 'custom-model' - models['options']['stable-diffusion'].append('custom-model') - - config = getConfig() - if 'model' in config and 'stable-diffusion' 
in config['model']: - models['active']['stable-diffusion'] = config['model']['stable-diffusion'] - - return models - -@app.get('/modifiers.json') -def read_modifiers(): - headers = {"Cache-Control": "no-cache, no-store, must-revalidate", "Pragma": "no-cache", "Expires": "0"} - return FileResponse(os.path.join(SD_UI_DIR, 'modifiers.json'), headers=headers) - -@app.get('/output_dir') -def read_home_dir(): - return {outpath} - -# don't log certain requests -class LogSuppressFilter(logging.Filter): - def filter(self, record: logging.LogRecord) -> bool: - path = record.getMessage() - for prefix in ACCESS_LOG_SUPPRESS_PATH_PREFIXES: - if path.find(prefix) != -1: - return False - - return True diff --git a/spaces/joshen/gpt-academic/crazy_functions/test_project/cpp/cppipc/shm.cpp b/spaces/joshen/gpt-academic/crazy_functions/test_project/cpp/cppipc/shm.cpp deleted file mode 100644 index 593ce3129dc1574dbc8fc8b088cf595df215de93..0000000000000000000000000000000000000000 --- a/spaces/joshen/gpt-academic/crazy_functions/test_project/cpp/cppipc/shm.cpp +++ /dev/null @@ -1,103 +0,0 @@ - -#include -#include - -#include "libipc/shm.h" - -#include "libipc/utility/pimpl.h" -#include "libipc/memory/resource.h" - -namespace ipc { -namespace shm { - -class handle::handle_ : public pimpl { -public: - shm::id_t id_ = nullptr; - void* m_ = nullptr; - - ipc::string n_; - std::size_t s_ = 0; -}; - -handle::handle() - : p_(p_->make()) { -} - -handle::handle(char const * name, std::size_t size, unsigned mode) - : handle() { - acquire(name, size, mode); -} - -handle::handle(handle&& rhs) - : handle() { - swap(rhs); -} - -handle::~handle() { - release(); - p_->clear(); -} - -void handle::swap(handle& rhs) { - std::swap(p_, rhs.p_); -} - -handle& handle::operator=(handle rhs) { - swap(rhs); - return *this; -} - -bool handle::valid() const noexcept { - return impl(p_)->m_ != nullptr; -} - -std::size_t handle::size() const noexcept { - return impl(p_)->s_; -} - -char const * handle::name() const noexcept { - return impl(p_)->n_.c_str(); -} - -std::int32_t handle::ref() const noexcept { - return shm::get_ref(impl(p_)->id_); -} - -void handle::sub_ref() noexcept { - shm::sub_ref(impl(p_)->id_); -} - -bool handle::acquire(char const * name, std::size_t size, unsigned mode) { - release(); - impl(p_)->id_ = shm::acquire((impl(p_)->n_ = name).c_str(), size, mode); - impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_)); - return valid(); -} - -std::int32_t handle::release() { - if (impl(p_)->id_ == nullptr) return -1; - return shm::release(detach()); -} - -void* handle::get() const { - return impl(p_)->m_; -} - -void handle::attach(id_t id) { - if (id == nullptr) return; - release(); - impl(p_)->id_ = id; - impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_)); -} - -id_t handle::detach() { - auto old = impl(p_)->id_; - impl(p_)->id_ = nullptr; - impl(p_)->m_ = nullptr; - impl(p_)->s_ = 0; - impl(p_)->n_.clear(); - return old; -} - -} // namespace shm -} // namespace ipc diff --git a/spaces/jt5d/kandinsky-community-kandinsky-2-2-prior/app.py b/spaces/jt5d/kandinsky-community-kandinsky-2-2-prior/app.py deleted file mode 100644 index 9772db53a456f958d634d5a4c6eb47c0a09b8f9f..0000000000000000000000000000000000000000 --- a/spaces/jt5d/kandinsky-community-kandinsky-2-2-prior/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/kandinsky-community/kandinsky-2-2-prior").launch() \ No newline at end of file diff --git a/spaces/kalebu/LangChain_heyooBot/README.md 
b/spaces/kalebu/LangChain_heyooBot/README.md deleted file mode 100644 index 2ea6ae51807497e50b9bfdf3e39f48b5b1c9f52d..0000000000000000000000000000000000000000 --- a/spaces/kalebu/LangChain_heyooBot/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: LangChain HeyooBot -emoji: 📚 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kausmos/clothsy/api.py b/spaces/kausmos/clothsy/api.py deleted file mode 100644 index fd666972fa0e4929823afc0a819a49de13766ef7..0000000000000000000000000000000000000000 --- a/spaces/kausmos/clothsy/api.py +++ /dev/null @@ -1,27 +0,0 @@ -from fastapi import FastAPI -from pydantic import BaseModel -from sentence_transformers import SentenceTransformer -from utils.similarity import get_similar_items -import pandas as pd -import numpy as np - -clothing_data = pd.read_csv('data/clothing_data_preprocessed.csv') -model = SentenceTransformer('model') -embeddings = np.load('data/embeddings.npy') - -app = FastAPI() - -class Query(BaseModel): - query: str - -@app.post("/predict") -def getURL(query: Query): - # Get the query from the request payload - query_text = query.query - # Call your function to retrieve similar item URLs - similar_urls = get_similar_items(query_text, embeddings, clothing_data, 5) - return {"similar_urls": similar_urls} - -if __name__ == '__main__': - import uvicorn - uvicorn.run(app, host='0.0.0.0', port=8080) diff --git a/spaces/kevinwang676/Bert-VITS2/short_audio_transcribe.py b/spaces/kevinwang676/Bert-VITS2/short_audio_transcribe.py deleted file mode 100644 index f1e8b30671f2c2f2fa3c93feb1f4edd3fbe2f545..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/Bert-VITS2/short_audio_transcribe.py +++ /dev/null @@ -1,122 +0,0 @@ -import whisper -import os -import json -import torchaudio -import argparse -import torch - -lang2token = { - 'zh': "[ZH]", - 'ja': "[JA]", - "en": "[EN]", - } -def transcribe_one(audio_path): - # load audio and pad/trim it to fit 30 seconds - audio = whisper.load_audio(audio_path) - audio = whisper.pad_or_trim(audio) - - # make log-Mel spectrogram and move to the same device as the model - mel = whisper.log_mel_spectrogram(audio).to(model.device) - - # detect the spoken language - _, probs = model.detect_language(mel) - print(f"Detected language: {max(probs, key=probs.get)}") - lang = max(probs, key=probs.get) - # decode the audio - options = whisper.DecodingOptions(beam_size=5) - result = whisper.decode(model, mel, options) - - # print the recognized text - print(result.text) - return lang, result.text -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--languages", default="CJE") - parser.add_argument("--whisper_size", default="medium") - args = parser.parse_args() - if args.languages == "CJE": - lang2token = { - 'zh': "[ZH]", - 'ja': "[JA]", - "en": "[EN]", - } - elif args.languages == "CJ": - lang2token = { - 'zh': "[ZH]", - 'ja': "[JA]", - } - elif args.languages == "C": - lang2token = { - 'zh': "[ZH]", - } - assert (torch.cuda.is_available()), "Please enable GPU in order to run Whisper!" 
- model = whisper.load_model(args.whisper_size) - parent_dir = "./custom_character_voice/" - speaker_names = list(os.walk(parent_dir))[0][1] - speaker_annos = [] - total_files = sum([len(files) for r, d, files in os.walk(parent_dir)]) - # resample audios - # 2023/4/21: Get the target sampling rate - with open("./configs/config.json", 'r', encoding='utf-8') as f: - hps = json.load(f) - target_sr = hps['data']['sampling_rate'] - processed_files = 0 - for speaker in speaker_names: - for i, wavfile in enumerate(list(os.walk(parent_dir + speaker))[0][2]): - # try to load file as audio - if wavfile.startswith("processed_"): - continue - try: - wav, sr = torchaudio.load(parent_dir + speaker + "/" + wavfile, frame_offset=0, num_frames=-1, normalize=True, - channels_first=True) - wav = wav.mean(dim=0).unsqueeze(0) - if sr != target_sr: - wav = torchaudio.transforms.Resample(orig_freq=sr, new_freq=target_sr)(wav) - if wav.shape[1] / sr > 20: - print(f"{wavfile} too long, ignoring\n") - save_path = parent_dir + speaker + "/" + f"processed_{i}.wav" - torchaudio.save(save_path, wav, target_sr, channels_first=True) - # transcribe text - lang, text = transcribe_one(save_path) - if lang not in list(lang2token.keys()): - print(f"{lang} not supported, ignoring\n") - continue - text = "ZH|" + text + "\n"# - #text = lang2token[lang] + text + lang2token[lang] + "\n" - speaker_annos.append(save_path + "|" + speaker + "|" + text) - - processed_files += 1 - print(f"Processed: {processed_files}/{total_files}") - except: - continue - - # # clean annotation - # import argparse - # import text - # from utils import load_filepaths_and_text - # for i, line in enumerate(speaker_annos): - # path, sid, txt = line.split("|") - # cleaned_text = text._clean_text(txt, ["cjke_cleaners2"]) - # cleaned_text += "\n" if not cleaned_text.endswith("\n") else "" - # speaker_annos[i] = path + "|" + sid + "|" + cleaned_text - # write into annotation - if len(speaker_annos) == 0: - print("Warning: no short audios found, this IS expected if you have only uploaded long audios, videos or video links.") - print("this IS NOT expected if you have uploaded a zip file of short audios. 
Please check your file structure or make sure your audio language is supported.") - with open("./filelists/short_character_anno.list", 'w', encoding='utf-8') as f: - for line in speaker_annos: - f.write(line) - - # import json - # # generate new config - # with open("./configs/finetune_speaker.json", 'r', encoding='utf-8') as f: - # hps = json.load(f) - # # modify n_speakers - # hps['data']["n_speakers"] = 1000 + len(speaker2id) - # # add speaker names - # for speaker in speaker_names: - # hps['speakers'][speaker] = speaker2id[speaker] - # # save modified config - # with open("./configs/modified_finetune_speaker.json", 'w', encoding='utf-8') as f: - # json.dump(hps, f, indent=2) - # print("finished") diff --git a/spaces/kevinwang676/Voice-Changer-Light/README.md b/spaces/kevinwang676/Voice-Changer-Light/README.md deleted file mode 100644 index 14227ce76449f67d83b66a156b74589fbf3b2c3d..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/Voice-Changer-Light/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: VoiceChange -emoji: 👀 -colorFrom: blue -colorTo: purple -sdk: gradio -sdk_version: 3.28.3 -app_file: app_multi.py -pinned: false -license: mit -duplicated_from: kevinwang676/Voice-Changer ---- diff --git a/spaces/kevinwang676/rvc-models-new/config.py b/spaces/kevinwang676/rvc-models-new/config.py deleted file mode 100644 index 13b76fb82299416b49cedf0691f2d7e4649cb2b0..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/rvc-models-new/config.py +++ /dev/null @@ -1,126 +0,0 @@ -import argparse -import torch -from multiprocessing import cpu_count - - -class Config: - def __init__(self): - self.device = "cuda:0" - self.is_half = True - self.n_cpu = 0 - self.gpu_name = None - self.gpu_mem = None - ( - self.python_cmd, - self.listen_port, - self.iscolab, - self.noparallel, - self.noautoopen, - self.api, - self.share, - self.files - ) = self.arg_parse() - self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config() - - @staticmethod - def arg_parse() -> tuple: - parser = argparse.ArgumentParser() - parser.add_argument("--port", type=int, default=7865, help="Listen port") - parser.add_argument( - "--pycmd", type=str, default="python", help="Python command" - ) - parser.add_argument("--colab", action="store_true", help="Launch in colab") - parser.add_argument( - "--noparallel", action="store_true", help="Disable parallel processing" - ) - parser.add_argument( - "--noautoopen", - action="store_true", - help="Do not open in browser automatically", - ) - parser.add_argument('--api', action="store_true", default=False) - parser.add_argument("--share", action="store_true", default=False, help="share gradio app") - parser.add_argument("--files", action="store_true", default=False, help="load audio from path") - cmd_opts = parser.parse_args() - - cmd_opts.port = cmd_opts.port if 0 <= cmd_opts.port <= 65535 else 7865 - - return ( - cmd_opts.pycmd, - cmd_opts.port, - cmd_opts.colab, - cmd_opts.noparallel, - cmd_opts.noautoopen, - cmd_opts.api, - cmd_opts.share, - cmd_opts.files, - ) - - def device_config(self) -> tuple: - if torch.cuda.is_available(): - i_device = int(self.device.split(":")[-1]) - self.gpu_name = torch.cuda.get_device_name(i_device) - if ( - ("16" in self.gpu_name and "V100" not in self.gpu_name.upper()) - or "P40" in self.gpu_name.upper() - or "1060" in self.gpu_name - or "1070" in self.gpu_name - or "1080" in self.gpu_name - ): - print("16系/10系显卡和P40强制单精度") - self.is_half = False - for config_file in ["32k.json", "40k.json", "48k.json"]: - 
with open(f"configs/{config_file}", "r") as f: - strr = f.read().replace("true", "false") - with open(f"configs/{config_file}", "w") as f: - f.write(strr) - with open("trainset_preprocess_pipeline_print.py", "r") as f: - strr = f.read().replace("3.7", "3.0") - with open("trainset_preprocess_pipeline_print.py", "w") as f: - f.write(strr) - else: - self.gpu_name = None - self.gpu_mem = int( - torch.cuda.get_device_properties(i_device).total_memory - / 1024 - / 1024 - / 1024 - + 0.4 - ) - if self.gpu_mem <= 4: - with open("trainset_preprocess_pipeline_print.py", "r") as f: - strr = f.read().replace("3.7", "3.0") - with open("trainset_preprocess_pipeline_print.py", "w") as f: - f.write(strr) - elif torch.backends.mps.is_available(): - print("没有发现支持的N卡, 使用MPS进行推理") - self.device = "mps" - self.is_half = False - else: - print("没有发现支持的N卡, 使用CPU进行推理") - self.device = "cpu" - self.is_half = False - - if self.n_cpu == 0: - self.n_cpu = cpu_count() - - if self.is_half: - # 6G显存配置 - x_pad = 3 - x_query = 10 - x_center = 60 - x_max = 65 - else: - # 5G显存配置 - x_pad = 1 - x_query = 6 - x_center = 38 - x_max = 41 - - if self.gpu_mem != None and self.gpu_mem <= 4: - x_pad = 1 - x_query = 5 - x_center = 30 - x_max = 32 - - return x_pad, x_query, x_center, x_max \ No newline at end of file diff --git a/spaces/kquote03/lama-video-watermark-remover/saicinpainting/evaluation/utils.py b/spaces/kquote03/lama-video-watermark-remover/saicinpainting/evaluation/utils.py deleted file mode 100644 index 6d7c15c9242ed8a9bc59fbb3b450cca394720bb8..0000000000000000000000000000000000000000 --- a/spaces/kquote03/lama-video-watermark-remover/saicinpainting/evaluation/utils.py +++ /dev/null @@ -1,28 +0,0 @@ -from enum import Enum - -import yaml -from easydict import EasyDict as edict -import torch.nn as nn -import torch - - -def load_yaml(path): - with open(path, 'r') as f: - return edict(yaml.safe_load(f)) - - -def move_to_device(obj, device): - if isinstance(obj, nn.Module): - return obj.to(device) - if torch.is_tensor(obj): - return obj.to(device) - if isinstance(obj, (tuple, list)): - return [move_to_device(el, device) for el in obj] - if isinstance(obj, dict): - return {name: move_to_device(val, device) for name, val in obj.items()} - raise ValueError(f'Unexpected type {type(obj)}') - - -class SmallMode(Enum): - DROP = "drop" - UPSCALE = "upscale" diff --git a/spaces/kukuhtw/AutoGPT/CODE_OF_CONDUCT.md b/spaces/kukuhtw/AutoGPT/CODE_OF_CONDUCT.md deleted file mode 100644 index d2331b4c60b9fb27f06953273355dcf53b8d4321..0000000000000000000000000000000000000000 --- a/spaces/kukuhtw/AutoGPT/CODE_OF_CONDUCT.md +++ /dev/null @@ -1,40 +0,0 @@ -# Code of Conduct for auto-gpt - -## 1. Purpose - -The purpose of this Code of Conduct is to provide guidelines for contributors to the auto-gpt project on GitHub. We aim to create a positive and inclusive environment where all participants can contribute and collaborate effectively. By participating in this project, you agree to abide by this Code of Conduct. - -## 2. Scope - -This Code of Conduct applies to all contributors, maintainers, and users of the auto-gpt project. It extends to all project spaces, including but not limited to issues, pull requests, code reviews, comments, and other forms of communication within the project. - -## 3. 
Our Standards - -We encourage the following behavior: - -* Being respectful and considerate to others -* Actively seeking diverse perspectives -* Providing constructive feedback and assistance -* Demonstrating empathy and understanding - -We discourage the following behavior: - -* Harassment or discrimination of any kind -* Disrespectful, offensive, or inappropriate language or content -* Personal attacks or insults -* Unwarranted criticism or negativity - -## 4. Reporting and Enforcement - -If you witness or experience any violations of this Code of Conduct, please report them to the project maintainers by email or other appropriate means. The maintainers will investigate and take appropriate action, which may include warnings, temporary or permanent bans, or other measures as necessary. - -Maintainers are responsible for ensuring compliance with this Code of Conduct and may take action to address any violations. - -## 5. Acknowledgements - -This Code of Conduct is adapted from the [Contributor Covenant](https://www.contributor-covenant.org/version/2/0/code_of_conduct.html). - -## 6. Contact - -If you have any questions or concerns, please contact the project maintainers. - diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/anyio/from_thread.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/anyio/from_thread.py deleted file mode 100644 index e5e3fa701e220b27bd4307c0fd5dcaf4f4f13195..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/anyio/from_thread.py +++ /dev/null @@ -1,497 +0,0 @@ -from __future__ import annotations - -import threading -from asyncio import iscoroutine -from concurrent.futures import FIRST_COMPLETED, Future, ThreadPoolExecutor, wait -from contextlib import AbstractContextManager, contextmanager -from types import TracebackType -from typing import ( - Any, - AsyncContextManager, - Awaitable, - Callable, - ContextManager, - Generator, - Generic, - Iterable, - TypeVar, - cast, - overload, -) -from warnings import warn - -from ._core import _eventloop -from ._core._eventloop import get_asynclib, get_cancelled_exc_class, threadlocals -from ._core._synchronization import Event -from ._core._tasks import CancelScope, create_task_group -from .abc._tasks import TaskStatus - -T_Retval = TypeVar("T_Retval") -T_co = TypeVar("T_co") - - -def run(func: Callable[..., Awaitable[T_Retval]], *args: object) -> T_Retval: - """ - Call a coroutine function from a worker thread. - - :param func: a coroutine function - :param args: positional arguments for the callable - :return: the return value of the coroutine function - - """ - try: - asynclib = threadlocals.current_async_module - except AttributeError: - raise RuntimeError("This function can only be run from an AnyIO worker thread") - - return asynclib.run_async_from_thread(func, *args) - - -def run_async_from_thread( - func: Callable[..., Awaitable[T_Retval]], *args: object -) -> T_Retval: - warn( - "run_async_from_thread() has been deprecated, use anyio.from_thread.run() instead", - DeprecationWarning, - ) - return run(func, *args) - - -def run_sync(func: Callable[..., T_Retval], *args: object) -> T_Retval: - """ - Call a function in the event loop thread from a worker thread. 
- - :param func: a callable - :param args: positional arguments for the callable - :return: the return value of the callable - - """ - try: - asynclib = threadlocals.current_async_module - except AttributeError: - raise RuntimeError("This function can only be run from an AnyIO worker thread") - - return asynclib.run_sync_from_thread(func, *args) - - -def run_sync_from_thread(func: Callable[..., T_Retval], *args: object) -> T_Retval: - warn( - "run_sync_from_thread() has been deprecated, use anyio.from_thread.run_sync() instead", - DeprecationWarning, - ) - return run_sync(func, *args) - - -class _BlockingAsyncContextManager(Generic[T_co], AbstractContextManager): - _enter_future: Future - _exit_future: Future - _exit_event: Event - _exit_exc_info: tuple[ - type[BaseException] | None, BaseException | None, TracebackType | None - ] = (None, None, None) - - def __init__(self, async_cm: AsyncContextManager[T_co], portal: BlockingPortal): - self._async_cm = async_cm - self._portal = portal - - async def run_async_cm(self) -> bool | None: - try: - self._exit_event = Event() - value = await self._async_cm.__aenter__() - except BaseException as exc: - self._enter_future.set_exception(exc) - raise - else: - self._enter_future.set_result(value) - - try: - # Wait for the sync context manager to exit. - # This next statement can raise `get_cancelled_exc_class()` if - # something went wrong in a task group in this async context - # manager. - await self._exit_event.wait() - finally: - # In case of cancellation, it could be that we end up here before - # `_BlockingAsyncContextManager.__exit__` is called, and an - # `_exit_exc_info` has been set. - result = await self._async_cm.__aexit__(*self._exit_exc_info) - return result - - def __enter__(self) -> T_co: - self._enter_future = Future() - self._exit_future = self._portal.start_task_soon(self.run_async_cm) - cm = self._enter_future.result() - return cast(T_co, cm) - - def __exit__( - self, - __exc_type: type[BaseException] | None, - __exc_value: BaseException | None, - __traceback: TracebackType | None, - ) -> bool | None: - self._exit_exc_info = __exc_type, __exc_value, __traceback - self._portal.call(self._exit_event.set) - return self._exit_future.result() - - -class _BlockingPortalTaskStatus(TaskStatus): - def __init__(self, future: Future): - self._future = future - - def started(self, value: object = None) -> None: - self._future.set_result(value) - - -class BlockingPortal: - """An object that lets external threads run code in an asynchronous event loop.""" - - def __new__(cls) -> BlockingPortal: - return get_asynclib().BlockingPortal() - - def __init__(self) -> None: - self._event_loop_thread_id: int | None = threading.get_ident() - self._stop_event = Event() - self._task_group = create_task_group() - self._cancelled_exc_class = get_cancelled_exc_class() - - async def __aenter__(self) -> BlockingPortal: - await self._task_group.__aenter__() - return self - - async def __aexit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - await self.stop() - return await self._task_group.__aexit__(exc_type, exc_val, exc_tb) - - def _check_running(self) -> None: - if self._event_loop_thread_id is None: - raise RuntimeError("This portal is not running") - if self._event_loop_thread_id == threading.get_ident(): - raise RuntimeError( - "This method cannot be called from the event loop thread" - ) - - async def sleep_until_stopped(self) -> None: - """Sleep until :meth:`stop` is 
called.""" - await self._stop_event.wait() - - async def stop(self, cancel_remaining: bool = False) -> None: - """ - Signal the portal to shut down. - - This marks the portal as no longer accepting new calls and exits from - :meth:`sleep_until_stopped`. - - :param cancel_remaining: ``True`` to cancel all the remaining tasks, ``False`` to let them - finish before returning - - """ - self._event_loop_thread_id = None - self._stop_event.set() - if cancel_remaining: - self._task_group.cancel_scope.cancel() - - async def _call_func( - self, func: Callable, args: tuple, kwargs: dict[str, Any], future: Future - ) -> None: - def callback(f: Future) -> None: - if f.cancelled() and self._event_loop_thread_id not in ( - None, - threading.get_ident(), - ): - self.call(scope.cancel) - - try: - retval = func(*args, **kwargs) - if iscoroutine(retval): - with CancelScope() as scope: - if future.cancelled(): - scope.cancel() - else: - future.add_done_callback(callback) - - retval = await retval - except self._cancelled_exc_class: - future.cancel() - except BaseException as exc: - if not future.cancelled(): - future.set_exception(exc) - - # Let base exceptions fall through - if not isinstance(exc, Exception): - raise - else: - if not future.cancelled(): - future.set_result(retval) - finally: - scope = None # type: ignore[assignment] - - def _spawn_task_from_thread( - self, - func: Callable, - args: tuple, - kwargs: dict[str, Any], - name: object, - future: Future, - ) -> None: - """ - Spawn a new task using the given callable. - - Implementors must ensure that the future is resolved when the task finishes. - - :param func: a callable - :param args: positional arguments to be passed to the callable - :param kwargs: keyword arguments to be passed to the callable - :param name: name of the task (will be coerced to a string if not ``None``) - :param future: a future that will resolve to the return value of the callable, or the - exception raised during its execution - - """ - raise NotImplementedError - - @overload - def call(self, func: Callable[..., Awaitable[T_Retval]], *args: object) -> T_Retval: - ... - - @overload - def call(self, func: Callable[..., T_Retval], *args: object) -> T_Retval: - ... - - def call( - self, func: Callable[..., Awaitable[T_Retval] | T_Retval], *args: object - ) -> T_Retval: - """ - Call the given function in the event loop thread. - - If the callable returns a coroutine object, it is awaited on. - - :param func: any callable - :raises RuntimeError: if the portal is not running or if this method is called from within - the event loop thread - - """ - return cast(T_Retval, self.start_task_soon(func, *args).result()) - - @overload - def spawn_task( - self, - func: Callable[..., Awaitable[T_Retval]], - *args: object, - name: object = None, - ) -> Future[T_Retval]: - ... - - @overload - def spawn_task( - self, func: Callable[..., T_Retval], *args: object, name: object = None - ) -> Future[T_Retval]: - ... - - def spawn_task( - self, - func: Callable[..., Awaitable[T_Retval] | T_Retval], - *args: object, - name: object = None, - ) -> Future[T_Retval]: - """ - Start a task in the portal's task group. 
- - :param func: the target coroutine function - :param args: positional arguments passed to ``func`` - :param name: name of the task (will be coerced to a string if not ``None``) - :return: a future that resolves with the return value of the callable if the task completes - successfully, or with the exception raised in the task - :raises RuntimeError: if the portal is not running or if this method is called from within - the event loop thread - - .. versionadded:: 2.1 - .. deprecated:: 3.0 - Use :meth:`start_task_soon` instead. If your code needs AnyIO 2 compatibility, you - can keep using this until AnyIO 4. - - """ - warn( - "spawn_task() is deprecated -- use start_task_soon() instead", - DeprecationWarning, - ) - return self.start_task_soon(func, *args, name=name) # type: ignore[arg-type] - - @overload - def start_task_soon( - self, - func: Callable[..., Awaitable[T_Retval]], - *args: object, - name: object = None, - ) -> Future[T_Retval]: - ... - - @overload - def start_task_soon( - self, func: Callable[..., T_Retval], *args: object, name: object = None - ) -> Future[T_Retval]: - ... - - def start_task_soon( - self, - func: Callable[..., Awaitable[T_Retval] | T_Retval], - *args: object, - name: object = None, - ) -> Future[T_Retval]: - """ - Start a task in the portal's task group. - - The task will be run inside a cancel scope which can be cancelled by cancelling the - returned future. - - :param func: the target function - :param args: positional arguments passed to ``func`` - :param name: name of the task (will be coerced to a string if not ``None``) - :return: a future that resolves with the return value of the callable if the task completes - successfully, or with the exception raised in the task - :raises RuntimeError: if the portal is not running or if this method is called from within - the event loop thread - - .. versionadded:: 3.0 - - """ - self._check_running() - f: Future = Future() - self._spawn_task_from_thread(func, args, {}, name, f) - return f - - def start_task( - self, func: Callable[..., Awaitable[Any]], *args: object, name: object = None - ) -> tuple[Future[Any], Any]: - """ - Start a task in the portal's task group and wait until it signals for readiness. - - This method works the same way as :meth:`TaskGroup.start`. - - :param func: the target function - :param args: positional arguments passed to ``func`` - :param name: name of the task (will be coerced to a string if not ``None``) - :return: a tuple of (future, task_status_value) where the ``task_status_value`` is the - value passed to ``task_status.started()`` from within the target function - - .. versionadded:: 3.0 - - """ - - def task_done(future: Future) -> None: - if not task_status_future.done(): - if future.cancelled(): - task_status_future.cancel() - elif future.exception(): - task_status_future.set_exception(future.exception()) - else: - exc = RuntimeError( - "Task exited without calling task_status.started()" - ) - task_status_future.set_exception(exc) - - self._check_running() - task_status_future: Future = Future() - task_status = _BlockingPortalTaskStatus(task_status_future) - f: Future = Future() - f.add_done_callback(task_done) - self._spawn_task_from_thread(func, args, {"task_status": task_status}, name, f) - return f, task_status_future.result() - - def wrap_async_context_manager( - self, cm: AsyncContextManager[T_co] - ) -> ContextManager[T_co]: - """ - Wrap an async context manager as a synchronous context manager via this portal. 
- - Spawns a task that will call both ``__aenter__()`` and ``__aexit__()``, stopping in the - middle until the synchronous context manager exits. - - :param cm: an asynchronous context manager - :return: a synchronous context manager - - .. versionadded:: 2.1 - - """ - return _BlockingAsyncContextManager(cm, self) - - -def create_blocking_portal() -> BlockingPortal: - """ - Create a portal for running functions in the event loop thread from external threads. - - Use this function in asynchronous code when you need to allow external threads access to the - event loop where your asynchronous code is currently running. - - .. deprecated:: 3.0 - Use :class:`.BlockingPortal` directly. - - """ - warn( - "create_blocking_portal() has been deprecated -- use anyio.from_thread.BlockingPortal() " - "directly", - DeprecationWarning, - ) - return BlockingPortal() - - -@contextmanager -def start_blocking_portal( - backend: str = "asyncio", backend_options: dict[str, Any] | None = None -) -> Generator[BlockingPortal, Any, None]: - """ - Start a new event loop in a new thread and run a blocking portal in its main task. - - The parameters are the same as for :func:`~anyio.run`. - - :param backend: name of the backend - :param backend_options: backend options - :return: a context manager that yields a blocking portal - - .. versionchanged:: 3.0 - Usage as a context manager is now required. - - """ - - async def run_portal() -> None: - async with BlockingPortal() as portal_: - if future.set_running_or_notify_cancel(): - future.set_result(portal_) - await portal_.sleep_until_stopped() - - future: Future[BlockingPortal] = Future() - with ThreadPoolExecutor(1) as executor: - run_future = executor.submit( - _eventloop.run, - run_portal, # type: ignore[arg-type] - backend=backend, - backend_options=backend_options, - ) - try: - wait( - cast(Iterable[Future], [run_future, future]), - return_when=FIRST_COMPLETED, - ) - except BaseException: - future.cancel() - run_future.cancel() - raise - - if future.done(): - portal = future.result() - cancel_remaining_tasks = False - try: - yield portal - except BaseException: - cancel_remaining_tasks = True - raise - finally: - try: - portal.call(portal.stop, cancel_remaining_tasks) - except RuntimeError: - pass - - run_future.result() diff --git a/spaces/lambdalabs/LambdaSuperRes/KAIR/data/dataset_dnpatch.py b/spaces/lambdalabs/LambdaSuperRes/KAIR/data/dataset_dnpatch.py deleted file mode 100644 index 289f92e6f454d8246b5128f9e834de9b1678ee73..0000000000000000000000000000000000000000 --- a/spaces/lambdalabs/LambdaSuperRes/KAIR/data/dataset_dnpatch.py +++ /dev/null @@ -1,133 +0,0 @@ -import random -import numpy as np -import torch -import torch.utils.data as data -import utils.utils_image as util - - -class DatasetDnPatch(data.Dataset): - """ - # ----------------------------------------- - # Get L/H for denosing on AWGN with fixed sigma. - # ****Get all H patches first**** - # Only dataroot_H is needed. - # ----------------------------------------- - # e.g., DnCNN with BSD400 - # ----------------------------------------- - """ - - def __init__(self, opt): - super(DatasetDnPatch, self).__init__() - print('Get L/H for denosing on AWGN with fixed sigma. 
Only dataroot_H is needed.') - self.opt = opt - self.n_channels = opt['n_channels'] if opt['n_channels'] else 3 - self.patch_size = opt['H_size'] if opt['H_size'] else 64 - - self.sigma = opt['sigma'] if opt['sigma'] else 25 - self.sigma_test = opt['sigma_test'] if opt['sigma_test'] else self.sigma - - self.num_patches_per_image = opt['num_patches_per_image'] if opt['num_patches_per_image'] else 40 - self.num_sampled = opt['num_sampled'] if opt['num_sampled'] else 3000 - - # ------------------------------------ - # get paths of H - # ------------------------------------ - self.paths_H = util.get_image_paths(opt['dataroot_H']) - assert self.paths_H, 'Error: H path is empty.' - - # ------------------------------------ - # number of sampled H images - # ------------------------------------ - self.num_sampled = min(self.num_sampled, len(self.paths_H)) - - # ------------------------------------ - # reserve space with zeros - # ------------------------------------ - self.total_patches = self.num_sampled * self.num_patches_per_image - self.H_data = np.zeros([self.total_patches, self.patch_size, self.patch_size, self.n_channels], dtype=np.uint8) - - # ------------------------------------ - # update H patches - # ------------------------------------ - self.update_data() - - def update_data(self): - """ - # ------------------------------------ - # update whole H patches - # ------------------------------------ - """ - self.index_sampled = random.sample(range(0, len(self.paths_H), 1), self.num_sampled) - n_count = 0 - - for i in range(len(self.index_sampled)): - H_patches = self.get_patches(self.index_sampled[i]) - for H_patch in H_patches: - self.H_data[n_count,:,:,:] = H_patch - n_count += 1 - - print('Training data updated! Total number of patches is: %5.2f X %5.2f = %5.2f\n' % (len(self.H_data)//128, 128, len(self.H_data))) - - def get_patches(self, index): - """ - # ------------------------------------ - # get H patches from an H image - # ------------------------------------ - """ - H_path = self.paths_H[index] - img_H = util.imread_uint(H_path, self.n_channels) # uint format - - H, W = img_H.shape[:2] - - H_patches = [] - - num = self.num_patches_per_image - for _ in range(num): - rnd_h = random.randint(0, max(0, H - self.patch_size)) - rnd_w = random.randint(0, max(0, W - self.patch_size)) - H_patch = img_H[rnd_h:rnd_h + self.patch_size, rnd_w:rnd_w + self.patch_size, :] - H_patches.append(H_patch) - - return H_patches - - def __getitem__(self, index): - - H_path = 'toy.png' - if self.opt['phase'] == 'train': - - patch_H = self.H_data[index] - - # -------------------------------- - # augmentation - flip and/or rotate - # -------------------------------- - mode = random.randint(0, 7) - patch_H = util.augment_img(patch_H, mode=mode) - - patch_H = util.uint2tensor3(patch_H) - patch_L = patch_H.clone() - - # ------------------------------------ - # add noise - # ------------------------------------ - noise = torch.randn(patch_L.size()).mul_(self.sigma/255.0) - patch_L.add_(noise) - - else: - - H_path = self.paths_H[index] - img_H = util.imread_uint(H_path, self.n_channels) - img_H = util.uint2single(img_H) - img_L = np.copy(img_H) - - # ------------------------------------ - # add noise - # ------------------------------------ - np.random.seed(seed=0) - img_L += np.random.normal(0, self.sigma_test/255.0, img_L.shape) - patch_L, patch_H = util.single2tensor3(img_L), util.single2tensor3(img_H) - - L_path = H_path - return {'L': patch_L, 'H': patch_H, 'L_path': L_path, 'H_path': H_path} - - def 
__len__(self): - return len(self.H_data) diff --git a/spaces/lambdalabs/LambdaSuperRes/KAIR/retinaface/data_faces/wider_face.py b/spaces/lambdalabs/LambdaSuperRes/KAIR/retinaface/data_faces/wider_face.py deleted file mode 100644 index 22f56efdc221bd4162d22884669ba44a3d4de5cd..0000000000000000000000000000000000000000 --- a/spaces/lambdalabs/LambdaSuperRes/KAIR/retinaface/data_faces/wider_face.py +++ /dev/null @@ -1,101 +0,0 @@ -import os -import os.path -import sys -import torch -import torch.utils.data as data -import cv2 -import numpy as np - -class WiderFaceDetection(data.Dataset): - def __init__(self, txt_path, preproc=None): - self.preproc = preproc - self.imgs_path = [] - self.words = [] - f = open(txt_path,'r') - lines = f.readlines() - isFirst = True - labels = [] - for line in lines: - line = line.rstrip() - if line.startswith('#'): - if isFirst is True: - isFirst = False - else: - labels_copy = labels.copy() - self.words.append(labels_copy) - labels.clear() - path = line[2:] - path = txt_path.replace('label.txt','images/') + path - self.imgs_path.append(path) - else: - line = line.split(' ') - label = [float(x) for x in line] - labels.append(label) - - self.words.append(labels) - - def __len__(self): - return len(self.imgs_path) - - def __getitem__(self, index): - img = cv2.imread(self.imgs_path[index]) - height, width, _ = img.shape - - labels = self.words[index] - annotations = np.zeros((0, 15)) - if len(labels) == 0: - return annotations - for idx, label in enumerate(labels): - annotation = np.zeros((1, 15)) - # bbox - annotation[0, 0] = label[0] # x1 - annotation[0, 1] = label[1] # y1 - annotation[0, 2] = label[0] + label[2] # x2 - annotation[0, 3] = label[1] + label[3] # y2 - - # landmarks - annotation[0, 4] = label[4] # l0_x - annotation[0, 5] = label[5] # l0_y - annotation[0, 6] = label[7] # l1_x - annotation[0, 7] = label[8] # l1_y - annotation[0, 8] = label[10] # l2_x - annotation[0, 9] = label[11] # l2_y - annotation[0, 10] = label[13] # l3_x - annotation[0, 11] = label[14] # l3_y - annotation[0, 12] = label[16] # l4_x - annotation[0, 13] = label[17] # l4_y - if (annotation[0, 4]<0): - annotation[0, 14] = -1 - else: - annotation[0, 14] = 1 - - annotations = np.append(annotations, annotation, axis=0) - target = np.array(annotations) - if self.preproc is not None: - img, target = self.preproc(img, target) - - return torch.from_numpy(img), target - -def detection_collate(batch): - """Custom collate fn for dealing with batches of images that have a different - number of associated object annotations (bounding boxes). 
- - Arguments: - batch: (tuple) A tuple of tensor images and lists of annotations - - Return: - A tuple containing: - 1) (tensor) batch of images stacked on their 0 dim - 2) (list of tensors) annotations for a given image are stacked on 0 dim - """ - targets = [] - imgs = [] - for _, sample in enumerate(batch): - for _, tup in enumerate(sample): - if torch.is_tensor(tup): - imgs.append(tup) - elif isinstance(tup, type(np.empty(0))): - annos = torch.from_numpy(tup).float() - targets.append(annos) - - return (torch.stack(imgs, 0), targets) diff --git a/spaces/lcipolina/Print_Gallery/glide_text2im/tokenizer/__init__.py b/spaces/lcipolina/Print_Gallery/glide_text2im/tokenizer/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/leo-bourrel/test-streamlit/app.py b/spaces/leo-bourrel/test-streamlit/app.py deleted file mode 100644 index c8bfb4c3704925068d12eda82bfe872c1dc4cbfb..0000000000000000000000000000000000000000 --- a/spaces/leo-bourrel/test-streamlit/app.py +++ /dev/null @@ -1,202 +0,0 @@ -import json -import os - -import streamlit as st -import streamlit.components.v1 as components -from langchain.callbacks import get_openai_callback - -from langchain.chains.conversation.memory import ConversationBufferMemory -from langchain.embeddings import GPT4AllEmbeddings -from langchain.llms import OpenAI - -from chat_history import insert_chat_history, insert_chat_history_articles -from connection import connect -from css import load_css -from message import Message -from vector_store import CustomVectorStore -from conversation_retrieval_chain import CustomConversationalRetrievalChain - - -st.set_page_config(layout="wide") - -st.title("Sorbobot - Le futur de la recherche scientifique interactive") - -chat_column, doc_column = st.columns([2, 1]) - -conn = connect() - - -def initialize_session_state(): - if "history" not in st.session_state: - st.session_state.history = [] - if "token_count" not in st.session_state: - st.session_state.token_count = 0 - if "conversation" not in st.session_state: - embeddings = GPT4AllEmbeddings() - - db = CustomVectorStore( - embedding_function=embeddings, - table_name="article", - column_name="abstract_embedding", - connection=conn, - ) - - retriever = db.as_retriever() - - llm = OpenAI( - temperature=0, - openai_api_key=os.environ["OPENAI_API_KEY"], - model="text-davinci-003", - ) - - memory = ConversationBufferMemory( - output_key="answer", memory_key="chat_history", return_messages=True - ) - st.session_state.conversation = CustomConversationalRetrievalChain.from_llm( - llm=llm, - retriever=retriever, - verbose=True, - memory=memory, - return_source_documents=True, - max_tokens_limit=3700, - ) - - -def send_message_callback(): - with st.spinner("Wait for it..."): - with get_openai_callback() as cb: - human_prompt = st.session_state.human_prompt.strip() - if len(human_prompt) == 0: - return - llm_response = st.session_state.conversation(human_prompt) - st.session_state.history.append(Message("human", human_prompt)) - st.session_state.history.append( - Message( - "ai", - llm_response["answer"], - documents=llm_response["source_documents"], - ) - ) - st.session_state.token_count += cb.total_tokens - if os.environ.get("ENVIRONMENT") == "dev": - history_id = insert_chat_history(conn, human_prompt, llm_response["answer"]) - insert_chat_history_articles(conn, history_id, llm_response["source_documents"]) - - -def exemple_message_callback_button(args): - 
st.session_state.human_prompt = args - send_message_callback() - st.session_state.human_prompt = "" - - -def clear_history(): - st.session_state.history.clear() - st.session_state.token_count = 0 - st.session_state.conversation.memory.clear() - - -load_css() -initialize_session_state() - -exemples = [ - "Who has published influential research on quantum computing?", - "List any prominent authors in the field of artificial intelligence ethics?", - "Who are the leading experts on climate change mitigation strategies?", -] - -with chat_column: - chat_placeholder = st.container() - prompt_placeholder = st.form("chat-form", clear_on_submit=True) - information_placeholder = st.container() - - with chat_placeholder: - for chat in st.session_state.history: - div = f""" -
            - -
            - ​{chat.message} -
            -
            - """ - st.markdown(div, unsafe_allow_html=True) - - for _ in range(3): - st.markdown("") - - with prompt_placeholder: - st.markdown("**Chat**") - cols = st.columns((6, 1)) - cols[0].text_input( - "Chat", - label_visibility="collapsed", - key="human_prompt", - ) - cols[1].form_submit_button( - "Submit", - type="primary", - on_click=send_message_callback, - ) - - if st.session_state.token_count == 0: - information_placeholder.markdown("### Test me !") - for idx_exemple, exemple in enumerate(exemples): - information_placeholder.button( - exemple, - key=f"{idx_exemple}_button", - on_click=exemple_message_callback_button, - args=(exemple,) - ) - - st.button(":new: Start a new conversation", on_click=clear_history, type="secondary") - - information_placeholder.caption( - f""" - Used {st.session_state.token_count} tokens \n - Debug Langchain conversation: - {st.session_state.history} - """ - ) - - components.html( - """ - - """, - height=0, - width=0, - ) - -with doc_column: - st.markdown("**Source documents**") - if len(st.session_state.history) > 0: - for doc in st.session_state.history[-1].documents: - doc_content = json.loads(doc.page_content) - - expander = st.expander(doc_content["title"]) - expander.markdown(f"**HalID** : https://hal.science/{doc_content['hal_id']}") - expander.markdown(doc_content["abstract"]) - expander.markdown(f"**Authors** : {doc_content['authors']}") - expander.markdown(f"**Keywords** : {doc_content['keywords']}") - expander.markdown(f"**Distance** : {doc_content['distance']}") diff --git a/spaces/leogabraneth/text-generation-webui-main/extensions/api/script.py b/spaces/leogabraneth/text-generation-webui-main/extensions/api/script.py deleted file mode 100644 index 12fd9cad3f4c5ab425b63c253e70e520f961f844..0000000000000000000000000000000000000000 --- a/spaces/leogabraneth/text-generation-webui-main/extensions/api/script.py +++ /dev/null @@ -1,13 +0,0 @@ -import time - -import extensions.api.blocking_api as blocking_api -import extensions.api.streaming_api as streaming_api -from modules import shared - - -def setup(): - blocking_api.start_server(shared.args.api_blocking_port, share=shared.args.public_api, tunnel_id=shared.args.public_api_id) - if shared.args.public_api: - time.sleep(5) - - streaming_api.start_server(shared.args.api_streaming_port, share=shared.args.public_api, tunnel_id=shared.args.public_api_id) diff --git a/spaces/leogabraneth/text-generation-webui-main/modules/exllama.py b/spaces/leogabraneth/text-generation-webui-main/modules/exllama.py deleted file mode 100644 index 4257ee0765f200f1a57acedde90e13d4a422abc3..0000000000000000000000000000000000000000 --- a/spaces/leogabraneth/text-generation-webui-main/modules/exllama.py +++ /dev/null @@ -1,220 +0,0 @@ -from pathlib import Path - -import torch -import torch.nn.functional as F -from torch import version as torch_version - -from modules import shared -from modules.logging_colors import logger -from modules.models import clear_torch_cache -from modules.text_generation import get_max_prompt_length - -try: - from exllama.generator import ExLlamaGenerator - from exllama.model import ExLlama, ExLlamaCache, ExLlamaConfig - from exllama.tokenizer import ExLlamaTokenizer -except: - logger.warning('exllama module failed to import. 
Will attempt to import from repositories/.') - try: - from modules.relative_imports import RelativeImport - - with RelativeImport("repositories/exllama"): - from generator import ExLlamaGenerator - from model import ExLlama, ExLlamaCache, ExLlamaConfig - from tokenizer import ExLlamaTokenizer - except: - logger.error( - "Could not find repositories/exllama. Please ensure that exllama" - " (https://github.com/turboderp/exllama) is cloned inside repositories/ and is up to date." - ) - raise - - -class ExllamaModel: - def __init__(self): - pass - - @classmethod - def from_pretrained(self, path_to_model): - - path_to_model = Path(f'{shared.args.model_dir}') / Path(path_to_model) - tokenizer_model_path = path_to_model / "tokenizer.model" - model_config_path = path_to_model / "config.json" - - # Find the model checkpoint - model_path = None - for ext in ['.safetensors', '.pt', '.bin']: - found = list(path_to_model.glob(f"*{ext}")) - if len(found) > 0: - if len(found) > 1: - logger.warning(f'More than one {ext} model has been found. The last one will be selected. It could be wrong.') - - model_path = found[-1] - break - - config = ExLlamaConfig(str(model_config_path)) - config.model_path = str(model_path) - config.max_seq_len = shared.args.max_seq_len - config.compress_pos_emb = shared.args.compress_pos_emb - if shared.args.gpu_split: - config.set_auto_map(shared.args.gpu_split) - config.gpu_peer_fix = True - - if shared.args.alpha_value > 1 and shared.args.rope_freq_base == 0: - config.alpha_value = shared.args.alpha_value - config.calculate_rotary_embedding_base() - elif shared.args.rope_freq_base > 0: - config.rotary_embedding_base = shared.args.rope_freq_base - - if torch_version.hip: - config.rmsnorm_no_half2 = True - config.rope_no_half2 = True - config.matmul_no_half2 = True - config.silu_no_half2 = True - - model = ExLlama(config) - tokenizer = ExLlamaTokenizer(str(tokenizer_model_path)) - cache = ExLlamaCache(model) - generator = ExLlamaGenerator(model, tokenizer, cache) - - result = self() - result.config = config - result.model = model - result.cache = cache - result.tokenizer = tokenizer - result.generator = generator - return result, result - - def encode(self, string, **kwargs): - return self.tokenizer.encode(string, max_seq_len=self.model.config.max_seq_len, add_bos=True) - - def decode(self, ids, **kwargs): - if isinstance(ids, list): - ids = torch.tensor([ids]) - elif isinstance(ids, torch.Tensor) and ids.numel() == 1: - ids = ids.view(1, -1) - - return self.tokenizer.decode(ids)[0] - - def get_logits(self, token_ids, **kwargs): - self.cache.current_seq_len = 0 - if token_ids.shape[-1] > 1: - self.model.forward(token_ids[:, :-1], self.cache, input_mask=None, preprocess_only=True) - - return self.model.forward(token_ids[:, -1:], self.cache, **kwargs).float().cpu() - - def generate_with_streaming(self, prompt, state): - - # The cache batch size must be 2 for CFG and 1 otherwise - if state['guidance_scale'] == 1: - if self.cache.batch_size == 2: - del self.cache - clear_torch_cache() - self.cache = ExLlamaCache(self.model) - self.generator = ExLlamaGenerator(self.model, self.tokenizer, self.cache) - else: - if self.cache.batch_size == 1: - del self.cache - clear_torch_cache() - self.cache = ExLlamaCache(self.model, batch_size=2) - self.generator = ExLlamaGenerator(self.model, self.tokenizer, self.cache) - - self.generator.settings.temperature = state['temperature'] - self.generator.settings.top_p = state['top_p'] - self.generator.settings.top_k = state['top_k'] - 
self.generator.settings.typical = state['typical_p'] - self.generator.settings.token_repetition_penalty_max = state['repetition_penalty'] - self.generator.settings.token_repetition_penalty_sustain = -1 if state['repetition_penalty_range'] <= 0 else state['repetition_penalty_range'] - if state['ban_eos_token']: - self.generator.disallow_tokens([self.tokenizer.eos_token_id]) - else: - self.generator.disallow_tokens(None) - - if state['custom_token_bans']: - to_ban = [int(x) for x in state['custom_token_bans'].split(',')] - if len(to_ban) > 0: - self.generator.disallow_tokens(to_ban) - - # Case 1: no CFG - if state['guidance_scale'] == 1: - self.generator.end_beam_search() - - # Tokenizing the input - ids = self.generator.tokenizer.encode(prompt, max_seq_len=self.model.config.max_seq_len) - if state['add_bos_token']: - ids = torch.cat( - [torch.tensor([[self.tokenizer.bos_token_id]]).to(ids.device), - ids], dim=1 - ).to(torch.int64) - ids = ids[:, -get_max_prompt_length(state):] - if state['auto_max_new_tokens']: - max_new_tokens = state['truncation_length'] - ids.shape[-1] - else: - max_new_tokens = state['max_new_tokens'] - - self.generator.gen_begin_reuse(ids) - initial_len = self.generator.sequence[0].shape[0] - has_leading_space = False - - for i in range(max_new_tokens): - token = self.generator.gen_single_token() - if i == 0 and self.generator.tokenizer.tokenizer.IdToPiece(int(token)).startswith('▁'): - has_leading_space = True - - decoded_text = self.generator.tokenizer.decode(self.generator.sequence[0][initial_len:]) - if has_leading_space: - decoded_text = ' ' + decoded_text - - yield decoded_text - if token.item() == self.generator.tokenizer.eos_token_id or shared.stop_everything: - break - - # Case 2: CFG - # Copied from https://github.com/turboderp/exllama/blob/master/example_cfg.py - else: - alpha = state['guidance_scale'] - prompts = [prompt, state['negative_prompt'] or ''] - - ids, mask = self.tokenizer.encode( - prompts, - return_mask=True, - max_seq_len=self.model.config.max_seq_len, - add_bos=state['add_bos_token'] - ) - if state['auto_max_new_tokens']: - max_new_tokens = state['truncation_length'] - ids[0].shape[-1] - else: - max_new_tokens = state['max_new_tokens'] - - self.generator.gen_begin(ids, mask=mask) - initial_len = self.generator.sequence[0].shape[0] - has_leading_space = False - - for i in range(max_new_tokens): - logits = self.model.forward(self.generator.sequence[:, -1:], self.cache, input_mask=mask) - self.generator.apply_rep_penalty(logits) - - logits = F.log_softmax(logits, dim=-1) - logits_mixed = alpha * logits[0] + (1 - alpha) * logits[1] - - token, _ = self.generator.sample_current(logits_mixed) - if i == 0 and self.generator.tokenizer.tokenizer.IdToPiece(int(token)).startswith('▁'): - has_leading_space = True - - decoded_text = self.generator.tokenizer.decode(self.generator.sequence[0][initial_len:]) - if has_leading_space: - decoded_text = ' ' + decoded_text - - yield decoded_text - if token.item() == self.tokenizer.eos_token_id or shared.stop_everything: - break - - batch_token = token.repeat(2, 1) - self.generator.gen_accept_token(batch_token) - - def generate(self, prompt, state): - output = '' - for output in self.generate_with_streaming(prompt, state): - pass - - return output diff --git a/spaces/levandong/MNIST-detect-deploy-webapp/app.py b/spaces/levandong/MNIST-detect-deploy-webapp/app.py deleted file mode 100644 index b0d6ea69fd866a0a9dc8bd075ff8110871816d70..0000000000000000000000000000000000000000 --- 
a/spaces/levandong/MNIST-detect-deploy-webapp/app.py +++ /dev/null @@ -1,15 +0,0 @@ -from keras.models import load_model -import gradio as gr -import numpy as np -model = load_model('./data/mnist_model.h5') - -def predict_image(img): - print(img) - img_3d=img.reshape(-1,28,28) - im_resize=img_3d/255.0 - prediction=model.predict(im_resize) - pred=np.argmax(prediction) - return pred - -iface = gr.Interface(predict_image, inputs="sketchpad", outputs="label") -iface.launch(debug='True') \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Agatha Christies Marple S06E02 720p HDTV X264TLA.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Agatha Christies Marple S06E02 720p HDTV X264TLA.md deleted file mode 100644 index 9c4d17e62ab906be801f4c815f131b2342898fc4..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Agatha Christies Marple S06E02 720p HDTV X264TLA.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Agatha Christies Marple S06E02 720p HDTV X264TLA


            Download Ziphttps://bytlly.com/2uGydl



            -
            -Agatha Christie Marple - the sixth season with Arabic subtitles. Marple Agatha Christie. Release Name: agatha.christies.marple.s06e02.720p.hdtv.x264-tla . torrent Release format: mkv Translation: Professional (full duplication), Russian subtitles are used as dubbing in some places. All episodes are voiced in Russian. Duration: series to ~ 00:45:00 Release year: 2013 Genre: Detective Director: Rob Thomas, Stephen Williams, Peter Markle Cast: Olivia Colman, Judi Dench, Helena Bonham Carter, Jonny Lee Miller, Paul Jha 8a78ff9644
            -
            -
            -

            diff --git a/spaces/lj1995/vocal2guitar/uvr5_pack/lib_v5/nets_123821KB.py b/spaces/lj1995/vocal2guitar/uvr5_pack/lib_v5/nets_123821KB.py deleted file mode 100644 index ea6c45c968d66c75e577e8a0fcca9bf800eb4ed6..0000000000000000000000000000000000000000 --- a/spaces/lj1995/vocal2guitar/uvr5_pack/lib_v5/nets_123821KB.py +++ /dev/null @@ -1,122 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from uvr5_pack.lib_v5 import layers_123821KB as layers - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 32) - self.stg1_high_band_net = BaseASPPNet(2, 32) - - self.stg2_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(16, 32) - - self.stg3_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(32, 64) - - self.out = nn.Conv2d(64, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(32, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(32, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = self.stg3_full_band_net(self.stg3_bridge(h)) - - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - mode="replicate", - ) - aux2 = torch.sigmoid(self.aux2_out(aux2)) - aux2 = F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - mask[:, :, : aggressiveness["split_bin"]] = torch.pow( - mask[:, :, : aggressiveness["split_bin"]], - 1 + aggressiveness["value"] / 3, - ) - mask[:, :, aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git 
a/spaces/ltgoslo/ssa-perin/mtool/analyzer.py b/spaces/ltgoslo/ssa-perin/mtool/analyzer.py deleted file mode 100644 index 37cf66775629617bd104c4bdc50c9467576e023b..0000000000000000000000000000000000000000 --- a/spaces/ltgoslo/ssa-perin/mtool/analyzer.py +++ /dev/null @@ -1,355 +0,0 @@ -# GraphaLogue Analyzer -# Marco Kuhlmann - -import itertools -import statistics -import sys - -from graph import Graph -from treewidth import quickbb - - -class DepthFirstSearch(object): - - def __init__(self, graph, undirected=False): - self._graph = graph - self._undirected = undirected - - self._enter = dict() - self._leave = dict() - self.n_runs = 0 - - def compute_timestamps(node, timestamp): - self._enter[node] = next(timestamp) - for edge in self._graph.find_node(node).outgoing_edges: - if not edge.tgt in self._enter: - compute_timestamps(edge.tgt, timestamp) - if self._undirected: - for edge in self._graph.find_node(node).incoming_edges: - if not edge.src in self._enter: - compute_timestamps(edge.src, timestamp) - self._leave[node] = next(timestamp) - timestamp = itertools.count() - for node in self._graph.nodes: - if not node.id in self._enter: - compute_timestamps(node.id, timestamp) - self.n_runs += 1 - - def is_back_edge(self, edge): - return \ - self._enter[edge.tgt] < self._enter[edge.src] and \ - self._leave[edge.src] < self._leave[edge.tgt] - - -class InspectedGraph(object): - - def __init__(self, graph): - self.graph = graph - self.n_nodes = len(graph.nodes) - self.dfs = DepthFirstSearch(graph) - self.undirected_dfs = DepthFirstSearch(graph, undirected=True) - - def n_root_nodes(self): - return sum(1 for node in self.graph.nodes if node.is_root()) - - def n_leaf_nodes(self): - return sum(1 for node in self.graph.nodes if node.is_leaf()) - - def n_top_nodes(self): - return sum(1 for node in self.graph.nodes if node.is_top()) - - def n_singleton_nodes(self): - return sum(1 for node in self.graph.nodes if node.is_singleton()) - - def n_loops(self): - return sum(1 for edge in self.graph.edges if edge.is_loop()) - - def n_components(self): - return self.undirected_dfs.n_runs - self.n_singleton_nodes() - - def is_cyclic(self): - for edge in self.graph.edges: - if edge.is_loop() or self.dfs.is_back_edge(edge): - return True - return False - - def is_forest(self): - if self.is_cyclic(): - return False - else: - for node in self.graph.nodes: - if len(node.incoming_edges) > 1: - return False - return True - - def is_tree(self): - return self.is_forest() and self.n_components() == 1 - - def treewidth(self): - n_nodes = len(self.graph.nodes) - self.n_singleton_nodes() - if n_nodes <= 1: - return 1 - else: - undirected_graph = {} - for node in self.graph.nodes: - if not node.is_singleton(): - undirected_graph[node.id] = set() - for edge in self.graph.edges: - if not edge.is_loop(): - undirected_graph[edge.src].add(edge.tgt) - undirected_graph[edge.tgt].add(edge.src) - decomposition = quickbb(undirected_graph) - return max(1, max(len(u)-1 for u in decomposition)) - - def _crossing_pairs(self): - def endpoints(edge): - return (min(edge.src, edge.tgt), max(edge.src, edge.tgt)) - for edge1 in self.graph.edges: - min1, max1 = endpoints(edge1) - for edge2 in self.graph.edges: - min2, max2 = endpoints(edge2) - if min1 < min2 and min2 < max1 and max1 < max2: - yield (min1, max1), (min2, max2) - - def _crossing_edges(self): - crossing_edges = set() - for edge1, edge2 in self._crossing_pairs(): - crossing_edges.add(edge1) - crossing_edges.add(edge2) - return crossing_edges - - def is_noncrossing(self): - for _, _ 
in self._crossing_pairs(): - return False - return True - - def is_page2(self): - crossing_graph = {u: set() for u in self._crossing_edges()} - for edge1, edge2 in self._crossing_pairs(): - crossing_graph[edge1].add(edge2) - crossing_graph[edge2].add(edge1) - - # Tests whether the specified undirected graph is 2-colorable. - colors = {} - - def inner(node, color1, color2): - colors[node] = color1 - for neighbour in crossing_graph[node]: - if neighbour in colors: - if colors[neighbour] == color1: - return False - else: - inner(neighbour, color2, color1) - return True - - for node in crossing_graph: - if node not in colors: - if not inner(node, 0, 1): - return False - return True - - def density(self): - n_nodes = len(self.graph.nodes) - self.n_singleton_nodes() - if n_nodes <= 1: - return 1 - else: - n_edges = 0 - for edge in self.graph.edges: - if edge.src != edge.tgt: - n_edges += 1 - return n_edges / (n_nodes - 1) - - -PROPERTY_COUNTER = itertools.count(1) - - -def report(msg, val): - print("(%02d)\t%s\t%s" % (next(PROPERTY_COUNTER), msg, val)) - - -def analyze(graphs, ids=None): - ordered = False - n_graphs = 0 - n_graphs_noncrossing = 0 - n_graphs_has_top_node = 0 - n_graphs_multirooted = 0 - n_nodes = 0 - n_nodes_with_reentrancies = 0 - n_singletons = 0 - n_top_nodes = 0 - n_edges = 0 - n_labels = 0; - n_properties = 0; - n_anchors = 0; - n_attributes = 0; - n_loops = 0 - labels = set() - non_functional_labels = set() - n_cyclic = 0 - n_connected = 0 - n_forests = 0 - n_trees = 0 - n_graphs_page2 = 0 - acc_treewidth = 0 - n_roots_nontop = 0 - acc_density = 0.0 - max_treewidth = 0 - acc_edge_length = 0 - n_treewidth_one = 0 - treewidths = [] - for graph in graphs: - if ids and not graph.id in ids: - continue - - n_graphs += 1 - n_nodes += len(graph.nodes) - n_edges += len(graph.edges) - - for node in graph.nodes: - if node.label is not None: n_labels += 1; - if node.properties is not None and node.values is not None: - n_properties += len(node.properties); - if node.anchors is not None: n_anchors += 1; - for edge in graph.edges: - if edge.attributes is not None and edge.values is not None: - n_attributes += len(edge.attributes); - - inspected_graph = InspectedGraph(graph) - - treewidth = inspected_graph.treewidth() - - n_trees += inspected_graph.is_tree() - acc_density += inspected_graph.density() - - has_reentrancies = False - has_top_node = False - - n_loops += inspected_graph.n_loops() - - for edge in graph.edges: - if edge.lab is not None: labels.add(edge.lab) - for node in graph.nodes: - n_top_nodes += node.is_top - if node.is_top: - has_top_node = True - n_singletons += node.is_singleton() - if len(node.incoming_edges) > 1: - n_nodes_with_reentrancies += 1 - has_reentrancies = True - outgoing_labels = set() - for edge in node.outgoing_edges: - if edge.lab in outgoing_labels: - non_functional_labels.add(edge.lab) - else: - outgoing_labels.add(edge.lab) - if not node.is_singleton() and node.is_root() and not node.is_top: - n_roots_nontop += 1 - - n_cyclic += inspected_graph.is_cyclic() - n_connected += inspected_graph.n_components() == 1 - n_forests += inspected_graph.is_forest() - acc_treewidth += treewidth - max_treewidth = max(max_treewidth, treewidth) - n_treewidth_one += treewidth == 1 - treewidths.append(treewidth) - - if graph.flavor == 0: - ordered = True - n_graphs_noncrossing += inspected_graph.is_noncrossing() - n_graphs_page2 += inspected_graph.is_page2() - acc_edge_length += sum(edge.length() for edge in graph.edges) - else: - if ordered: - print( - "analyzer.py: 
cannot mix graphs of different flavors in one file; exit.", file=sys.stderr) - sys.exit(1) - - n_graphs_has_top_node += has_top_node - n_graphs_multirooted += inspected_graph.n_root_nodes() > 1 - - n_nonsingletons = n_nodes - n_singletons - - report("number of graphs", "%d" % n_graphs) - report("number of nodes", "%d" % n_nodes) - n_tuples = n_top_nodes + n_labels + n_properties + n_anchors + n_edges + n_attributes; - if n_tuples > 0: - report("number of tops (percentage)", - "{:d} ({:.2f})".format(n_top_nodes, 100 * n_top_nodes / n_tuples)); - report("number of node labels (percentage)", - "{:d} ({:.2f})".format(n_labels, 100 * n_labels / n_tuples)); - report("number of node properties (percentage)", - "{:d} ({:.2f})".format(n_properties, 100 * n_properties / n_tuples)); - report("number of node anchors (percentage)", - "{:d} ({:.2f})".format(n_anchors, 100 * n_anchors / n_tuples)); - report("number of edges (percentage)", - "{:d} ({:.2f})".format(n_edges, 100 * n_edges / n_tuples)); - report("number of edge attributes (percentage)", - "{:d} ({:.2f})".format(n_attributes, 100 * n_attributes / n_tuples)); - report("number of edge labels", "%d" % len(labels)) -# report("\\percentnode\\ singleton", "%.2f" % (100 * n_singletons / n_nodes)) -# report("\\percentnode\\ non-singleton", "%.2f" % (100 * n_nonsingletons / n_nodes)) - report("\\percentgraph\\ trees", "%.2f" % (100 * n_trees / n_graphs)) - report("\\percentgraph\\ treewidth one", "%.2f" % - (100 * n_treewidth_one / n_graphs)) - report("average treewidth", "%.3f" % (acc_treewidth / n_graphs)) -# report("median treewidth", "%d" % statistics.median(treewidths)) - report("maximal treewidth", "%d" % max_treewidth) -# report("edge density", "%.3f" % (n_edges / n_nonsingletons)) - report("average edge density", "%.3f" % (acc_density / n_graphs)) - report("\\percentnode\\ reentrant", "%.2f" % - (100 * n_nodes_with_reentrancies / n_nonsingletons)) -# report("labels", " ".join(sorted(labels))) -# report("functional labels", " ".join(sorted(labels - non_functional_labels))) -# report("non-functional labels", " ".join(sorted(non_functional_labels))) -# report("\\percentgraph\\ forests", "%.2f" % (100 * n_forests / n_graphs)) -# report("number of top nodes", "%d" % n_top_nodes) - report("\\percentgraph\\ cyclic", "%.2f" % (100 * n_cyclic / n_graphs)) -# report("number of self-loops", "%d" % n_loops) - report("\\percentgraph\\ not connected", "%.2f" % - (100 * (n_graphs - n_connected) / n_graphs)) -# report("\\percentgraph\\ without top", "%.2f" % (100 * (n_graphs - n_graphs_has_top_node) / n_graphs)) -# report("average top nodes per graph", "%.3f" % (n_top_nodes / n_graphs)) - report("\\percentgraph\\ multi-rooted", "%.2f" % - (100 * n_graphs_multirooted / n_graphs)) - report("percentage of non-top roots", "%.2f" % - (100 * n_roots_nontop / n_nonsingletons)) - if ordered: - report("average edge length", "%.3f" % (acc_edge_length / n_edges)) - report("\\percentgraph\\ noncrossing", "%.2f" % - (100 * n_graphs_noncrossing / n_graphs)) - report("\\percentgraph\\ pagenumber two", "%.2f" % - (100 * n_graphs_page2 / n_graphs)) - else: - report("average edge length", "--") - report("\\percentgraph\\ noncrossing", "--") - report("\\percentgraph\\ pagenumber two", "--") - - -def read_ids(file_name): - ids = set() - with open(file_name) as fp: - for line in fp: - ids.add(line.rstrip()) - return ids - - -def read_tokens(file_name): - with open(file_name) as fp: - for line in fp: - yield line.split() - - -def analyze_cmd(read_function, ordered=False): - import 
sys - ids = None - tokens = None - for arg in sys.argv[2:]: - x, y = tuple(arg.split(':')) - if x == 'ids': - print("Reading whitelisted IDs from %s" % y, file=sys.stderr) - ids = read_ids(y) - if x == 'tokens': - print("Reading tokens from %s" % y, file=sys.stderr) - tokens = read_tokens(y) - with open(sys.argv[1]) as fp: - analyze(read_function(fp), ordered=ordered, ids=ids, tokens=tokens) diff --git a/spaces/lvwerra/bary_score/README.md b/spaces/lvwerra/bary_score/README.md deleted file mode 100644 index f9cb32e17ea16f9476684d14ab8aaed06ebf232c..0000000000000000000000000000000000000000 --- a/spaces/lvwerra/bary_score/README.md +++ /dev/null @@ -1,50 +0,0 @@ ---- -title: Bary Score -datasets: -- -tags: -- evaluate -- metric -description: "TODO: add a description here" -sdk: gradio -sdk_version: 3.0.2 -app_file: app.py -pinned: false ---- - -# Metric Card for Bary Score - -***Module Card Instructions:*** *Fill out the following subsections. Feel free to take a look at existing metric cards if you'd like examples.* - -## Metric Description -*Give a brief overview of this metric, including what task(s) it is usually used for, if any.* - -## How to Use -*Give general statement of how to use the metric* - -*Provide simplest possible example for using the metric* - -### Inputs -*List all input arguments in the format below* -- **input_field** *(type): Definition of input, with explanation if necessary. State any default value(s).* - -### Output Values - -*Explain what this metric outputs and provide an example of what the metric output looks like. Modules should return a dictionary with one or multiple key-value pairs, e.g. {"bleu" : 6.02}* - -*State the range of possible values that the metric's output can take, as well as what in that range is considered good. For example: "This metric can take on any value between 0 and 100, inclusive. Higher scores are better."* - -#### Values from Popular Papers -*Give examples, preferrably with links to leaderboards or publications, to papers that have reported this metric, along with the values they have reported.* - -### Examples -*Give code examples of the metric being used. Try to include examples that clear up any potential ambiguity left from the metric description above. 
If possible, provide a range of examples that show both typical and atypical results, as well as examples where a variety of input parameters are passed.* - -## Limitations and Bias -*Note any known limitations or biases that the metric has, with links and references if possible.* - -## Citation -*Cite the source where this metric was introduced.* - -## Further References -*Add any useful further references.* diff --git a/spaces/lxe/lora-cerebras-gpt2.7b-alpaca-shortprompt/README.md b/spaces/lxe/lora-cerebras-gpt2.7b-alpaca-shortprompt/README.md deleted file mode 100644 index e8f1bd3ade3824262ae161215dd95f68985d350a..0000000000000000000000000000000000000000 --- a/spaces/lxe/lora-cerebras-gpt2.7b-alpaca-shortprompt/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Lora Cerebras Gpt2.7b Alpaca Shortprompt -emoji: 🐨 -colorFrom: yellow -colorTo: pink -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ma-xu/LIVE/thrust/thrust/cmake/README.md b/spaces/ma-xu/LIVE/thrust/thrust/cmake/README.md deleted file mode 100644 index c032411d043aafc5dff3da8eeb15a42ce796f7f3..0000000000000000000000000000000000000000 --- a/spaces/ma-xu/LIVE/thrust/thrust/cmake/README.md +++ /dev/null @@ -1,215 +0,0 @@ -# Using Thrust with CMake - -Thrust provides configuration files that simplify using Thrust -from other CMake projects. Requirements: - -- Thrust >= 1.9.10 -- CMake >= 3.15 - -See the [Fixing Legacy FindThrust.cmake](#fixing-legacy-findthrustcmake) -section for solutions that work on older Thrust versions. - -## User Guide - -#### Default Configuration (CUDA) - -Thrust is configured using a `thrust_create_target` CMake function that -assembles a complete interface to the Thrust library: - -```cmake -find_package(Thrust REQUIRED CONFIG) -thrust_create_target(Thrust) -target_link_libraries(MyProgram Thrust) -``` - -The first argument is the name of the interface target to create, and any -additional options will be used to configure the target. By default, -`thrust_create_target` will configure its result to use CUDA acceleration. - -If desired, `thrust_create_target` may be called multiple times to build -several unique Thrust interface targets with different configurations, as -detailed below. - -**Note:** If CMake is unable to locate Thrust, specify the path to Thrust's CMake -configuration directory (where this README file is located) as `Thrust_DIR`, -e.g.: - -``` -$ cmake . -DThrust_DIR=/usr/local/cuda/include/thrust/cmake/ -``` - -#### TBB / OpenMP - -To explicitly specify host/device systems, `HOST` and `DEVICE` arguments can be -passed to `thrust_create_target`. If an explicit system is not specified, the -target will default to using CPP for host and/or CUDA for device. - -```cmake -thrust_create_target(ThrustTBB DEVICE TBB) -thrust_create_target(ThrustOMP HOST CPP DEVICE OMP) -``` - -will create targets `ThrustTBB` and `ThrustOMP`. Both will use the serial `CPP` -host system, but will find and use TBB or OpenMP for the device system. - -#### Configure Target from Cache Options - -To allow a Thrust target to be configurable easily via `cmake-gui` or -`ccmake`, pass the `FROM_OPTIONS` flag to `thrust_create_target`. This will add -`THRUST_HOST_SYSTEM` and `THRUST_DEVICE_SYSTEM` options to the CMake cache that -allow selection from the systems supported by this version of Thrust. 
- -```cmake -thrust_create_target(Thrust FROM_OPTIONS - [HOST_OPTION