`` tags. By default, the content is enclosed in a ``<pre>`` tag, itself wrapped in a ``<div>`` tag (but see the `nowrap` option). The ``<div>``'s CSS class can be set by the `cssclass` option."),
- 'IRCFormatter': ('pygments.formatters.irc', 'IRC', ('irc', 'IRC'), (), 'Format tokens with IRC color sequences'),
- 'ImageFormatter': ('pygments.formatters.img', 'img', ('img', 'IMG', 'png'), ('*.png',), 'Create a PNG image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'),
- 'JpgImageFormatter': ('pygments.formatters.img', 'img_jpg', ('jpg', 'jpeg'), ('*.jpg',), 'Create a JPEG image from source code. This uses the Python Imaging Library to generate a pixmap from the source code.'),
- 'LatexFormatter': ('pygments.formatters.latex', 'LaTeX', ('latex', 'tex'), ('*.tex',), 'Format tokens as LaTeX code. This needs the `fancyvrb` and `color` standard packages.'),
- 'NullFormatter': ('pygments.formatters.other', 'Text only', ('text', 'null'), ('*.txt',), 'Output the text unchanged without any formatting.'),
- 'PangoMarkupFormatter': ('pygments.formatters.pangomarkup', 'Pango Markup', ('pango', 'pangomarkup'), (), 'Format tokens as Pango Markup code. It can then be rendered to an SVG.'),
- 'RawTokenFormatter': ('pygments.formatters.other', 'Raw tokens', ('raw', 'tokens'), ('*.raw',), 'Format tokens as a raw representation for storing token streams.'),
- 'RtfFormatter': ('pygments.formatters.rtf', 'RTF', ('rtf',), ('*.rtf',), 'Format tokens as RTF markup. This formatter automatically outputs full RTF documents with color information and other useful stuff. Perfect for Copy and Paste into Microsoft(R) Word(R) documents.'),
- 'SvgFormatter': ('pygments.formatters.svg', 'SVG', ('svg',), ('*.svg',), 'Format tokens as an SVG graphics file. This formatter is still experimental. Each line of code is a ``<text>`` element with explicit ``x`` and ``y`` coordinates containing ``<tspan>`` elements with the individual token styles.'),
- 'Terminal256Formatter': ('pygments.formatters.terminal256', 'Terminal256', ('terminal256', 'console256', '256'), (), 'Format tokens with ANSI color sequences, for output in a 256-color terminal or console. Like in `TerminalFormatter` color sequences are terminated at newlines, so that paging the output works correctly.'),
- 'TerminalFormatter': ('pygments.formatters.terminal', 'Terminal', ('terminal', 'console'), (), 'Format tokens with ANSI color sequences, for output in a text console. Color sequences are terminated at newlines, so that paging the output works correctly.'),
- 'TerminalTrueColorFormatter': ('pygments.formatters.terminal256', 'TerminalTrueColor', ('terminal16m', 'console16m', '16m'), (), 'Format tokens with ANSI color sequences, for output in a true-color terminal or console. Like in `TerminalFormatter` color sequences are terminated at newlines, so that paging the output works correctly.'),
- 'TestcaseFormatter': ('pygments.formatters.other', 'Testcase', ('testcase',), (), 'Format tokens as appropriate for a new testcase.'),
-}
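The mapping above records, for each formatter, its module, display name, aliases, filename patterns, and a one-line description of its options. As a hedged illustration of how those aliases and options are normally consumed (assuming a standard Pygments install; the snippet is illustrative and not part of the deleted file):

    from pygments import highlight
    from pygments.lexers import PythonLexer
    from pygments.formatters import get_formatter_by_name

    # Resolve a formatter by one of the aliases listed above and pass an
    # option mentioned in its description (cssclass for the HTML formatter).
    formatter = get_formatter_by_name("html", cssclass="source")
    print(highlight("print('hi')", PythonLexer(), formatter))
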
diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pyparsing/__init__.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pyparsing/__init__.py
deleted file mode 100644
index 88bc10ac18a6af79f962fec16091d3494adc9e66..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pyparsing/__init__.py
+++ /dev/null
@@ -1,322 +0,0 @@
-# module pyparsing.py
-#
-# Copyright (c) 2003-2022 Paul T. McGuire
-#
-# Permission is hereby granted, free of charge, to any person obtaining
-# a copy of this software and associated documentation files (the
-# "Software"), to deal in the Software without restriction, including
-# without limitation the rights to use, copy, modify, merge, publish,
-# distribute, sublicense, and/or sell copies of the Software, and to
-# permit persons to whom the Software is furnished to do so, subject to
-# the following conditions:
-#
-# The above copyright notice and this permission notice shall be
-# included in all copies or substantial portions of the Software.
-#
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
-# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
-# CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
-# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
-# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-#
-
-__doc__ = """
-pyparsing module - Classes and methods to define and execute parsing grammars
-=============================================================================
-
-The pyparsing module is an alternative approach to creating and
-executing simple grammars, vs. the traditional lex/yacc approach, or the
-use of regular expressions. With pyparsing, you don't need to learn
-a new syntax for defining grammars or matching expressions - the parsing
-module provides a library of classes that you use to construct the
-grammar directly in Python.
-
-Here is a program to parse "Hello, World!" (or any greeting of the form
-``"<salutation>, <addressee>!"``), built up using :class:`Word`,
-:class:`Literal`, and :class:`And` elements
-(the :meth:`'+'` operators create :class:`And` expressions,
-and the strings are auto-converted to :class:`Literal` expressions)::
-
- from pip._vendor.pyparsing import Word, alphas
-
- # define grammar of a greeting
- greet = Word(alphas) + "," + Word(alphas) + "!"
-
- hello = "Hello, World!"
- print(hello, "->", greet.parse_string(hello))
-
-The program outputs the following::
-
- Hello, World! -> ['Hello', ',', 'World', '!']
-
-The Python representation of the grammar is quite readable, owing to the
-self-explanatory class names, and the use of :class:`'+'`,
-:class:`'|'`, :class:`'^'` and :class:`'&'` operators.
-
-The :class:`ParseResults` object returned from
-:class:`ParserElement.parse_string` can be
-accessed as a nested list, a dictionary, or an object with named
-attributes.
-
-The pyparsing module handles some of the problems that are typically
-vexing when writing text parsers:
-
- - extra or missing whitespace (the above program will also handle
- "Hello,World!", "Hello , World !", etc.)
- - quoted strings
- - embedded comments
-
-
-Getting Started -
------------------
-Visit the classes :class:`ParserElement` and :class:`ParseResults` to
-see the base classes that most other pyparsing
-classes inherit from. Use the docstrings for examples of how to:
-
- - construct literal match expressions from :class:`Literal` and
- :class:`CaselessLiteral` classes
- - construct character word-group expressions using the :class:`Word`
- class
- - see how to create repetitive expressions using :class:`ZeroOrMore`
- and :class:`OneOrMore` classes
- - use :class:`'+'`, :class:`'|'`, :class:`'^'`,
- and :class:`'&'` operators to combine simple expressions into
- more complex ones
- - associate names with your parsed results using
- :class:`ParserElement.set_results_name`
- - access the parsed data, which is returned as a :class:`ParseResults`
- object
- - find some helpful expression short-cuts like :class:`DelimitedList`
- and :class:`one_of`
- - find more useful common expressions in the :class:`pyparsing_common`
- namespace class
-"""
-from typing import NamedTuple
-
-
-class version_info(NamedTuple):
- major: int
- minor: int
- micro: int
- releaselevel: str
- serial: int
-
- @property
- def __version__(self):
- return (
- f"{self.major}.{self.minor}.{self.micro}"
- + (
- f"{'r' if self.releaselevel[0] == 'c' else ''}{self.releaselevel[0]}{self.serial}",
- "",
- )[self.releaselevel == "final"]
- )
-
- def __str__(self):
- return f"{__name__} {self.__version__} / {__version_time__}"
-
- def __repr__(self):
- return f"{__name__}.{type(self).__name__}({', '.join('{}={!r}'.format(*nv) for nv in zip(self._fields, self))})"
-
-
-__version_info__ = version_info(3, 1, 0, "final", 1)
-__version_time__ = "18 Jun 2023 14:05 UTC"
-__version__ = __version_info__.__version__
-__versionTime__ = __version_time__
-__author__ = "Paul McGuire "
-
-from .util import *
-from .exceptions import *
-from .actions import *
-from .core import __diag__, __compat__
-from .results import *
-from .core import * # type: ignore[misc, assignment]
-from .core import _builtin_exprs as core_builtin_exprs
-from .helpers import * # type: ignore[misc, assignment]
-from .helpers import _builtin_exprs as helper_builtin_exprs
-
-from .unicode import unicode_set, UnicodeRangeList, pyparsing_unicode as unicode
-from .testing import pyparsing_test as testing
-from .common import (
- pyparsing_common as common,
- _builtin_exprs as common_builtin_exprs,
-)
-
-# define backward compat synonyms
-if "pyparsing_unicode" not in globals():
- pyparsing_unicode = unicode # type: ignore[misc]
-if "pyparsing_common" not in globals():
- pyparsing_common = common # type: ignore[misc]
-if "pyparsing_test" not in globals():
- pyparsing_test = testing # type: ignore[misc]
-
-core_builtin_exprs += common_builtin_exprs + helper_builtin_exprs
-
-
-__all__ = [
- "__version__",
- "__version_time__",
- "__author__",
- "__compat__",
- "__diag__",
- "And",
- "AtLineStart",
- "AtStringStart",
- "CaselessKeyword",
- "CaselessLiteral",
- "CharsNotIn",
- "CloseMatch",
- "Combine",
- "DelimitedList",
- "Dict",
- "Each",
- "Empty",
- "FollowedBy",
- "Forward",
- "GoToColumn",
- "Group",
- "IndentedBlock",
- "Keyword",
- "LineEnd",
- "LineStart",
- "Literal",
- "Located",
- "PrecededBy",
- "MatchFirst",
- "NoMatch",
- "NotAny",
- "OneOrMore",
- "OnlyOnce",
- "OpAssoc",
- "Opt",
- "Optional",
- "Or",
- "ParseBaseException",
- "ParseElementEnhance",
- "ParseException",
- "ParseExpression",
- "ParseFatalException",
- "ParseResults",
- "ParseSyntaxException",
- "ParserElement",
- "PositionToken",
- "QuotedString",
- "RecursiveGrammarException",
- "Regex",
- "SkipTo",
- "StringEnd",
- "StringStart",
- "Suppress",
- "Token",
- "TokenConverter",
- "White",
- "Word",
- "WordEnd",
- "WordStart",
- "ZeroOrMore",
- "Char",
- "alphanums",
- "alphas",
- "alphas8bit",
- "any_close_tag",
- "any_open_tag",
- "autoname_elements",
- "c_style_comment",
- "col",
- "common_html_entity",
- "condition_as_parse_action",
- "counted_array",
- "cpp_style_comment",
- "dbl_quoted_string",
- "dbl_slash_comment",
- "delimited_list",
- "dict_of",
- "empty",
- "hexnums",
- "html_comment",
- "identchars",
- "identbodychars",
- "infix_notation",
- "java_style_comment",
- "line",
- "line_end",
- "line_start",
- "lineno",
- "make_html_tags",
- "make_xml_tags",
- "match_only_at_col",
- "match_previous_expr",
- "match_previous_literal",
- "nested_expr",
- "null_debug_action",
- "nums",
- "one_of",
- "original_text_for",
- "printables",
- "punc8bit",
- "pyparsing_common",
- "pyparsing_test",
- "pyparsing_unicode",
- "python_style_comment",
- "quoted_string",
- "remove_quotes",
- "replace_with",
- "replace_html_entity",
- "rest_of_line",
- "sgl_quoted_string",
- "srange",
- "string_end",
- "string_start",
- "token_map",
- "trace_parse_action",
- "ungroup",
- "unicode_set",
- "unicode_string",
- "with_attribute",
- "with_class",
- # pre-PEP8 compatibility names
- "__versionTime__",
- "anyCloseTag",
- "anyOpenTag",
- "cStyleComment",
- "commonHTMLEntity",
- "conditionAsParseAction",
- "countedArray",
- "cppStyleComment",
- "dblQuotedString",
- "dblSlashComment",
- "delimitedList",
- "dictOf",
- "htmlComment",
- "indentedBlock",
- "infixNotation",
- "javaStyleComment",
- "lineEnd",
- "lineStart",
- "locatedExpr",
- "makeHTMLTags",
- "makeXMLTags",
- "matchOnlyAtCol",
- "matchPreviousExpr",
- "matchPreviousLiteral",
- "nestedExpr",
- "nullDebugAction",
- "oneOf",
- "opAssoc",
- "originalTextFor",
- "pythonStyleComment",
- "quotedString",
- "removeQuotes",
- "replaceHTMLEntity",
- "replaceWith",
- "restOfLine",
- "sglQuotedString",
- "stringEnd",
- "stringStart",
- "tokenMap",
- "traceParseAction",
- "unicodeString",
- "withAttribute",
- "withClass",
-]
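The module docstring above builds the "Hello, World!" grammar and notes that the returned ParseResults can be read as a nested list, a dictionary, or an object with named attributes. A minimal sketch of that claim, assuming pyparsing 3.x is installed (the results names salutation and addressee are illustrative additions, not part of the deleted file):

    from pyparsing import Word, alphas

    # Same grammar as in the docstring, with results names attached so the
    # ParseResults can be read by attribute as well as by position.
    greet = Word(alphas)("salutation") + "," + Word(alphas)("addressee") + "!"

    result = greet.parse_string("Hello, World!")
    print(result.as_list())                     # ['Hello', ',', 'World', '!']
    print(result.salutation, result.addressee)  # Hello World
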
diff --git a/spaces/TandCAcceptMe/face-swap-docker/roop/ui.py b/spaces/TandCAcceptMe/face-swap-docker/roop/ui.py
deleted file mode 100644
index 6bc10132dcd7732ee5494087d977dbadf302daab..0000000000000000000000000000000000000000
--- a/spaces/TandCAcceptMe/face-swap-docker/roop/ui.py
+++ /dev/null
@@ -1,678 +0,0 @@
-import os
-import time
-import gradio as gr
-import cv2
-import pathlib
-import shutil
-import roop.globals
-import roop.metadata
-import roop.utilities as util
-
-from roop.face_helper import extract_face_images
-from roop.capturer import get_video_frame, get_video_frame_total, get_image_frame
-
-restart_server = False
-live_cam_active = False
-
-RECENT_DIRECTORY_SOURCE = None
-RECENT_DIRECTORY_TARGET = None
-RECENT_DIRECTORY_OUTPUT = None
-
-SELECTION_FACES_DATA = None
-
-last_image = None
-
-input_thumbs = []
-target_thumbs = []
-
-
-IS_INPUT = True
-SELECTED_FACE_INDEX = 0
-
-SELECTED_INPUT_FACE_INDEX = 0
-SELECTED_TARGET_FACE_INDEX = 0
-
-roop.globals.keep_fps = None
-roop.globals.keep_frames = None
-roop.globals.skip_audio = None
-roop.globals.use_batch = None
-
-input_faces = None
-target_faces = None
-face_selection = None
-fake_cam_image = None
-
-current_cam_image = None
-cam_swapping = False
-
-selected_preview_index = 0
-
-
-def prepare_environment():
- roop.globals.output_path = os.path.abspath(os.path.join(os.getcwd(), "output"))
- os.makedirs(roop.globals.output_path, exist_ok=True)
- os.environ["TEMP"] = os.environ["TMP"] = os.path.abspath(os.path.join(os.getcwd(), "temp"))
- os.makedirs(os.environ["TEMP"], exist_ok=True)
- os.environ["GRADIO_TEMP_DIR"] = os.environ["TEMP"]
-
-
-def run():
- from roop.core import suggest_execution_providers
- global input_faces, target_faces, face_selection, fake_cam_image, restart_server, live_cam_active, on_settings_changed
-
- prepare_environment()
-
- available_themes = ["Default", "gradio/glass", "gradio/monochrome", "gradio/seafoam", "gradio/soft", "gstaff/xkcd", "freddyaboulton/dracula_revamped", "ysharma/steampunk"]
- image_formats = ['jpg','png', 'webp']
- video_formats = ['avi','mkv', 'mp4', 'webm']
- video_codecs = ['libx264', 'libx265', 'libvpx-vp9', 'h264_nvenc', 'hevc_nvenc']
- providerlist = suggest_execution_providers()
-
- server_name = roop.globals.CFG.server_name
- if server_name is None or len(server_name) < 1:
- server_name = None
- server_port = roop.globals.CFG.server_port
- if server_port <= 0:
- server_port = None
-
- settings_controls = []
-
- live_cam_active = False
- run_server = True
-
- while run_server:
- with gr.Blocks(title=f'{roop.metadata.name} {roop.metadata.version}', theme=roop.globals.CFG.selected_theme, css="span {color: var(--block-info-text-color)}") as ui:
- with gr.Row(variant='panel'):
- gr.Markdown(f"### [{roop.metadata.name} {roop.metadata.version}](https://github.com/C0untFloyd/roop-unleashed)")
- gr.HTML(util.create_version_html(), elem_id="versions")
- with gr.Tab("Face Swap"):
- with gr.Row():
- with gr.Column():
- input_faces = gr.Gallery(label="Input faces", allow_preview=True, preview=True, height=128, object_fit="scale-down")
- with gr.Row():
- bt_remove_selected_input_face = gr.Button("Remove selected")
- bt_clear_input_faces = gr.Button("Clear all", variant='stop')
- bt_srcimg = gr.Image(label='Source Face Image', type='filepath', tool=None)
- with gr.Column():
- target_faces = gr.Gallery(label="Target faces", allow_preview=True, preview=True, height=128, object_fit="scale-down")
- with gr.Row():
- bt_remove_selected_target_face = gr.Button("Remove selected")
- bt_destfiles = gr.Files(label='Target File(s)', file_count="multiple", elem_id='filelist')
- with gr.Row():
- with gr.Column(visible=False) as dynamic_face_selection:
- face_selection = gr.Gallery(label="Detected faces", allow_preview=True, preview=True, height=256, object_fit="scale-down")
- with gr.Row():
- bt_faceselect = gr.Button("Use selected face")
- bt_cancelfaceselect = gr.Button("Cancel")
-
- with gr.Row():
- with gr.Column():
- selected_face_detection = gr.Dropdown(["First found", "All faces", "Selected face", "All female", "All male"], value="First found", label="Select face swapping method")
- max_face_distance = gr.Slider(0.01, 1.0, value=0.65, label="Max Face Similarity Threshold")
- with gr.Column():
- roop.globals.keep_fps = gr.Checkbox(label="Keep FPS", value=True)
- roop.globals.keep_frames = gr.Checkbox(label="Keep Frames", value=False)
- roop.globals.skip_audio = gr.Checkbox(label="Skip audio", value=False)
- with gr.Row():
- with gr.Column():
- selected_enhancer = gr.Dropdown(["None", "Codeformer", "DMDNet", "GFPGAN"], value="None", label="Select post-processing")
- with gr.Accordion(label="Masking", open=True):
- chk_useclip = gr.Checkbox(label="Use Text to Clip Masking", value=False)
- clip_text = gr.Textbox(label="List of objects to mask and restore back on fake image", placeholder="hands,hair")
-
- with gr.Column():
- blend_ratio = gr.Slider(0.0, 1.0, value=0.65, label="Original/Enhanced image blend ratio")
- with gr.Row(variant='panel'):
- with gr.Column():
- bt_start = gr.Button("Start", variant='primary')
- with gr.Column():
- gr.Markdown(' ')
- with gr.Column():
- fake_preview = gr.Checkbox(label="Face swap frames", value=False)
- with gr.Column():
- bt_refresh_preview = gr.Button("Refresh", variant='secondary')
- with gr.Row(variant='panel'):
- with gr.Column():
- with gr.Accordion(label="Results", open=True):
- resultfiles = gr.Files(label='Processed File(s)', interactive=False)
- resultimage = gr.Image(type='filepath', interactive=False)
- with gr.Column():
- with gr.Accordion(label="Preview Original/Fake Frame", open=True):
- previewimage = gr.Image(label="Preview Image", interactive=False)
- with gr.Column():
- preview_frame_num = gr.Slider(0, 0, value=0, label="Frame Number", step=1.0)
- bt_use_face_from_preview = gr.Button("Use Face from this Frame", variant='primary')
-
- with gr.Tab("Live Cam"):
- cam_toggle = gr.Checkbox(label='Activate', value=live_cam_active)
- if live_cam_active:
- with gr.Row():
- with gr.Column():
- cam = gr.Webcam(label='Camera', source='webcam', interactive=True, streaming=False)
- with gr.Column():
- fake_cam_image = gr.Image(label='Fake Camera Output', interactive=False)
-
-
- with gr.Tab("Extras"):
- with gr.Row():
- files_to_process = gr.Files(label='File(s) to process', file_count="multiple")
- with gr.Row(variant='panel'):
- with gr.Accordion(label="Post process", open=False):
- with gr.Column():
- selected_post_enhancer = gr.Dropdown(["None", "Codeformer", "GFPGAN"], value="None", label="Select post-processing")
- with gr.Column():
- gr.Button("Start").click(fn=lambda: gr.Info('Not yet implemented...'))
- with gr.Row(variant='panel'):
- with gr.Accordion(label="Video/GIF", open=False):
- with gr.Row(variant='panel'):
- with gr.Column():
- gr.Markdown("""
- # Cut video
- Be aware that this means re-encoding the video, which might take a long time.
- Encoding uses your configuration from the Settings Tab.
- """)
- with gr.Column():
- cut_start_time = gr.Slider(0, 100000, value=0, label="Start Frame", step=1.0, interactive=True)
- with gr.Column():
- cut_end_time = gr.Slider(1, 100000, value=1, label="End Frame", step=1.0, interactive=True)
- with gr.Column():
- start_cut_video = gr.Button("Start")
-
- # with gr.Row(variant='panel'):
- # with gr.Column():
- # gr.Markdown("""
- # # Join videos
- # This also re-encodes the videos like cutting above.
- # """)
- # with gr.Column():
- # start_join_videos = gr.Button("Start")
- with gr.Row(variant='panel'):
- gr.Markdown("Extract frames from video")
- start_extract_frames = gr.Button("Start")
- with gr.Row(variant='panel'):
- gr.Markdown("Create video from image files")
- gr.Button("Start").click(fn=lambda: gr.Info('Not yet implemented...'))
- with gr.Row(variant='panel'):
- gr.Markdown("Create GIF from video")
- start_create_gif = gr.Button("Create GIF")
- with gr.Row():
- extra_files_output = gr.Files(label='Resulting output files', file_count="multiple")
-
-
- with gr.Tab("Settings"):
- with gr.Row():
- with gr.Column():
- themes = gr.Dropdown(available_themes, label="Theme", info="Change needs complete restart", value=roop.globals.CFG.selected_theme)
- with gr.Column():
- settings_controls.append(gr.Checkbox(label="Public Server", value=roop.globals.CFG.server_share, elem_id='server_share', interactive=True))
- settings_controls.append(gr.Checkbox(label='Clear output folder before each run', value=roop.globals.CFG.clear_output, elem_id='clear_output', interactive=True))
- with gr.Column():
- input_server_name = gr.Textbox(label="Server Name", lines=1, info="Leave blank to run locally", value=roop.globals.CFG.server_name)
- with gr.Column():
- input_server_port = gr.Number(label="Server Port", precision=0, info="Leave at 0 to use default", value=roop.globals.CFG.server_port)
- with gr.Row():
- with gr.Column():
- max_threads = gr.Slider(1, 64, value=roop.globals.CFG.max_threads, label="Max. Number of Threads", info='default: 8', step=1.0, interactive=True)
- settings_controls.append(gr.Dropdown(image_formats, label="Image Output Format", info='default: png', value=roop.globals.CFG.output_image_format, elem_id='output_image_format', interactive=True))
- button_clean_temp = gr.Button("Clean temp folder")
- with gr.Column():
- settings_controls.append(gr.Dropdown(providerlist, label="Provider", value=roop.globals.CFG.provider, elem_id='provider', interactive=True))
- settings_controls.append(gr.Dropdown(video_formats, label="Video Output Format", info='default: mp4', value=roop.globals.CFG.output_video_format, elem_id='output_video_format', interactive=True))
- button_apply_settings = gr.Button("Apply Settings")
- with gr.Column():
- settings_controls.append(gr.Dropdown(video_codecs, label="Video Codec", info='default: libx264', value=roop.globals.CFG.output_video_codec, elem_id='output_video_codec', interactive=True))
- video_quality = gr.Slider(0, 100, value=roop.globals.CFG.video_quality, label="Video Quality (crf)", info='default: 14', step=1.0, interactive=True)
- with gr.Column():
- button_apply_restart = gr.Button("Restart Server", variant='primary')
-
- input_faces.select(on_select_input_face, None, None)
- bt_remove_selected_input_face.click(fn=remove_selected_input_face, outputs=[input_faces])
- bt_srcimg.change(fn=on_srcimg_changed, show_progress='full', inputs=bt_srcimg, outputs=[dynamic_face_selection, face_selection, input_faces])
-
-
- target_faces.select(on_select_target_face, None, None)
- bt_remove_selected_target_face.click(fn=remove_selected_target_face, outputs=[target_faces])
-
- bt_destfiles.select(fn=on_destfiles_selected, inputs=[bt_destfiles], outputs=[previewimage, preview_frame_num])
- bt_destfiles.clear(fn=on_clear_destfiles, outputs=[target_faces])
- resultfiles.select(fn=on_resultfiles_selected, inputs=[resultfiles], outputs=[resultimage])
-
- face_selection.select(on_select_face, None, None)
- bt_faceselect.click(fn=on_selected_face, outputs=[dynamic_face_selection, face_selection, input_faces, target_faces])
- bt_clear_input_faces.click(fn=on_clear_input_faces, outputs=[input_faces])
-
- bt_start.click(fn=start_swap,
- inputs=[selected_enhancer, selected_face_detection, roop.globals.keep_fps, roop.globals.keep_frames,
- roop.globals.skip_audio, max_face_distance, blend_ratio, bt_destfiles, chk_useclip, clip_text],
- outputs=[resultfiles, resultimage])
-
- previewinputs = [preview_frame_num, bt_destfiles, fake_preview, selected_enhancer, selected_face_detection,
- max_face_distance, blend_ratio, bt_destfiles, chk_useclip, clip_text]
- bt_refresh_preview.click(fn=on_preview_frame_changed, inputs=previewinputs, outputs=[previewimage])
- fake_preview.change(fn=on_preview_frame_changed, inputs=previewinputs, outputs=[previewimage])
- preview_frame_num.change(fn=on_preview_frame_changed, inputs=previewinputs, outputs=[previewimage], show_progress='hidden')
- bt_use_face_from_preview.click(fn=on_use_face_from_selected, show_progress='full', inputs=[bt_destfiles, preview_frame_num], outputs=[dynamic_face_selection, face_selection, target_faces])
-
-
- # Live Cam
- cam_toggle.change(fn=on_cam_toggle, inputs=[cam_toggle])
- if live_cam_active:
- cam.stream(on_stream_swap_cam, inputs=[cam, selected_enhancer, blend_ratio], outputs=[fake_cam_image], show_progress="hidden")
-
- # Extras
- start_cut_video.click(fn=on_cut_video, inputs=[files_to_process, cut_start_time, cut_end_time], outputs=[extra_files_output])
- # start_join_videos.click(fn=on_join_videos, inputs=[files_to_process], outputs=[extra_files_output])
- start_extract_frames.click(fn=on_extract_frames, inputs=[files_to_process], outputs=[extra_files_output])
- start_create_gif.click(fn=on_create_gif, inputs=[files_to_process], outputs=[extra_files_output])
-
- # Settings
- for s in settings_controls:
- s.select(fn=on_settings_changed)
- max_threads.input(fn=lambda a,b='max_threads':on_settings_changed_misc(a,b), inputs=[max_threads])
- video_quality.input(fn=lambda a,b='video_quality':on_settings_changed_misc(a,b), inputs=[video_quality])
-
- button_clean_temp.click(fn=clean_temp, outputs=[bt_srcimg, input_faces, target_faces, bt_destfiles])
- button_apply_settings.click(apply_settings, inputs=[themes, input_server_name, input_server_port])
- button_apply_restart.click(restart)
-
-
-
- restart_server = False
- try:
- ui.queue().launch(inbrowser=True, server_name=server_name, server_port=server_port, prevent_thread_lock=True, show_error=True)
- except:
- restart_server = True
- run_server = False
- try:
- while restart_server == False:
- time.sleep(5.0)
- except (KeyboardInterrupt, OSError):
- print("Keyboard interruption in main thread... closing server.")
- run_server = False
- ui.close()
-
-def on_settings_changed_misc(new_val, attribname):
- if hasattr(roop.globals.CFG, attribname):
- setattr(roop.globals.CFG, attribname, new_val)
- else:
- print("Didn't find attrib!")
-
-
-
-def on_settings_changed(evt: gr.SelectData):
- attribname = evt.target.elem_id
- if isinstance(evt.target, gr.Checkbox):
- if hasattr(roop.globals.CFG, attribname):
- setattr(roop.globals.CFG, attribname, evt.selected)
- return
- elif isinstance(evt.target, gr.Dropdown):
- if hasattr(roop.globals.CFG, attribname):
- setattr(roop.globals.CFG, attribname, evt.value)
- return
-
- raise gr.Error(f'Unhandled Setting for {evt.target}')
-
-
-
-def on_srcimg_changed(imgsrc, progress=gr.Progress()):
- global RECENT_DIRECTORY_SOURCE, SELECTION_FACES_DATA, IS_INPUT, input_faces, face_selection, input_thumbs, last_image
-
- IS_INPUT = True
-
- if imgsrc == None or last_image == imgsrc:
- return gr.Column.update(visible=False), None, input_thumbs
-
- last_image = imgsrc
-
- progress(0, desc="Retrieving faces from image", )
- source_path = imgsrc
- thumbs = []
- if util.is_image(source_path):
- roop.globals.source_path = source_path
- RECENT_DIRECTORY_SOURCE = os.path.dirname(roop.globals.source_path)
- SELECTION_FACES_DATA = extract_face_images(roop.globals.source_path, (False, 0))
- progress(0.5, desc="Retrieving faces from image")
- for f in SELECTION_FACES_DATA:
- image = convert_to_gradio(f[1])
- thumbs.append(image)
-
- progress(1.0, desc="Retrieving faces from image")
- if len(thumbs) < 1:
- raise gr.Error('No faces detected!')
-
- if len(thumbs) == 1:
- roop.globals.SELECTED_FACE_DATA_INPUT = SELECTION_FACES_DATA[0][0]
- input_thumbs.append(thumbs[0])
- return gr.Column.update(visible=False), None, input_thumbs
-
- return gr.Column.update(visible=True), thumbs, gr.Gallery.update(visible=True)
-
-def on_select_input_face(evt: gr.SelectData):
- global SELECTED_INPUT_FACE_INDEX
-
- SELECTED_INPUT_FACE_INDEX = evt.index
-
-def remove_selected_input_face():
- global input_thumbs, SELECTED_INPUT_FACE_INDEX
-
- if len(input_thumbs) > SELECTED_INPUT_FACE_INDEX:
- f = input_thumbs.pop(SELECTED_INPUT_FACE_INDEX)
- del f
-
- return input_thumbs
-
-def on_select_target_face(evt: gr.SelectData):
- global SELECTED_TARGET_FACE_INDEX
-
- SELECTED_TARGET_FACE_INDEX = evt.index
-
-def remove_selected_target_face():
- global target_thumbs, SELECTED_TARGET_FACE_INDEX
-
- if len(target_thumbs) > SELECTED_TARGET_FACE_INDEX:
- f = target_thumbs.pop(SELECTED_TARGET_FACE_INDEX)
- del f
- return target_thumbs
-
-
-
-
-
-def on_use_face_from_selected(files, frame_num):
- global IS_INPUT, SELECTION_FACES_DATA
-
- IS_INPUT = False
- thumbs = []
-
- roop.globals.target_path = files[selected_preview_index].name
- if util.is_image(roop.globals.target_path) and not roop.globals.target_path.lower().endswith(('gif')):
- SELECTION_FACES_DATA = extract_face_images(roop.globals.target_path, (False, 0))
- if len(SELECTION_FACES_DATA) > 0:
- for f in SELECTION_FACES_DATA:
- image = convert_to_gradio(f[1])
- thumbs.append(image)
- else:
- gr.Info('No faces detected!')
- roop.globals.target_path = None
-
- elif util.is_video(roop.globals.target_path) or roop.globals.target_path.lower().endswith(('gif')):
- selected_frame = frame_num
- SELECTION_FACES_DATA = extract_face_images(roop.globals.target_path, (True, selected_frame))
- if len(SELECTION_FACES_DATA) > 0:
- for f in SELECTION_FACES_DATA:
- image = convert_to_gradio(f[1])
- thumbs.append(image)
- else:
- gr.Info('No faces detected!')
- roop.globals.target_path = None
-
- if len(thumbs) == 1:
- roop.globals.SELECTED_FACE_DATA_OUTPUT = SELECTION_FACES_DATA[0][0]
- target_thumbs.append(thumbs[0])
- return gr.Row.update(visible=False), None, target_thumbs
-
- return gr.Row.update(visible=True), thumbs, gr.Gallery.update(visible=True)
-
-
-
-def on_select_face(evt: gr.SelectData): # SelectData is a subclass of EventData
- global SELECTED_FACE_INDEX
- SELECTED_FACE_INDEX = evt.index
-
-
-def on_selected_face():
- global IS_INPUT, SELECTED_FACE_INDEX, SELECTION_FACES_DATA, input_thumbs, target_thumbs
-
- fd = SELECTION_FACES_DATA[SELECTED_FACE_INDEX]
- image = convert_to_gradio(fd[1])
- if IS_INPUT:
- roop.globals.SELECTED_FACE_DATA_INPUT = fd[0]
- input_thumbs.append(image)
- return gr.Column.update(visible=False), None, input_thumbs, gr.Gallery.update(visible=True)
- else:
- roop.globals.SELECTED_FACE_DATA_OUTPUT = fd[0]
- target_thumbs.append(image)
- return gr.Column.update(visible=False), None, gr.Gallery.update(visible=True), target_thumbs
-
-# bt_faceselect.click(fn=on_selected_face, outputs=[dynamic_face_selection, face_selection, input_faces, target_faces])
-
-
-
-
-def on_preview_frame_changed(frame_num, files, fake_preview, enhancer, detection, face_distance, blend_ratio, target_files, use_clip, clip_text):
- from roop.core import live_swap
-
- filename = files[selected_preview_index].name
- if util.is_video(filename) or filename.lower().endswith('gif'):
- current_frame = get_video_frame(filename, frame_num)
- else:
- current_frame = get_image_frame(filename)
- if current_frame is None:
- return None
-
- if not fake_preview or roop.globals.SELECTED_FACE_DATA_INPUT is None:
- return convert_to_gradio(current_frame)
-
- roop.globals.face_swap_mode = translate_swap_mode(detection)
- roop.globals.selected_enhancer = enhancer
- roop.globals.distance_threshold = face_distance
- roop.globals.blend_ratio = blend_ratio
-
- if use_clip and (clip_text is None or len(clip_text) < 1):
- use_clip = False
-
- roop.globals.execution_threads = roop.globals.CFG.max_threads
- current_frame = live_swap(current_frame, roop.globals.face_swap_mode, use_clip, clip_text)
- if current_frame is None:
- return None
- return convert_to_gradio(current_frame)
-
-
-
-
-def on_clear_input_faces():
- global input_thumbs
-
- input_thumbs = []
- roop.globals.SELECTED_FACE_DATA_INPUT = None
- return input_thumbs
-
-def on_clear_destfiles():
- global target_thumbs
-
- roop.globals.SELECTED_FACE_DATA_OUTPUT = None
- target_thumbs = []
- return target_thumbs
-
-
-
-def translate_swap_mode(dropdown_text):
- if dropdown_text == "Selected face":
- return "selected"
- elif dropdown_text == "First found":
- return "first"
- elif dropdown_text == "All female":
- return "all_female"
- elif dropdown_text == "All male":
- return "all_male"
-
- return "all"
-
-
-
-def start_swap(enhancer, detection, keep_fps, keep_frames, skip_audio, face_distance, blend_ratio, target_files, use_clip, clip_text):
- from roop.core import batch_process
-
- if target_files is None or len(target_files) <= 0:
- return None, None
-
- if roop.globals.CFG.clear_output:
- shutil.rmtree(roop.globals.output_path)
-
- prepare_environment()
-
- roop.globals.selected_enhancer = enhancer
- roop.globals.target_path = None
- roop.globals.distance_threshold = face_distance
- roop.globals.blend_ratio = blend_ratio
- roop.globals.keep_fps = keep_fps
- roop.globals.keep_frames = keep_frames
- roop.globals.skip_audio = skip_audio
- roop.globals.face_swap_mode = translate_swap_mode(detection)
- if use_clip and (clip_text is None or len(clip_text) < 1):
- use_clip = False
-
- if roop.globals.face_swap_mode == 'selected':
- if roop.globals.SELECTED_FACE_DATA_OUTPUT is None or len(roop.globals.SELECTED_FACE_DATA_OUTPUT) < 1:
- gr.Error('No Target Face selected!')
- return None, None
-
- roop.globals.execution_threads = roop.globals.CFG.max_threads
- roop.globals.video_encoder = roop.globals.CFG.output_video_codec
- roop.globals.video_quality = roop.globals.CFG.video_quality
-
- batch_process([file.name for file in target_files], use_clip, clip_text)
- outdir = pathlib.Path(roop.globals.output_path)
- outfiles = [item for item in outdir.iterdir() if item.is_file()]
- if len(outfiles) > 0:
- return outfiles, outfiles[0]
- return None, None
-
-
-
-def on_destfiles_selected(evt: gr.SelectData, target_files):
- global selected_preview_index
-
- selected_preview_index = evt.index
- filename = target_files[selected_preview_index].name
- if util.is_video(filename) or filename.lower().endswith('gif'):
- current_frame = get_video_frame(filename, 0)
- total_frames = get_video_frame_total(filename)
- else:
- current_frame = get_image_frame(filename)
- total_frames = 0
-
- current_frame = convert_to_gradio(current_frame)
- return current_frame, gr.Slider.update(value=0, maximum=total_frames)
-
-
-def on_resultfiles_selected(evt: gr.SelectData, files):
- selected_index = evt.index
- filename = files[selected_index].name
- if util.is_video(filename) or filename.lower().endswith('gif'):
- current_frame = get_video_frame(filename, 0)
- else:
- current_frame = get_image_frame(filename)
- return convert_to_gradio(current_frame)
-
-
-
-def on_cam_toggle(state):
- global live_cam_active, restart_server
-
- live_cam_active = state
- gr.Warning('Server will be restarted for this change!')
- restart_server = True
-
-
-def on_stream_swap_cam(camimage, enhancer, blend_ratio):
- from roop.core import live_swap
- global current_cam_image, cam_counter, cam_swapping, fake_cam_image
-
- roop.globals.selected_enhancer = enhancer
- roop.globals.blend_ratio = blend_ratio
-
- if not cam_swapping and roop.globals.SELECTED_FACE_DATA_INPUT is not None:
- cam_swapping = True
- current_cam_image = live_swap(camimage, "all", False, None)
- cam_swapping = False
-
- return current_cam_image
-
-
-def on_cut_video(files, cut_start_frame, cut_end_frame):
- resultfiles = []
- for tf in files:
- f = tf.name
- # destfile = get_destfilename_from_path(f, resolve_relative_path('./output'), '_cut')
- destfile = util.get_destfilename_from_path(f, './output', '_cut')
- util.cut_video(f, destfile, cut_start_frame, cut_end_frame)
- if os.path.isfile(destfile):
- resultfiles.append(destfile)
- else:
- gr.Error('Cutting video failed!')
- return resultfiles
-
-def on_join_videos(files):
- filenames = []
- for f in files:
- filenames.append(f.name)
- destfile = util.get_destfilename_from_path(filenames[0], './output', '_join')
- util.join_videos(filenames, destfile)
- resultfiles = []
- if os.path.isfile(destfile):
- resultfiles.append(destfile)
- else:
- gr.Error('Joining videos failed!')
- return resultfiles
-
-
-
-
-def on_extract_frames(files):
- resultfiles = []
- for tf in files:
- f = tf.name
- resfolder = util.extract_frames(f)
- for file in os.listdir(resfolder):
- outfile = os.path.join(resfolder, file)
- if os.path.isfile(outfile):
- resultfiles.append(outfile)
- return resultfiles
-
-
-def on_create_gif(files):
- for tf in files:
- f = tf.name
- gifname = util.get_destfilename_from_path(f, './output', '.gif')
- util.create_gif_from_video(f, gifname)
-
- return gifname
-
-
-
-
-
-def clean_temp():
- global input_thumbs, target_thumbs
-
- shutil.rmtree(os.environ["TEMP"])
- prepare_environment()
-
- input_thumbs = []
- roop.globals.SELECTED_FACE_DATA_INPUT = None
- roop.globals.SELECTED_FACE_DATA_OUTPUT = None
- target_thumbs = []
- gr.Info('Temp Files removed')
- return None,None,None,None
-
-
-def apply_settings(themes, input_server_name, input_server_port):
- roop.globals.CFG.selected_theme = themes
- roop.globals.CFG.server_name = input_server_name
- roop.globals.CFG.server_port = input_server_port
- roop.globals.CFG.save()
- gr.Info('Settings saved')
-
-def restart():
- global restart_server
- restart_server = True
-
-
-
-
-# Gradio wants Images in RGB
-def convert_to_gradio(image):
- if image is None:
- return None
- return cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
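The deleted roop/ui.py above routes every settings widget through a single handler that maps the widget's elem_id onto an attribute of roop.globals.CFG via setattr. A framework-free sketch of that pattern, with a placeholder Config class standing in for the real config object (illustrative only, not the project's code):

    class Config:
        # Stand-in for roop.globals.CFG with two example settings.
        max_threads = 8
        output_image_format = "png"

    def on_setting_changed(cfg: Config, attrib_name: str, new_value) -> None:
        # Mirrors on_settings_changed_misc above: only known attributes are
        # written back; unknown ids are reported instead of silently ignored.
        if hasattr(cfg, attrib_name):
            setattr(cfg, attrib_name, new_value)
        else:
            print(f"Didn't find attrib: {attrib_name}")

    cfg = Config()
    on_setting_changed(cfg, "max_threads", 16)
    print(cfg.max_threads)  # 16
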
diff --git a/spaces/TencentARC/T2I-Adapter-SDXL/model.py b/spaces/TencentARC/T2I-Adapter-SDXL/model.py
deleted file mode 100644
index 041d33f6fbb7d8ee97821621c52147b318817f70..0000000000000000000000000000000000000000
--- a/spaces/TencentARC/T2I-Adapter-SDXL/model.py
+++ /dev/null
@@ -1,374 +0,0 @@
-import gc
-import os
-from abc import ABC, abstractmethod
-
-import numpy as np
-import PIL.Image
-import torch
-from controlnet_aux import (
- CannyDetector,
- LineartDetector,
- MidasDetector,
- OpenposeDetector,
- PidiNetDetector,
- ZoeDetector,
-)
-from diffusers import (
- AutoencoderKL,
- EulerAncestralDiscreteScheduler,
- StableDiffusionXLAdapterPipeline,
- T2IAdapter,
-)
-
-SD_XL_BASE_RATIOS = {
- "0.5": (704, 1408),
- "0.52": (704, 1344),
- "0.57": (768, 1344),
- "0.6": (768, 1280),
- "0.68": (832, 1216),
- "0.72": (832, 1152),
- "0.78": (896, 1152),
- "0.82": (896, 1088),
- "0.88": (960, 1088),
- "0.94": (960, 1024),
- "1.0": (1024, 1024),
- "1.07": (1024, 960),
- "1.13": (1088, 960),
- "1.21": (1088, 896),
- "1.29": (1152, 896),
- "1.38": (1152, 832),
- "1.46": (1216, 832),
- "1.67": (1280, 768),
- "1.75": (1344, 768),
- "1.91": (1344, 704),
- "2.0": (1408, 704),
- "2.09": (1472, 704),
- "2.4": (1536, 640),
- "2.5": (1600, 640),
- "2.89": (1664, 576),
- "3.0": (1728, 576),
-}
-
-
-def find_closest_aspect_ratio(target_width: int, target_height: int) -> str:
- target_ratio = target_width / target_height
- closest_ratio = ""
- min_difference = float("inf")
-
- for ratio_str, (width, height) in SD_XL_BASE_RATIOS.items():
- ratio = width / height
- difference = abs(target_ratio - ratio)
-
- if difference < min_difference:
- min_difference = difference
- closest_ratio = ratio_str
-
- return closest_ratio
-
-
-def resize_to_closest_aspect_ratio(image: PIL.Image.Image) -> PIL.Image.Image:
- target_width, target_height = image.size
- closest_ratio = find_closest_aspect_ratio(target_width, target_height)
-
- # Get the dimensions from the closest aspect ratio in the dictionary
- new_width, new_height = SD_XL_BASE_RATIOS[closest_ratio]
-
- # Resize the image to the new dimensions while preserving the aspect ratio
- resized_image = image.resize((new_width, new_height), PIL.Image.LANCZOS)
-
- return resized_image
-
-
-ADAPTER_REPO_IDS = {
- "canny": "TencentARC/t2i-adapter-canny-sdxl-1.0",
- "sketch": "TencentARC/t2i-adapter-sketch-sdxl-1.0",
- "lineart": "TencentARC/t2i-adapter-lineart-sdxl-1.0",
- "depth-midas": "TencentARC/t2i-adapter-depth-midas-sdxl-1.0",
- "depth-zoe": "TencentARC/t2i-adapter-depth-zoe-sdxl-1.0",
- "openpose": "TencentARC/t2i-adapter-openpose-sdxl-1.0",
- # "recolor": "TencentARC/t2i-adapter-recolor-sdxl-1.0",
-}
-ADAPTER_NAMES = list(ADAPTER_REPO_IDS.keys())
-
-
-class Preprocessor(ABC):
- @abstractmethod
- def to(self, device: torch.device | str) -> "Preprocessor":
- pass
-
- @abstractmethod
- def __call__(self, image: PIL.Image.Image) -> PIL.Image.Image:
- pass
-
-
-class CannyPreprocessor(Preprocessor):
- def __init__(self):
- self.model = CannyDetector()
-
- def to(self, device: torch.device | str) -> Preprocessor:
- return self
-
- def __call__(self, image: PIL.Image.Image) -> PIL.Image.Image:
- return self.model(image, detect_resolution=384, image_resolution=1024)
-
-
-class LineartPreprocessor(Preprocessor):
- def __init__(self):
- self.model = LineartDetector.from_pretrained("lllyasviel/Annotators")
-
- def to(self, device: torch.device | str) -> Preprocessor:
- self.model.to(device)
- return self
-
- def __call__(self, image: PIL.Image.Image) -> PIL.Image.Image:
- return self.model(image, detect_resolution=384, image_resolution=1024)
-
-
-class MidasPreprocessor(Preprocessor):
- def __init__(self):
- self.model = MidasDetector.from_pretrained(
- "valhalla/t2iadapter-aux-models", filename="dpt_large_384.pt", model_type="dpt_large"
- )
-
- def to(self, device: torch.device | str) -> Preprocessor:
- self.model.to(device)
- return self
-
- def __call__(self, image: PIL.Image.Image) -> PIL.Image.Image:
- return self.model(image, detect_resolution=512, image_resolution=1024)
-
-
-class OpenposePreprocessor(Preprocessor):
- def __init__(self):
- self.model = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
-
- def to(self, device: torch.device | str) -> Preprocessor:
- self.model.to(device)
- return self
-
- def __call__(self, image: PIL.Image.Image) -> PIL.Image.Image:
- out = self.model(image, detect_resolution=512, image_resolution=1024)
- out = np.array(out)[:, :, ::-1]
- out = PIL.Image.fromarray(np.uint8(out))
- return out
-
-
-class PidiNetPreprocessor(Preprocessor):
- def __init__(self):
- self.model = PidiNetDetector.from_pretrained("lllyasviel/Annotators")
-
- def to(self, device: torch.device | str) -> Preprocessor:
- self.model.to(device)
- return self
-
- def __call__(self, image: PIL.Image.Image) -> PIL.Image.Image:
- return self.model(image, detect_resolution=512, image_resolution=1024, apply_filter=True)
-
-
-class RecolorPreprocessor(Preprocessor):
- def to(self, device: torch.device | str) -> Preprocessor:
- return self
-
- def __call__(self, image: PIL.Image.Image) -> PIL.Image.Image:
- return image.convert("L").convert("RGB")
-
-
-class ZoePreprocessor(Preprocessor):
- def __init__(self):
- self.model = ZoeDetector.from_pretrained(
- "valhalla/t2iadapter-aux-models", filename="zoed_nk.pth", model_type="zoedepth_nk"
- )
-
- def to(self, device: torch.device | str) -> Preprocessor:
- self.model.to(device)
- return self
-
- def __call__(self, image: PIL.Image.Image) -> PIL.Image.Image:
- return self.model(image, gamma_corrected=True, image_resolution=1024)
-
-
-PRELOAD_PREPROCESSORS_IN_GPU_MEMORY = os.getenv("PRELOAD_PREPROCESSORS_IN_GPU_MEMORY", "0") == "1"
-PRELOAD_PREPROCESSORS_IN_CPU_MEMORY = os.getenv("PRELOAD_PREPROCESSORS_IN_CPU_MEMORY", "0") == "1"
-if PRELOAD_PREPROCESSORS_IN_GPU_MEMORY:
- device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- preprocessors_gpu: dict[str, Preprocessor] = {
- "canny": CannyPreprocessor().to(device),
- "sketch": PidiNetPreprocessor().to(device),
- "lineart": LineartPreprocessor().to(device),
- "depth-midas": MidasPreprocessor().to(device),
- "depth-zoe": ZoePreprocessor().to(device),
- "openpose": OpenposePreprocessor().to(device),
- "recolor": RecolorPreprocessor().to(device),
- }
-
- def get_preprocessor(adapter_name: str) -> Preprocessor:
- return preprocessors_gpu[adapter_name]
-
-elif PRELOAD_PREPROCESSORS_IN_CPU_MEMORY:
- preprocessors_cpu: dict[str, Preprocessor] = {
- "canny": CannyPreprocessor(),
- "sketch": PidiNetPreprocessor(),
- "lineart": LineartPreprocessor(),
- "depth-midas": MidasPreprocessor(),
- "depth-zoe": ZoePreprocessor(),
- "openpose": OpenposePreprocessor(),
- "recolor": RecolorPreprocessor(),
- }
-
- def get_preprocessor(adapter_name: str) -> Preprocessor:
- return preprocessors_cpu[adapter_name]
-
-else:
-
- def get_preprocessor(adapter_name: str) -> Preprocessor:
- if adapter_name == "canny":
- return CannyPreprocessor()
- elif adapter_name == "sketch":
- return PidiNetPreprocessor()
- elif adapter_name == "lineart":
- return LineartPreprocessor()
- elif adapter_name == "depth-midas":
- return MidasPreprocessor()
- elif adapter_name == "depth-zoe":
- return ZoePreprocessor()
- elif adapter_name == "openpose":
- return OpenposePreprocessor()
- elif adapter_name == "recolor":
- return RecolorPreprocessor()
- else:
- raise ValueError(f"Adapter name must be one of {ADAPTER_NAMES}")
-
- def download_all_preprocessors():
- for adapter_name in ADAPTER_NAMES:
- get_preprocessor(adapter_name)
- gc.collect()
-
- download_all_preprocessors()
-
-
-def download_all_adapters():
- for adapter_name in ADAPTER_NAMES:
- T2IAdapter.from_pretrained(
- ADAPTER_REPO_IDS[adapter_name],
- torch_dtype=torch.float16,
- variant="fp16",
- )
- gc.collect()
-
-
-class Model:
- MAX_NUM_INFERENCE_STEPS = 50
-
- def __init__(self, adapter_name: str):
- if adapter_name not in ADAPTER_NAMES:
- raise ValueError(f"Adapter name must be one of {ADAPTER_NAMES}")
-
- self.preprocessor_name = adapter_name
- self.adapter_name = adapter_name
-
- self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
- if torch.cuda.is_available():
- self.preprocessor = get_preprocessor(adapter_name).to(self.device)
-
- model_id = "stabilityai/stable-diffusion-xl-base-1.0"
- adapter = T2IAdapter.from_pretrained(
- ADAPTER_REPO_IDS[adapter_name],
- torch_dtype=torch.float16,
- variant="fp16",
- ).to(self.device)
- self.pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
- model_id,
- vae=AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16),
- adapter=adapter,
- scheduler=EulerAncestralDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler"),
- torch_dtype=torch.float16,
- variant="fp16",
- ).to(self.device)
- self.pipe.enable_xformers_memory_efficient_attention()
- self.pipe.load_lora_weights(
- "stabilityai/stable-diffusion-xl-base-1.0", weight_name="sd_xl_offset_example-lora_1.0.safetensors"
- )
- self.pipe.fuse_lora(lora_scale=0.4)
- else:
- self.preprocessor = None # type: ignore
- self.pipe = None
-
- def change_preprocessor(self, adapter_name: str) -> None:
- if adapter_name not in ADAPTER_NAMES:
- raise ValueError(f"Adapter name must be one of {ADAPTER_NAMES}")
- if adapter_name == self.preprocessor_name:
- return
-
- if PRELOAD_PREPROCESSORS_IN_GPU_MEMORY:
- pass
- elif PRELOAD_PREPROCESSORS_IN_CPU_MEMORY:
- self.preprocessor.to("cpu")
- else:
- del self.preprocessor
- self.preprocessor = get_preprocessor(adapter_name).to(self.device)
- self.preprocessor_name = adapter_name
- gc.collect()
- torch.cuda.empty_cache()
-
- def change_adapter(self, adapter_name: str) -> None:
- if adapter_name not in ADAPTER_NAMES:
- raise ValueError(f"Adapter name must be one of {ADAPTER_NAMES}")
- if adapter_name == self.adapter_name:
- return
- self.pipe.adapter = T2IAdapter.from_pretrained(
- ADAPTER_REPO_IDS[adapter_name],
- torch_dtype=torch.float16,
- variant="fp16",
- ).to(self.device)
- self.adapter_name = adapter_name
- gc.collect()
- torch.cuda.empty_cache()
-
- def resize_image(self, image: PIL.Image.Image) -> PIL.Image.Image:
- w, h = image.size
- scale = 1024 / max(w, h)
- new_w = int(w * scale)
- new_h = int(h * scale)
- return image.resize((new_w, new_h), PIL.Image.LANCZOS)
-
- def run(
- self,
- image: PIL.Image.Image,
- prompt: str,
- negative_prompt: str,
- adapter_name: str,
- num_inference_steps: int = 30,
- guidance_scale: float = 5.0,
- adapter_conditioning_scale: float = 1.0,
- adapter_conditioning_factor: float = 1.0,
- seed: int = 0,
- apply_preprocess: bool = True,
- ) -> list[PIL.Image.Image]:
- if not torch.cuda.is_available():
- raise RuntimeError("This demo does not work on CPU.")
- if num_inference_steps > self.MAX_NUM_INFERENCE_STEPS:
- raise ValueError(f"Number of steps must be less than {self.MAX_NUM_INFERENCE_STEPS}")
-
- # Resize image to avoid OOM
- image = self.resize_image(image)
-
- self.change_preprocessor(adapter_name)
- self.change_adapter(adapter_name)
-
- if apply_preprocess:
- image = self.preprocessor(image)
-
- image = resize_to_closest_aspect_ratio(image)
-
- generator = torch.Generator(device=self.device).manual_seed(seed)
- out = self.pipe(
- prompt=prompt,
- negative_prompt=negative_prompt,
- image=image,
- num_inference_steps=num_inference_steps,
- adapter_conditioning_scale=adapter_conditioning_scale,
- adapter_conditioning_factor=adapter_conditioning_factor,
- generator=generator,
- guidance_scale=guidance_scale,
- ).images[0]
- return [image, out]
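The deleted model.py above snaps arbitrary input sizes onto the fixed SD_XL_BASE_RATIOS buckets before running the pipeline. A trimmed, self-contained sketch of that bucket search, keeping only a few of the ratios for brevity (the values are copied from the table above and the selection logic mirrors find_closest_aspect_ratio):

    # Subset of SD_XL_BASE_RATIOS, enough to show the lookup behaviour.
    RATIOS = {
        "1.0": (1024, 1024),
        "1.46": (1216, 832),
        "1.75": (1344, 768),
        "1.91": (1344, 704),
    }

    def closest_bucket(width: int, height: int) -> tuple[str, tuple[int, int]]:
        target = width / height
        # Pick the bucket whose width/height ratio is nearest the input's.
        return min(RATIOS.items(), key=lambda kv: abs(target - kv[1][0] / kv[1][1]))

    # A 1920x1080 frame (ratio ~1.78) lands in the "1.75" bucket, so it would
    # be resized to 1344x768 before being fed to the adapter pipeline.
    print(closest_bucket(1920, 1080))  # ('1.75', (1344, 768))
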
diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/cityscapes.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/cityscapes.py
deleted file mode 100644
index 1e84a5bdb3d4e410d8eef4b80a5d4c099a180104..0000000000000000000000000000000000000000
--- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/data/datasets/cityscapes.py
+++ /dev/null
@@ -1,329 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import functools
-import json
-import logging
-import multiprocessing as mp
-import numpy as np
-import os
-from itertools import chain
-import pycocotools.mask as mask_util
-from PIL import Image
-
-from detectron2.structures import BoxMode
-from detectron2.utils.comm import get_world_size
-from detectron2.utils.file_io import PathManager
-from detectron2.utils.logger import setup_logger
-
-try:
- import cv2 # noqa
-except ImportError:
- # OpenCV is an optional dependency at the moment
- pass
-
-
-logger = logging.getLogger(__name__)
-
-
-def _get_cityscapes_files(image_dir, gt_dir):
- files = []
- # scan through the directory
- cities = PathManager.ls(image_dir)
- logger.info(f"{len(cities)} cities found in '{image_dir}'.")
- for city in cities:
- city_img_dir = os.path.join(image_dir, city)
- city_gt_dir = os.path.join(gt_dir, city)
- for basename in PathManager.ls(city_img_dir):
- image_file = os.path.join(city_img_dir, basename)
-
- suffix = "leftImg8bit.png"
- assert basename.endswith(suffix), basename
- basename = basename[: -len(suffix)]
-
- instance_file = os.path.join(city_gt_dir, basename + "gtFine_instanceIds.png")
- label_file = os.path.join(city_gt_dir, basename + "gtFine_labelIds.png")
- json_file = os.path.join(city_gt_dir, basename + "gtFine_polygons.json")
-
- files.append((image_file, instance_file, label_file, json_file))
- assert len(files), "No images found in {}".format(image_dir)
- for f in files[0]:
- assert PathManager.isfile(f), f
- return files
-
-
-def load_cityscapes_instances(image_dir, gt_dir, from_json=True, to_polygons=True):
- """
- Args:
- image_dir (str): path to the raw dataset. e.g., "~/cityscapes/leftImg8bit/train".
- gt_dir (str): path to the raw annotations. e.g., "~/cityscapes/gtFine/train".
- from_json (bool): whether to read annotations from the raw json file or the png files.
- to_polygons (bool): whether to represent the segmentation as polygons
- (COCO's format) instead of masks (cityscapes's format).
-
- Returns:
- list[dict]: a list of dicts in Detectron2 standard format. (See
- `Using Custom Datasets `_ )
- """
- if from_json:
- assert to_polygons, (
- "Cityscapes's json annotations are in polygon format. "
- "Converting to mask format is not supported now."
- )
- files = _get_cityscapes_files(image_dir, gt_dir)
-
- logger.info("Preprocessing cityscapes annotations ...")
- # This is still not fast: all workers will execute duplicate works and will
- # take up to 10m on a 8GPU server.
- pool = mp.Pool(processes=max(mp.cpu_count() // get_world_size() // 2, 4))
-
- ret = pool.map(
- functools.partial(_cityscapes_files_to_dict, from_json=from_json, to_polygons=to_polygons),
- files,
- )
- logger.info("Loaded {} images from {}".format(len(ret), image_dir))
-
- # Map cityscape ids to contiguous ids
- from cityscapesscripts.helpers.labels import labels
-
- labels = [l for l in labels if l.hasInstances and not l.ignoreInEval]
- dataset_id_to_contiguous_id = {l.id: idx for idx, l in enumerate(labels)}
- for dict_per_image in ret:
- for anno in dict_per_image["annotations"]:
- anno["category_id"] = dataset_id_to_contiguous_id[anno["category_id"]]
- return ret
-
-
-def load_cityscapes_semantic(image_dir, gt_dir):
- """
- Args:
- image_dir (str): path to the raw dataset. e.g., "~/cityscapes/leftImg8bit/train".
- gt_dir (str): path to the raw annotations. e.g., "~/cityscapes/gtFine/train".
-
- Returns:
- list[dict]: a list of dict, each has "file_name" and
- "sem_seg_file_name".
- """
- ret = []
- # gt_dir is small and contain many small files. make sense to fetch to local first
- gt_dir = PathManager.get_local_path(gt_dir)
- for image_file, _, label_file, json_file in _get_cityscapes_files(image_dir, gt_dir):
- label_file = label_file.replace("labelIds", "labelTrainIds")
-
- with PathManager.open(json_file, "r") as f:
- jsonobj = json.load(f)
- ret.append(
- {
- "file_name": image_file,
- "sem_seg_file_name": label_file,
- "height": jsonobj["imgHeight"],
- "width": jsonobj["imgWidth"],
- }
- )
- assert len(ret), f"No images found in {image_dir}!"
- assert PathManager.isfile(
- ret[0]["sem_seg_file_name"]
- ), "Please generate labelTrainIds.png with cityscapesscripts/preparation/createTrainIdLabelImgs.py" # noqa
- return ret
-
-
-def _cityscapes_files_to_dict(files, from_json, to_polygons):
- """
- Parse cityscapes annotation files to an instance segmentation dataset dict.
-
- Args:
- files (tuple): consists of (image_file, instance_id_file, label_id_file, json_file)
- from_json (bool): whether to read annotations from the raw json file or the png files.
- to_polygons (bool): whether to represent the segmentation as polygons
- (COCO's format) instead of masks (cityscapes's format).
-
- Returns:
- A dict in Detectron2 Dataset format.
- """
- from cityscapesscripts.helpers.labels import id2label, name2label
-
- image_file, instance_id_file, _, json_file = files
-
- annos = []
-
- if from_json:
- from shapely.geometry import MultiPolygon, Polygon
-
- with PathManager.open(json_file, "r") as f:
- jsonobj = json.load(f)
- ret = {
- "file_name": image_file,
- "image_id": os.path.basename(image_file),
- "height": jsonobj["imgHeight"],
- "width": jsonobj["imgWidth"],
- }
-
- # `polygons_union` contains the union of all valid polygons.
- polygons_union = Polygon()
-
- # CityscapesScripts draw the polygons in sequential order
- # and each polygon *overwrites* existing ones. See
- # (https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/preparation/json2instanceImg.py) # noqa
- # We use reverse order, and each polygon *avoids* early ones.
- # This will resolve the polygon overlaps in the same way as CityscapesScripts.
- for obj in jsonobj["objects"][::-1]:
- if "deleted" in obj: # cityscapes data format specific
- continue
- label_name = obj["label"]
-
- try:
- label = name2label[label_name]
- except KeyError:
- if label_name.endswith("group"): # crowd area
- label = name2label[label_name[: -len("group")]]
- else:
- raise
- if label.id < 0: # cityscapes data format
- continue
-
- # Cityscapes's raw annotations uses integer coordinates
- # Therefore +0.5 here
- poly_coord = np.asarray(obj["polygon"], dtype="f4") + 0.5
- # CityscapesScript uses PIL.ImageDraw.polygon to rasterize
- # polygons for evaluation. This function operates in integer space
- # and draws each pixel whose center falls into the polygon.
- # Therefore it draws a polygon which is 0.5 "fatter" in expectation.
- # We therefore dilate the input polygon by 0.5 as our input.
- poly = Polygon(poly_coord).buffer(0.5, resolution=4)
-
- if not label.hasInstances or label.ignoreInEval:
- # even if we won't store the polygon it still contributes to overlaps resolution
- polygons_union = polygons_union.union(poly)
- continue
-
- # Take non-overlapping part of the polygon
- poly_wo_overlaps = poly.difference(polygons_union)
- if poly_wo_overlaps.is_empty:
- continue
- polygons_union = polygons_union.union(poly)
-
- anno = {}
- anno["iscrowd"] = label_name.endswith("group")
- anno["category_id"] = label.id
-
- if isinstance(poly_wo_overlaps, Polygon):
- poly_list = [poly_wo_overlaps]
- elif isinstance(poly_wo_overlaps, MultiPolygon):
- poly_list = poly_wo_overlaps.geoms
- else:
- raise NotImplementedError("Unknown geometric structure {}".format(poly_wo_overlaps))
-
- poly_coord = []
- for poly_el in poly_list:
- # COCO API can work only with exterior boundaries now, hence we store only them.
- # TODO: store both exterior and interior boundaries once other parts of the
- # codebase support holes in polygons.
- poly_coord.append(list(chain(*poly_el.exterior.coords)))
- anno["segmentation"] = poly_coord
- (xmin, ymin, xmax, ymax) = poly_wo_overlaps.bounds
-
- anno["bbox"] = (xmin, ymin, xmax, ymax)
- anno["bbox_mode"] = BoxMode.XYXY_ABS
-
- annos.append(anno)
- else:
- # See also the official annotation parsing scripts at
- # https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/evaluation/instances2dict.py # noqa
- with PathManager.open(instance_id_file, "rb") as f:
- inst_image = np.asarray(Image.open(f), order="F")
- # ids < 24 are stuff labels (filtering them first is about 5% faster)
- flattened_ids = np.unique(inst_image[inst_image >= 24])
-
- ret = {
- "file_name": image_file,
- "image_id": os.path.basename(image_file),
- "height": inst_image.shape[0],
- "width": inst_image.shape[1],
- }
-
- for instance_id in flattened_ids:
- # For non-crowd annotations, instance_id // 1000 is the label_id
- # Crowd annotations have <1000 instance ids
- label_id = instance_id // 1000 if instance_id >= 1000 else instance_id
- label = id2label[label_id]
- if not label.hasInstances or label.ignoreInEval:
- continue
-
- anno = {}
- anno["iscrowd"] = instance_id < 1000
- anno["category_id"] = label.id
-
- mask = np.asarray(inst_image == instance_id, dtype=np.uint8, order="F")
-
- inds = np.nonzero(mask)
- ymin, ymax = inds[0].min(), inds[0].max()
- xmin, xmax = inds[1].min(), inds[1].max()
- anno["bbox"] = (xmin, ymin, xmax, ymax)
- if xmax <= xmin or ymax <= ymin:
- continue
- anno["bbox_mode"] = BoxMode.XYXY_ABS
- if to_polygons:
- # This conversion comes from D4809743 and D5171122,
- # when Mask-RCNN was first developed.
- contours = cv2.findContours(mask.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[
- -2
- ]
- polygons = [c.reshape(-1).tolist() for c in contours if len(c) >= 3]
- # opencv can produce invalid polygons
- if len(polygons) == 0:
- continue
- anno["segmentation"] = polygons
- else:
- anno["segmentation"] = mask_util.encode(mask[:, :, None])[0]
- annos.append(anno)
- ret["annotations"] = annos
- return ret
-
-
-if __name__ == "__main__":
- """
- Test the cityscapes dataset loader.
-
- Usage:
- python -m detectron2.data.datasets.cityscapes \
- cityscapes/leftImg8bit/train cityscapes/gtFine/train
- """
- import argparse
-
- parser = argparse.ArgumentParser()
- parser.add_argument("image_dir")
- parser.add_argument("gt_dir")
- parser.add_argument("--type", choices=["instance", "semantic"], default="instance")
- args = parser.parse_args()
- from detectron2.data.catalog import Metadata
- from detectron2.utils.visualizer import Visualizer
- from cityscapesscripts.helpers.labels import labels
-
- logger = setup_logger(name=__name__)
-
- dirname = "cityscapes-data-vis"
- os.makedirs(dirname, exist_ok=True)
-
- if args.type == "instance":
- dicts = load_cityscapes_instances(
- args.image_dir, args.gt_dir, from_json=True, to_polygons=True
- )
- logger.info("Done loading {} samples.".format(len(dicts)))
-
- thing_classes = [k.name for k in labels if k.hasInstances and not k.ignoreInEval]
- meta = Metadata().set(thing_classes=thing_classes)
-
- else:
- dicts = load_cityscapes_semantic(args.image_dir, args.gt_dir)
- logger.info("Done loading {} samples.".format(len(dicts)))
-
- stuff_classes = [k.name for k in labels if k.trainId != 255]
- stuff_colors = [k.color for k in labels if k.trainId != 255]
- meta = Metadata().set(stuff_classes=stuff_classes, stuff_colors=stuff_colors)
-
- for d in dicts:
- img = np.array(Image.open(PathManager.open(d["file_name"], "rb")))
- visualizer = Visualizer(img, metadata=meta)
- vis = visualizer.draw_dataset_dict(d)
- # cv2.imshow("a", vis.get_image()[:, :, ::-1])
- # cv2.waitKey()
- fpath = os.path.join(dirname, os.path.basename(d["file_name"]))
- vis.save(fpath)
diff --git a/spaces/Thaweewat/ControlNet-Architecture/annotator/openpose/util.py b/spaces/Thaweewat/ControlNet-Architecture/annotator/openpose/util.py
deleted file mode 100644
index 6f91ae0e65abaf0cbd62d803f56498991141e61b..0000000000000000000000000000000000000000
--- a/spaces/Thaweewat/ControlNet-Architecture/annotator/openpose/util.py
+++ /dev/null
@@ -1,164 +0,0 @@
-import math
-import numpy as np
-import matplotlib
-import cv2
-
-
-def padRightDownCorner(img, stride, padValue):
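- # Pad the bottom and right of `img` with `padValue` so that its height and
- # width become multiples of `stride`; returns the padded image and the
- # per-side padding amounts [up, left, down, right].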
- h = img.shape[0]
- w = img.shape[1]
-
- pad = 4 * [None]
- pad[0] = 0 # up
- pad[1] = 0 # left
- pad[2] = 0 if (h % stride == 0) else stride - (h % stride) # down
- pad[3] = 0 if (w % stride == 0) else stride - (w % stride) # right
-
- img_padded = img
- pad_up = np.tile(img_padded[0:1, :, :]*0 + padValue, (pad[0], 1, 1))
- img_padded = np.concatenate((pad_up, img_padded), axis=0)
- pad_left = np.tile(img_padded[:, 0:1, :]*0 + padValue, (1, pad[1], 1))
- img_padded = np.concatenate((pad_left, img_padded), axis=1)
- pad_down = np.tile(img_padded[-2:-1, :, :]*0 + padValue, (pad[2], 1, 1))
- img_padded = np.concatenate((img_padded, pad_down), axis=0)
- pad_right = np.tile(img_padded[:, -2:-1, :]*0 + padValue, (1, pad[3], 1))
- img_padded = np.concatenate((img_padded, pad_right), axis=1)
-
- return img_padded, pad
-
-# transfer caffe model weights to pytorch so that the layer names match (drops the leading module prefix)
-def transfer(model, model_weights):
- transfered_model_weights = {}
- for weights_name in model.state_dict().keys():
- transfered_model_weights[weights_name] = model_weights['.'.join(weights_name.split('.')[1:])]
- return transfered_model_weights
-
-# draw the body keypoints and limbs
-def draw_bodypose(canvas, candidate, subset):
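- # candidate[k][:2] holds the (x, y) pixel coordinates of detected keypoint k;
- # subset[n][i] holds the index into `candidate` of person n's i-th body part,
- # or -1 if that part was not detected.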
- stickwidth = 4
- limbSeq = [[2, 3], [2, 6], [3, 4], [4, 5], [6, 7], [7, 8], [2, 9], [9, 10], \
- [10, 11], [2, 12], [12, 13], [13, 14], [2, 1], [1, 15], [15, 17], \
- [1, 16], [16, 18], [3, 17], [6, 18]]
-
- colors = [[255, 0, 0], [255, 85, 0], [255, 170, 0], [255, 255, 0], [170, 255, 0], [85, 255, 0], [0, 255, 0], \
- [0, 255, 85], [0, 255, 170], [0, 255, 255], [0, 170, 255], [0, 85, 255], [0, 0, 255], [85, 0, 255], \
- [170, 0, 255], [255, 0, 255], [255, 0, 170], [255, 0, 85]]
- for i in range(18):
- for n in range(len(subset)):
- index = int(subset[n][i])
- if index == -1:
- continue
- x, y = candidate[index][0:2]
- cv2.circle(canvas, (int(x), int(y)), 4, colors[i], thickness=-1)
- for i in range(17):
- for n in range(len(subset)):
- index = subset[n][np.array(limbSeq[i]) - 1]
- if -1 in index:
- continue
- cur_canvas = canvas.copy()
- Y = candidate[index.astype(int), 0]
- X = candidate[index.astype(int), 1]
- mX = np.mean(X)
- mY = np.mean(Y)
- length = ((X[0] - X[1]) ** 2 + (Y[0] - Y[1]) ** 2) ** 0.5
- angle = math.degrees(math.atan2(X[0] - X[1], Y[0] - Y[1]))
- polygon = cv2.ellipse2Poly((int(mY), int(mX)), (int(length / 2), stickwidth), int(angle), 0, 360, 1)
- cv2.fillConvexPoly(cur_canvas, polygon, colors[i])
- canvas = cv2.addWeighted(canvas, 0.4, cur_canvas, 0.6, 0)
- # plt.imsave("preview.jpg", canvas[:, :, [2, 1, 0]])
- # plt.imshow(canvas[:, :, [2, 1, 0]])
- return canvas
-
-
-# images drawn with opencv do not look good.
-def draw_handpose(canvas, all_hand_peaks, show_number=False):
- edges = [[0, 1], [1, 2], [2, 3], [3, 4], [0, 5], [5, 6], [6, 7], [7, 8], [0, 9], [9, 10], \
- [10, 11], [11, 12], [0, 13], [13, 14], [14, 15], [15, 16], [0, 17], [17, 18], [18, 19], [19, 20]]
-
- for peaks in all_hand_peaks:
- for ie, e in enumerate(edges):
- if np.sum(np.all(peaks[e], axis=1)==0)==0:
- x1, y1 = peaks[e[0]]
- x2, y2 = peaks[e[1]]
- cv2.line(canvas, (x1, y1), (x2, y2), matplotlib.colors.hsv_to_rgb([ie/float(len(edges)), 1.0, 1.0])*255, thickness=2)
-
- for i, keypoint in enumerate(peaks):
- x, y = keypoint
- cv2.circle(canvas, (x, y), 4, (0, 0, 255), thickness=-1)
- if show_number:
- cv2.putText(canvas, str(i), (x, y), cv2.FONT_HERSHEY_SIMPLEX, 0.3, (0, 0, 0), lineType=cv2.LINE_AA)
- return canvas
-
-# detect hand according to body pose keypoints
-# please refer to https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/src/openpose/hand/handDetector.cpp
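-# Estimates a square hand box for each person by extrapolating from the wrist
-# along the elbow-to-wrist direction, mirroring the OpenPose C++ heuristic.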
-def handDetect(candidate, subset, oriImg):
- # right hand: wrist 4, elbow 3, shoulder 2
- # left hand: wrist 7, elbow 6, shoulder 5
- ratioWristElbow = 0.33
- detect_result = []
- image_height, image_width = oriImg.shape[0:2]
- for person in subset.astype(int):
- # check whether shoulder, elbow and wrist are all detected for each arm
- has_left = np.sum(person[[5, 6, 7]] == -1) == 0
- has_right = np.sum(person[[2, 3, 4]] == -1) == 0
- if not (has_left or has_right):
- continue
- hands = []
- #left hand
- if has_left:
- left_shoulder_index, left_elbow_index, left_wrist_index = person[[5, 6, 7]]
- x1, y1 = candidate[left_shoulder_index][:2]
- x2, y2 = candidate[left_elbow_index][:2]
- x3, y3 = candidate[left_wrist_index][:2]
- hands.append([x1, y1, x2, y2, x3, y3, True])
- # right hand
- if has_right:
- right_shoulder_index, right_elbow_index, right_wrist_index = person[[2, 3, 4]]
- x1, y1 = candidate[right_shoulder_index][:2]
- x2, y2 = candidate[right_elbow_index][:2]
- x3, y3 = candidate[right_wrist_index][:2]
- hands.append([x1, y1, x2, y2, x3, y3, False])
-
- for x1, y1, x2, y2, x3, y3, is_left in hands:
- # pos_hand = pos_wrist + ratio * (pos_wrist - pos_elbox) = (1 + ratio) * pos_wrist - ratio * pos_elbox
- # handRectangle.x = posePtr[wrist*3] + ratioWristElbow * (posePtr[wrist*3] - posePtr[elbow*3]);
- # handRectangle.y = posePtr[wrist*3+1] + ratioWristElbow * (posePtr[wrist*3+1] - posePtr[elbow*3+1]);
- # const auto distanceWristElbow = getDistance(poseKeypoints, person, wrist, elbow);
- # const auto distanceElbowShoulder = getDistance(poseKeypoints, person, elbow, shoulder);
- # handRectangle.width = 1.5f * fastMax(distanceWristElbow, 0.9f * distanceElbowShoulder);
- x = x3 + ratioWristElbow * (x3 - x2)
- y = y3 + ratioWristElbow * (y3 - y2)
- distanceWristElbow = math.sqrt((x3 - x2) ** 2 + (y3 - y2) ** 2)
- distanceElbowShoulder = math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)
- width = 1.5 * max(distanceWristElbow, 0.9 * distanceElbowShoulder)
- # x-y refers to the center --> offset to topLeft point
- # handRectangle.x -= handRectangle.width / 2.f;
- # handRectangle.y -= handRectangle.height / 2.f;
- x -= width / 2
- y -= width / 2 # width = height
- # clip the box so it does not overflow the image
- if x < 0: x = 0
- if y < 0: y = 0
- width1 = width
- width2 = width
- if x + width > image_width: width1 = image_width - x
- if y + width > image_height: width2 = image_height - y
- width = min(width1, width2)
- # discard hand boxes smaller than 20 pixels
- if width >= 20:
- detect_result.append([int(x), int(y), int(width), is_left])
-
- '''
- return value: [[x, y, w, True if left hand else False]].
- width = height since the network requires square input.
- x, y are the coordinates of the top-left corner
- '''
- return detect_result
-
-# get the (row, column) index of the maximum value of a 2d array
-def npmax(array):
- arrayindex = array.argmax(1)
- arrayvalue = array.max(1)
- i = arrayvalue.argmax()
- j = arrayindex[i]
- return i, j
diff --git a/spaces/TheFunniestValentine/rp/Dockerfile b/spaces/TheFunniestValentine/rp/Dockerfile
deleted file mode 100644
index 6953fc05439efb70991552cf56f28365b5b6c15b..0000000000000000000000000000000000000000
--- a/spaces/TheFunniestValentine/rp/Dockerfile
+++ /dev/null
@@ -1,11 +0,0 @@
-FROM node:18
-
-WORKDIR /app
-
-RUN npm install express express-http-proxy
-
-COPY . .
-
-EXPOSE 7860
-
-CMD [ "node", "server.js" ]
\ No newline at end of file
diff --git a/spaces/Theivaprakasham/yolov6/yolov6/utils/envs.py b/spaces/Theivaprakasham/yolov6/yolov6/utils/envs.py
deleted file mode 100644
index 10159a9484ed525ad5ef3826ec3db4bf70b4c9cc..0000000000000000000000000000000000000000
--- a/spaces/Theivaprakasham/yolov6/yolov6/utils/envs.py
+++ /dev/null
@@ -1,54 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-import os
-import random
-import numpy as np
-
-import torch
-import torch.backends.cudnn as cudnn
-from yolov6.utils.events import LOGGER
-
-
-def get_envs():
- """Get PyTorch needed environments from system envirionments."""
- local_rank = int(os.getenv('LOCAL_RANK', -1))
- rank = int(os.getenv('RANK', -1))
- world_size = int(os.getenv('WORLD_SIZE', 1))
- return local_rank, rank, world_size
-
-
-def select_device(device):
- """Set devices' information to the program.
- Args:
- device: a string, like 'cpu' or '1,2,3,4'
- Returns:
- torch.device
- """
- if device == 'cpu':
- os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
- LOGGER.info('Using CPU for training... ')
- elif device:
- os.environ['CUDA_VISIBLE_DEVICES'] = device
- assert torch.cuda.is_available()
- nd = len(device.strip().split(','))
- LOGGER.info(f'Using {nd} GPU for training... ')
- cuda = device != 'cpu' and torch.cuda.is_available()
- device = torch.device('cuda:0' if cuda else 'cpu')
- return device
-
-
-def set_random_seed(seed, deterministic=False):
- """ Set random state to random libray, numpy, torch and cudnn.
- Args:
- seed: int value.
- deterministic: bool value.
- """
- random.seed(seed)
- np.random.seed(seed)
- torch.manual_seed(seed)
- if deterministic:
- cudnn.deterministic = True
- cudnn.benchmark = False
- else:
- cudnn.deterministic = False
- cudnn.benchmark = True
diff --git a/spaces/ThirdEyeData/Customer-Conversion-Prediction/supv/regress.py b/spaces/ThirdEyeData/Customer-Conversion-Prediction/supv/regress.py
deleted file mode 100644
index 63153db65a2b13f170f04cb886190ebe374e729f..0000000000000000000000000000000000000000
--- a/spaces/ThirdEyeData/Customer-Conversion-Prediction/supv/regress.py
+++ /dev/null
@@ -1,253 +0,0 @@
-#!/usr/local/bin/python3
-
-# avenir-python: Machine Learning
-# Author: Pranab Ghosh
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you
-# may not use this file except in compliance with the License. You may
-# obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied. See the License for the specific language governing
-# permissions and limitations under the License.
-
-# Package imports
-import os
-import sys
-import matplotlib.pyplot as plt
-import numpy as np
-import sklearn as sk
-import matplotlib
-import random
-import jprops
-from io import StringIO
-from sklearn.model_selection import cross_val_score
-import joblib
-from random import randint
-from io import StringIO
-from sklearn.linear_model import LinearRegression, ElasticNet
-sys.path.append(os.path.abspath("../lib"))
-from util import *
-from mlutil import *
-from pasearch import *
-
-class BaseRegressor(object):
- """
- base regression class
- """
-
- def __init__(self, configFile, defValues):
- """
- initializer
- """
- defValues["common.mode"] = ("train", None)
- defValues["common.model.directory"] = ("model", None)
- defValues["common.model.file"] = (None, None)
- defValues["common.scale.file.path"] = (None, "missing scale file path")
- defValues["common.preprocessing"] = (None, None)
- defValues["common.verbose"] = (False, None)
- defValues["train.data.file"] = (None, "missing training data file")
- defValues["train.data.fields"] = (None, "missing training data field ordinals")
- defValues["train.data.feature.fields"] = (None, "missing training data feature field ordinals")
- defValues["train.data.out.field"] = (None, "missing out field ordinal")
-
- self.config = Configuration(configFile, defValues)
- self.featData = None
- self.outData = None
- self.regressor = None
- self.verbose = self.config.getBooleanConfig("common.verbose")[0]
- self.mode = self.config.getStringConfig("common.mode")[0]
- logFilePath = self.config.getStringConfig("common.logging.file")[0]
- logLevName = self.config.getStringConfig("common.logging.level")[0]
- self.logger = createLogger(__name__, logFilePath, logLevName)
- self.logger.info("********* starting session")
-
- def initConfig(self, configFile, defValues):
- """
- initialize config
- """
- self.config = Configuration(configFile, defValues)
-
- def getConfig(self):
- """
- get config object
- """
- return self.config
-
- def setConfigParam(self, name, value):
- """
- set config param
- """
- self.config.setParam(name, value)
-
- def getMode(self):
- """
- get mode
- """
- return self.mode
-
- def train(self):
- """
- train model
- """
- #build model
- self.buildModel()
-
- # training data
- if self.featData is None:
- (featData, outData) = self.prepData("train")
- (self.featData, self.outData) = (featData, outData)
- else:
- (featData, outData) = (self.featData, self.outData)
-
- # parameters
- modelSave = self.config.getBooleanConfig("train.model.save")[0]
-
- #train
- self.logger.info("...training model")
- self.regressor.fit(featData, outData)
- rsqScore = self.regressor.score(featData, outData)
- coef = self.regressor.coef_
- intc = self.regressor.intercept_
- result = (rsqScore, intc, coef)
-
- if modelSave:
- self.logger.info("...saving model")
- modelFilePath = self.getModelFilePath()
- joblib.dump(self.regressor, modelFilePath)
- return result
-
- def validate(self):
- # create model
- self.prepModel()
-
- # prepare test data
- (featData, outDataActual) = self.prepData("validate")
-
- #predict
- self.logger.info("...predicting")
- outDataPred = self.regressor.predict(featData)
-
- #error
- rsqScore = self.regressor.score(featData, outDataActual)
- result = (outDataPred, rsqScore)
- return result
-
- def predict(self):
- """
- predict using trained model
- """
- # create model
- self.prepModel()
-
- # prepare test data
- featData = self.prepData("predict")[0]
-
- #predict
- self.logger.info("...predicting")
- outData = self.regressor.predict(featData)
- return outData
-
- def prepData(self, mode):
- """
- loads and prepares data for training, validation or prediction
- """
- # parameters
- key = mode + ".data.file"
- dataFile = self.config.getStringConfig(key)[0]
-
- key = mode + ".data.fields"
- fieldIndices = self.config.getStringConfig(key)[0]
- if not fieldIndices is None:
- fieldIndices = strToIntArray(fieldIndices, ",")
-
-
- key = mode + ".data.feature.fields"
- featFieldIndices = self.config.getStringConfig(key)[0]
- if not featFieldIndices is None:
- featFieldIndices = strToIntArray(featFieldIndices, ",")
-
- if not mode == "predict":
- key = mode + ".data.out.field"
- outFieldIndex = self.config.getIntConfig(key)[0]
-
- #load data
- (data, featData) = loadDataFile(dataFile, ",", fieldIndices, featFieldIndices)
- if (self.config.getStringConfig("common.preprocessing")[0] == "scale"):
- featData = sk.preprocessing.scale(featData)
- outData = None
- if not mode == "predict":
- outData = extrColumns(data, outFieldIndex)
- return (featData, outData)
-
- def prepModel(self):
- """
- load saved model or train model
- """
- useSavedModel = self.config.getBooleanConfig("predict.use.saved.model")[0]
- if (useSavedModel and not self.regressor):
- # load saved model
- self.logger.info("...loading saved model")
- modelFilePath = self.getModelFilePath()
- self.regressor = joblib.load(modelFilePath)
- else:
- # train model
- self.train()
-
-class LinearRegressor(BaseRegressor):
- """
- linear regression
- """
- def __init__(self, configFile):
- defValues = {}
- defValues["train.normalize"] = (False, None)
-
- super(LinearRegressor, self).__init__(configFile, defValues)
-
- def buildModel(self):
- """
- builds model object
- """
- self.logger.info("...building linear regression model")
- normalize = self.config.getBooleanConfig("train.normalize")[0]
- self.regressor = LinearRegression(normalize=normalize)
-
-class ElasticNetRegressor(BaseRegressor):
- """
- elastic net regression
- """
- def __init__(self, configFile):
- defValues = {}
- defValues["train.alpha"] = (1.0, None)
- defValues["train.loneratio"] = (0.5, None)
- defValues["train.normalize"] = (False, None)
- defValues["train.precompute"] = (False, None)
- defValues["train.max.iter"] = (1000, None)
- defValues["train.tol"] = (0.0001, None)
- defValues["train.random.state"] = (None, None)
- defValues["train.selection"] = ("cyclic", None)
-
- super(ElasticNetRegressor, self).__init__(configFile, defValues)
-
- def buildModel(self):
- """
- builds model object
- """
- self.logger.info("...building elastic net regression model")
- alpha = self.config.getFloatConfig("train.alpha")[0]
- loneratio = self.config.getFloatConfig("train.loneratio")[0]
- normalize = self.config.getBooleanConfig("train.normalize")[0]
- precompute = self.config.getBooleanConfig("train.precompute")[0]
- maxIter = self.config.getIntConfig("train.max.iter")[0]
- tol = self.config.getFloatConfig("train.tol")[0]
- randState = self.config.getIntConfig("train.random.state")[0]
- selection = self.config.getStringConfig("train.selection")[0]
-
- self.regressor = ElasticNet(alpha=alpha, l1_ratio=loneratio, normalize=normalize, precompute=precompute,
- max_iter=maxIter, tol=tol, random_state=randState, selection=selection)
-
-
diff --git a/spaces/Volkopat/SegmentAnythingxGroundingDINO/groundingdino/config/GroundingDINO_SwinB_cfg.py b/spaces/Volkopat/SegmentAnythingxGroundingDINO/groundingdino/config/GroundingDINO_SwinB_cfg.py
deleted file mode 100644
index f490c4bbd598a35de43d36ceafcbd769e7ff21bf..0000000000000000000000000000000000000000
--- a/spaces/Volkopat/SegmentAnythingxGroundingDINO/groundingdino/config/GroundingDINO_SwinB_cfg.py
+++ /dev/null
@@ -1,43 +0,0 @@
-batch_size = 1
-modelname = "groundingdino"
-backbone = "swin_B_384_22k"
-position_embedding = "sine"
-pe_temperatureH = 20
-pe_temperatureW = 20
-return_interm_indices = [1, 2, 3]
-backbone_freeze_keywords = None
-enc_layers = 6
-dec_layers = 6
-pre_norm = False
-dim_feedforward = 2048
-hidden_dim = 256
-dropout = 0.0
-nheads = 8
-num_queries = 900
-query_dim = 4
-num_patterns = 0
-num_feature_levels = 4
-enc_n_points = 4
-dec_n_points = 4
-two_stage_type = "standard"
-two_stage_bbox_embed_share = False
-two_stage_class_embed_share = False
-transformer_activation = "relu"
-dec_pred_bbox_embed_share = True
-dn_box_noise_scale = 1.0
-dn_label_noise_ratio = 0.5
-dn_label_coef = 1.0
-dn_bbox_coef = 1.0
-embed_init_tgt = True
-dn_labelbook_size = 2000
-max_text_len = 256
-text_encoder_type = "bert-base-uncased"
-use_text_enhancer = True
-use_fusion_layer = True
-use_checkpoint = True
-use_transformer_ckpt = True
-use_text_cross_attention = True
-text_dropout = 0.0
-fusion_dropout = 0.0
-fusion_droppath = 0.1
-sub_sentence_present = True
diff --git a/spaces/WayneLinn/Singapore_Air_Quality_Prediction/app.py b/spaces/WayneLinn/Singapore_Air_Quality_Prediction/app.py
deleted file mode 100644
index 537b866d27e27cd24cf4e903dbc12f9e27ec4079..0000000000000000000000000000000000000000
--- a/spaces/WayneLinn/Singapore_Air_Quality_Prediction/app.py
+++ /dev/null
@@ -1,59 +0,0 @@
-import gradio as gr
-import numpy as np
-from PIL import Image
-import requests
-import os
-import plotly.express as px
-import pandas as pd
-
-import hopsworks
-import joblib
-
-project = hopsworks.login(project="test42",api_key_value=os.environ.get("HOPSWORKS_API_KEYS"))
-fs = project.get_feature_store()
-dataset_api = project.get_dataset_api()
-dataset_api.download("Resources/aqi_results.csv",overwrite=True)
-aqi = pd.read_csv('aqi_results.csv')
-
-'''
-def update():
- dataset_api.download("Resources/aqi_results.csv")
- aqi = pd.read_csv('aqi_results.csv')
- return aqi
-
-with gr.Blocks() as demo:
- gr.Markdown("Air Quality Index Prediction")
- with gr.Row():
- with gr.Column():
- gr.Label("Predicted AQI in next 7 days in Singapore")
- out = gr.Dataframe()
- btn = gr.Button("Refresh")
- btn.click(fn=update, inputs=None, outputs=out)
-'''
-
-
-
-
-def plotly_plot():
- # prepare some data
- dataset_api.download("Resources/aqi_results.csv",overwrite=True)
- aqi = pd.read_csv('aqi_results.csv')
-
- x = list(aqi['datetime'])
- y = list(aqi['aqi'])
-
- data = pd.DataFrame()
- data['Datetime'] = x
- data['AQI'] = y
- # create a new plot
- p = px.bar(data, x='Datetime', y='AQI')
-
- return p
-
-# show the results
-outputs = gr.Plot()
-
-demo1 = gr.Interface(fn=plotly_plot, inputs=None, outputs=outputs)
-
-
-demo1.launch()
diff --git a/spaces/WhyLIM/ChatGPT-academic/crazy_functions/test_project/latex/attention/background.tex b/spaces/WhyLIM/ChatGPT-academic/crazy_functions/test_project/latex/attention/background.tex
deleted file mode 100644
index 785069dc0f9143bad24e640056dd1072d5c6e5b5..0000000000000000000000000000000000000000
--- a/spaces/WhyLIM/ChatGPT-academic/crazy_functions/test_project/latex/attention/background.tex
+++ /dev/null
@@ -1,58 +0,0 @@
-The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU \citep{extendedngpu}, ByteNet \citep{NalBytenet2017} and ConvS2S \citep{JonasFaceNet2017}, all of which use convolutional neural networks as a basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows with the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions \citep{hochreiter2001gradient}. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in section~\ref{sec:attention}.
-
-Self-attention, sometimes called intra-attention, is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations \citep{cheng2016long, decomposableAttnModel, paulus2017deep, lin2017structured}.
-
-End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks \citep{sukhbaatar2015}.
-
-To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution.
-In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as \citep{neural_gpu, NalBytenet2017} and \citep{JonasFaceNet2017}.
-
-
-%\citep{JonasFaceNet2017} report new SOTA on machine translation for English-to-German (EnDe), Enlish-to-French (EnFr) and English-to-Romanian language pairs.
-
-%For example,! in MT, we must draw information from both input and previous output words to translate an output word accurately. An attention layer \citep{bahdanau2014neural} can connect a very large number of positions at low computation cost, making it an essential ingredient in competitive recurrent models for machine translation.
-
-%A natural question to ask then is, "Could we replace recurrence with attention?". \marginpar{Don't know if it's the most natural question to ask given the previous statements. Also, need to say that the complexity table summarizes these statements} Such a model would be blessed with the computational efficiency of attention and the power of cross-positional communication. In this work, show that pure attention models work remarkably well for MT, achieving new SOTA results on EnDe and EnFr, and can be trained in under $2$ days on xyz architecture.
-
-%After the seminal models introduced in \citep{sutskever14, bahdanau2014neural, cho2014learning}, recurrent models have become the dominant solution for both sequence modeling and sequence-to-sequence transduction. Many efforts such as \citep{wu2016google,luong2015effective,jozefowicz2016exploring} have pushed the boundaries of machine translation (MT) and language modeling with recurrent endoder-decoder and recurrent language models. Recent effort \citep{shazeer2017outrageously} has successfully combined the power of conditional computation with sequence models to train very large models for MT, pushing SOTA at lower computational cost.
-
-%Recurrent models compute a vector of hidden states $h_t$, for each time step $t$ of computation. $h_t$ is a function of both the input at time $t$ and the previous hidden state $h_t$. This dependence on the previous hidden state precludes processing all timesteps at once, instead requiring long sequences of sequential operations. In practice, this results in greatly reduced computational efficiency, as on modern computing hardware, a single operation on a large batch is much faster than a large number of operations on small batches. The problem gets worse at longer sequence lengths. Although sequential computation is not a severe bottleneck at inference time, as autoregressively generating each output requires all previous outputs, the inability to compute scores at all output positions at once hinders us from rapidly training our models over large datasets. Although impressive work such as \citep{Kuchaiev2017Factorization} is able to significantly accelerate the training of LSTMs with factorization tricks, we are still bound by the linear dependence on sequence length.
-
-%If the model could compute hidden states at each time step using only the inputs and outputs, it would be liberated from the dependence on results from previous time steps during training. This line of thought is the foundation of recent efforts such as the Markovian neural GPU \citep{neural_gpu}, ByteNet \citep{NalBytenet2017} and ConvS2S \citep{JonasFaceNet2017}, all of which use convolutional neural networks as a building block to compute hidden representations simultaneously for all timesteps, resulting in $O(1)$ sequential time complexity. \citep{JonasFaceNet2017} report new SOTA on machine translation for English-to-German (EnDe), Enlish-to-French (EnFr) and English-to-Romanian language pairs.
-
-%A crucial component for accurate sequence prediction is modeling cross-positional communication. For example, in MT, we must draw information from both input and previous output words to translate an output word accurately. An attention layer \citep{bahdanau2014neural} can connect a very large number of positions at a low computation cost, also $O(1)$ sequential time complexity, making it an essential ingredient in recurrent encoder-decoder architectures for MT. A natural question to ask then is, "Could we replace recurrence with attention?". \marginpar{Don't know if it's the most natural question to ask given the previous statements. Also, need to say that the complexity table summarizes these statements} Such a model would be blessed with the computational efficiency of attention and the power of cross-positional communication. In this work, show that pure attention models work remarkably well for MT, achieving new SOTA results on EnDe and EnFr, and can be trained in under $2$ days on xyz architecture.
-
-
-
-%Note: Facebook model is no better than RNNs in this regard, since it requires a number of layers proportional to the distance you want to communicate. Bytenet is more promising, since it requires a logarithmnic number of layers (does bytenet have SOTA results)?
-
-%Note: An attention layer can connect a very large number of positions at a low computation cost in O(1) sequential operations. This is why encoder-decoder attention has been so successful in seq-to-seq models so far. It is only natural, then, to also use attention to connect the timesteps of the same sequence.
-
-%Note: I wouldn't say that long sequences are not a problem during inference. It would be great if we could infer with no long sequences. We could just say later on that, while our training graph is constant-depth, our model still requires sequential operations in the decoder part during inference due to the autoregressive nature of the model.
-
-%\begin{table}[h!]
-%\caption{Attention models are quite efficient for cross-positional communications when sequence length is smaller than channel depth. $n$ represents the sequence length and $d$ represents the channel depth.}
-%\label{tab:op_complexities}
-%\begin{center}
-%\vspace{-5pt}
-%\scalebox{0.75}{
-
-%\begin{tabular}{l|c|c|c}
-%\hline \hline
-%Layer Type & Receptive & Complexity & Sequential \\
-% & Field & & Operations \\
-%\hline
-%Pointwise Feed-Forward & $1$ & $O(n \cdot d^2)$ & $O(1)$ \\
-%\hline
-%Recurrent & $n$ & $O(n \cdot d^2)$ & $O(n)$ \\
-%\hline
-%Convolutional & $r$ & $O(r \cdot n \cdot d^2)$ & $O(1)$ \\
-%\hline
-%Convolutional (separable) & $r$ & $O(r \cdot n \cdot d + n %\cdot d^2)$ & $O(1)$ \\
-%\hline
-%Attention & $r$ & $O(r \cdot n \cdot d)$ & $O(1)$ \\
-%\hline \hline
-%\end{tabular}
-%}
-%\end{center}
-%\end{table}
\ No newline at end of file
diff --git a/spaces/XzJosh/Lumi-Bert-VITS2/preprocess_text.py b/spaces/XzJosh/Lumi-Bert-VITS2/preprocess_text.py
deleted file mode 100644
index 5eb0f3b9e929fcbe91dcbeb653391227a2518a15..0000000000000000000000000000000000000000
--- a/spaces/XzJosh/Lumi-Bert-VITS2/preprocess_text.py
+++ /dev/null
@@ -1,64 +0,0 @@
-import json
-from random import shuffle
-
-import tqdm
-from text.cleaner import clean_text
-from collections import defaultdict
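-# stages: 1 = clean the raw transcription into phonemes/tones,
-# 2 = split the cleaned list into train/val sets and build the speaker-id map,
-# 3 = write the speaker-id map (spk2id) into the config file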
-stage = [1,2,3]
-
-transcription_path = 'filelists/genshin.list'
-train_path = 'filelists/train.list'
-val_path = 'filelists/val.list'
-config_path = "configs/config.json"
-val_per_spk = 4
-max_val_total = 8
-
-if 1 in stage:
- with open( transcription_path+'.cleaned', 'w', encoding='utf-8') as f:
- for line in tqdm.tqdm(open(transcription_path, encoding='utf-8').readlines()):
- try:
- utt, spk, language, text = line.strip().split('|')
- norm_text, phones, tones, word2ph = clean_text(text, language)
- f.write('{}|{}|{}|{}|{}|{}|{}\n'.format(utt, spk, language, norm_text, ' '.join(phones),
- " ".join([str(i) for i in tones]),
- " ".join([str(i) for i in word2ph])))
- except Exception as error :
- print("err!", utt, error)
-
-if 2 in stage:
- spk_utt_map = defaultdict(list)
- spk_id_map = {}
- current_sid = 0
-
- with open( transcription_path+'.cleaned', encoding='utf-8') as f:
- for line in f.readlines():
- utt, spk, language, text, phones, tones, word2ph = line.strip().split('|')
- spk_utt_map[spk].append(line)
- if spk not in spk_id_map.keys():
- spk_id_map[spk] = current_sid
- current_sid += 1
- train_list = []
- val_list = []
-
- for spk, utts in spk_utt_map.items():
- shuffle(utts)
- val_list+=utts[:val_per_spk]
- train_list+=utts[val_per_spk:]
- if len(val_list) > max_val_total:
- train_list+=val_list[max_val_total:]
- val_list = val_list[:max_val_total]
-
- with open( train_path,"w", encoding='utf-8') as f:
- for line in train_list:
- f.write(line)
-
- with open(val_path, "w", encoding='utf-8') as f:
- for line in val_list:
- f.write(line)
-
-if 3 in stage:
- assert 2 in stage
- config = json.load(open(config_path, encoding='utf-8'))
- config["data"]['spk2id'] = spk_id_map
- with open(config_path, 'w', encoding='utf-8') as f:
- json.dump(config, f, indent=2, ensure_ascii=False)
diff --git a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/models/unet_1d.py b/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/models/unet_1d.py
deleted file mode 100644
index 29d1d707f55a026458defd2bc0ec089ecc10653a..0000000000000000000000000000000000000000
--- a/spaces/YeOldHermit/Super-Resolution-Anime-Diffusion/diffusers/models/unet_1d.py
+++ /dev/null
@@ -1,245 +0,0 @@
-# Copyright 2022 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass
-from typing import Optional, Tuple, Union
-
-import torch
-import torch.nn as nn
-
-from ..configuration_utils import ConfigMixin, register_to_config
-from ..modeling_utils import ModelMixin
-from ..utils import BaseOutput
-from .embeddings import GaussianFourierProjection, TimestepEmbedding, Timesteps
-from .unet_1d_blocks import get_down_block, get_mid_block, get_out_block, get_up_block
-
-
-@dataclass
-class UNet1DOutput(BaseOutput):
- """
- Args:
- sample (`torch.FloatTensor` of shape `(batch_size, num_channels, sample_size)`):
- Hidden states output. Output of last layer of model.
- """
-
- sample: torch.FloatTensor
-
-
-class UNet1DModel(ModelMixin, ConfigMixin):
- r"""
- UNet1DModel is a 1D UNet model that takes in a noisy sample and a timestep and returns sample shaped output.
-
- This model inherits from [`ModelMixin`]. Check the superclass documentation for the generic methods the library
- implements for all the model (such as downloading or saving, etc.)
-
- Parameters:
- sample_size (`int`, *optional*): Default length of sample. Should be adaptable at runtime.
- in_channels (`int`, *optional*, defaults to 2): Number of channels in the input sample.
- out_channels (`int`, *optional*, defaults to 2): Number of channels in the output.
- time_embedding_type (`str`, *optional*, defaults to `"fourier"`): Type of time embedding to use.
- freq_shift (`float`, *optional*, defaults to 0.0): Frequency shift for fourier time embedding.
- flip_sin_to_cos (`bool`, *optional*, defaults to :
- obj:`False`): Whether to flip sin to cos for fourier time embedding.
- down_block_types (`Tuple[str]`, *optional*, defaults to :
- obj:`("DownBlock1D", "DownBlock1DNoSkip", "AttnDownBlock1D")`): Tuple of downsample block types.
- up_block_types (`Tuple[str]`, *optional*, defaults to :
- obj:`("UpBlock1D", "UpBlock1DNoSkip", "AttnUpBlock1D")`): Tuple of upsample block types.
- block_out_channels (`Tuple[int]`, *optional*, defaults to :
- obj:`(32, 32, 64)`): Tuple of block output channels.
- mid_block_type (`str`, *optional*, defaults to "UNetMidBlock1D"): block type for middle of UNet.
- out_block_type (`str`, *optional*, defaults to `None`): optional output processing of UNet.
- act_fn (`str`, *optional*, defaults to None): optional activation function in UNet blocks.
- norm_num_groups (`int`, *optional*, defaults to 8): number of groups for group normalization in UNet blocks.
- layers_per_block (`int`, *optional*, defaults to 1): number of layers per UNet block.
- downsample_each_block (`bool`, *optional*, defaults to False):
- experimental feature for using a UNet without upsampling.
- """
-
- @register_to_config
- def __init__(
- self,
- sample_size: int = 65536,
- sample_rate: Optional[int] = None,
- in_channels: int = 2,
- out_channels: int = 2,
- extra_in_channels: int = 0,
- time_embedding_type: str = "fourier",
- flip_sin_to_cos: bool = True,
- use_timestep_embedding: bool = False,
- freq_shift: float = 0.0,
- down_block_types: Tuple[str] = ("DownBlock1DNoSkip", "DownBlock1D", "AttnDownBlock1D"),
- up_block_types: Tuple[str] = ("AttnUpBlock1D", "UpBlock1D", "UpBlock1DNoSkip"),
- mid_block_type: Tuple[str] = "UNetMidBlock1D",
- out_block_type: str = None,
- block_out_channels: Tuple[int] = (32, 32, 64),
- act_fn: str = None,
- norm_num_groups: int = 8,
- layers_per_block: int = 1,
- downsample_each_block: bool = False,
- ):
- super().__init__()
- self.sample_size = sample_size
-
- # time
- if time_embedding_type == "fourier":
- self.time_proj = GaussianFourierProjection(
- embedding_size=8, set_W_to_weight=False, log=False, flip_sin_to_cos=flip_sin_to_cos
- )
- timestep_input_dim = 2 * block_out_channels[0]
- elif time_embedding_type == "positional":
- self.time_proj = Timesteps(
- block_out_channels[0], flip_sin_to_cos=flip_sin_to_cos, downscale_freq_shift=freq_shift
- )
- timestep_input_dim = block_out_channels[0]
-
- if use_timestep_embedding:
- time_embed_dim = block_out_channels[0] * 4
- self.time_mlp = TimestepEmbedding(
- in_channels=timestep_input_dim,
- time_embed_dim=time_embed_dim,
- act_fn=act_fn,
- out_dim=block_out_channels[0],
- )
-
- self.down_blocks = nn.ModuleList([])
- self.mid_block = None
- self.up_blocks = nn.ModuleList([])
- self.out_block = None
-
- # down
- output_channel = in_channels
- for i, down_block_type in enumerate(down_block_types):
- input_channel = output_channel
- output_channel = block_out_channels[i]
-
- if i == 0:
- input_channel += extra_in_channels
-
- is_final_block = i == len(block_out_channels) - 1
-
- down_block = get_down_block(
- down_block_type,
- num_layers=layers_per_block,
- in_channels=input_channel,
- out_channels=output_channel,
- temb_channels=block_out_channels[0],
- add_downsample=not is_final_block or downsample_each_block,
- )
- self.down_blocks.append(down_block)
-
- # mid
- self.mid_block = get_mid_block(
- mid_block_type,
- in_channels=block_out_channels[-1],
- mid_channels=block_out_channels[-1],
- out_channels=block_out_channels[-1],
- embed_dim=block_out_channels[0],
- num_layers=layers_per_block,
- add_downsample=downsample_each_block,
- )
-
- # up
- reversed_block_out_channels = list(reversed(block_out_channels))
- output_channel = reversed_block_out_channels[0]
- if out_block_type is None:
- final_upsample_channels = out_channels
- else:
- final_upsample_channels = block_out_channels[0]
-
- for i, up_block_type in enumerate(up_block_types):
- prev_output_channel = output_channel
- output_channel = (
- reversed_block_out_channels[i + 1] if i < len(up_block_types) - 1 else final_upsample_channels
- )
-
- is_final_block = i == len(block_out_channels) - 1
-
- up_block = get_up_block(
- up_block_type,
- num_layers=layers_per_block,
- in_channels=prev_output_channel,
- out_channels=output_channel,
- temb_channels=block_out_channels[0],
- add_upsample=not is_final_block,
- )
- self.up_blocks.append(up_block)
- prev_output_channel = output_channel
-
- # out
- num_groups_out = norm_num_groups if norm_num_groups is not None else min(block_out_channels[0] // 4, 32)
- self.out_block = get_out_block(
- out_block_type=out_block_type,
- num_groups_out=num_groups_out,
- embed_dim=block_out_channels[0],
- out_channels=out_channels,
- act_fn=act_fn,
- fc_dim=block_out_channels[-1] // 4,
- )
-
- def forward(
- self,
- sample: torch.FloatTensor,
- timestep: Union[torch.Tensor, float, int],
- return_dict: bool = True,
- ) -> Union[UNet1DOutput, Tuple]:
- r"""
- Args:
- sample (`torch.FloatTensor`): `(batch_size, sample_size, num_channels)` noisy inputs tensor
- timestep (`torch.FloatTensor` or `float` or `int): (batch) timesteps
- return_dict (`bool`, *optional*, defaults to `True`):
- Whether or not to return a [`~models.unet_1d.UNet1DOutput`] instead of a plain tuple.
-
- Returns:
- [`~models.unet_1d.UNet1DOutput`] or `tuple`: [`~models.unet_1d.UNet1DOutput`] if `return_dict` is True,
- otherwise a `tuple`. When returning a tuple, the first element is the sample tensor.
- """
-
- # 1. time
- timesteps = timestep
- if not torch.is_tensor(timesteps):
- timesteps = torch.tensor([timesteps], dtype=torch.long, device=sample.device)
- elif torch.is_tensor(timesteps) and len(timesteps.shape) == 0:
- timesteps = timesteps[None].to(sample.device)
-
- timestep_embed = self.time_proj(timesteps)
- if self.config.use_timestep_embedding:
- timestep_embed = self.time_mlp(timestep_embed)
- else:
- timestep_embed = timestep_embed[..., None]
- timestep_embed = timestep_embed.repeat([1, 1, sample.shape[2]]).to(sample.dtype)
-
- # 2. down
- down_block_res_samples = ()
- for downsample_block in self.down_blocks:
- sample, res_samples = downsample_block(hidden_states=sample, temb=timestep_embed)
- down_block_res_samples += res_samples
-
- # 3. mid
- if self.mid_block:
- sample = self.mid_block(sample, timestep_embed)
-
- # 4. up
- for i, upsample_block in enumerate(self.up_blocks):
- res_samples = down_block_res_samples[-1:]
- down_block_res_samples = down_block_res_samples[:-1]
- sample = upsample_block(sample, res_hidden_states_tuple=res_samples, temb=timestep_embed)
-
- # 5. post-process
- if self.out_block:
- sample = self.out_block(sample, timestep_embed)
-
- if not return_dict:
- return (sample,)
-
- return UNet1DOutput(sample=sample)
diff --git a/spaces/Yiqin/ChatVID/model/fastchat/serve/model_worker.py b/spaces/Yiqin/ChatVID/model/fastchat/serve/model_worker.py
deleted file mode 100644
index 65aa2b726fd8de9b57bebdcd73ec4ee350f88af2..0000000000000000000000000000000000000000
--- a/spaces/Yiqin/ChatVID/model/fastchat/serve/model_worker.py
+++ /dev/null
@@ -1,268 +0,0 @@
-"""
-A model worker executes the model.
-"""
-import argparse
-import asyncio
-import dataclasses
-import logging
-import json
-import os
-import time
-from typing import List, Union
-import threading
-import uuid
-
-from fastapi import FastAPI, Request, BackgroundTasks
-from fastapi.responses import StreamingResponse
-import requests
-
-try:
- from transformers import (
- AutoTokenizer,
- AutoModelForCausalLM,
- LlamaTokenizer,
- AutoModel,
- )
-except ImportError:
- from transformers import (
- AutoTokenizer,
- AutoModelForCausalLM,
- LLaMATokenizer,
- AutoModel,
- )
-import torch
-import uvicorn
-
-from fastchat.constants import WORKER_HEART_BEAT_INTERVAL
-from fastchat.serve.inference import load_model, generate_stream
-from fastchat.serve.serve_chatglm import chatglm_generate_stream
-from fastchat.utils import build_logger, server_error_msg, pretty_print_semaphore
-
-GB = 1 << 30
-
-worker_id = str(uuid.uuid4())[:6]
-logger = build_logger("model_worker", f"model_worker_{worker_id}.log")
-global_counter = 0
-
-model_semaphore = None
-
-
-def heart_beat_worker(controller):
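- # Periodically notify the controller that this worker is still alive.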
- while True:
- time.sleep(WORKER_HEART_BEAT_INTERVAL)
- controller.send_heart_beat()
-
-
-class ModelWorker:
- def __init__(
- self,
- controller_addr,
- worker_addr,
- worker_id,
- no_register,
- model_path,
- model_name,
- device,
- num_gpus,
- max_gpu_memory,
- load_8bit=False,
- ):
- self.controller_addr = controller_addr
- self.worker_addr = worker_addr
- self.worker_id = worker_id
- if model_path.endswith("/"):
- model_path = model_path[:-1]
- self.model_name = model_name or model_path.split("/")[-1]
- self.device = device
-
- logger.info(f"Loading the model {self.model_name} on worker {worker_id} ...")
- self.model, self.tokenizer = load_model(
- model_path, device, num_gpus, max_gpu_memory, load_8bit
- )
-
- if hasattr(self.model.config, "max_sequence_length"):
- self.context_len = self.model.config.max_sequence_length
- elif hasattr(self.model.config, "max_position_embeddings"):
- self.context_len = self.model.config.max_position_embeddings
- else:
- self.context_len = 2048
-
- is_chatglm = "chatglm" in str(type(self.model)).lower()
- if is_chatglm:
- self.generate_stream_func = chatglm_generate_stream
- else:
- self.generate_stream_func = generate_stream
-
- if not no_register:
- self.register_to_controller()
- self.heart_beat_thread = threading.Thread(
- target=heart_beat_worker, args=(self,)
- )
- self.heart_beat_thread.start()
-
- def register_to_controller(self):
- logger.info("Register to controller")
-
- url = self.controller_addr + "/register_worker"
- data = {
- "worker_name": self.worker_addr,
- "check_heart_beat": True,
- "worker_status": self.get_status(),
- }
- r = requests.post(url, json=data)
- assert r.status_code == 200
-
- def send_heart_beat(self):
- logger.info(
- f"Send heart beat. Models: {[self.model_name]}. "
- f"Semaphore: {pretty_print_semaphore(model_semaphore)}. "
- f"global_counter: {global_counter}"
- )
-
- url = self.controller_addr + "/receive_heart_beat"
-
- while True:
- try:
- ret = requests.post(
- url,
- json={
- "worker_name": self.worker_addr,
- "queue_length": self.get_queue_length(),
- },
- timeout=5,
- )
- exist = ret.json()["exist"]
- break
- except requests.exceptions.RequestException as e:
- logger.error(f"heart beat error: {e}")
- time.sleep(5)
-
- if not exist:
- self.register_to_controller()
-
- def get_queue_length(self):
- if (
- model_semaphore is None
- or model_semaphore._value is None
- or model_semaphore._waiters is None
- ):
- return 0
- else:
- return (
- args.limit_model_concurrency
- - model_semaphore._value
- + len(model_semaphore._waiters)
- )
-
- def get_status(self):
- return {
- "model_names": [self.model_name],
- "speed": 1,
- "queue_length": self.get_queue_length(),
- }
-
- def generate_stream_gate(self, params):
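- # Stream generated text chunks as null-delimited JSON payloads; CUDA
- # out-of-memory errors are returned as an error payload instead of
- # crashing the worker.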
- try:
- for output in self.generate_stream_func(
- self.model,
- self.tokenizer,
- params,
- self.device,
- self.context_len,
- args.stream_interval,
- ):
- ret = {
- "text": output,
- "error_code": 0,
- }
- yield json.dumps(ret).encode() + b"\0"
- except torch.cuda.OutOfMemoryError:
- ret = {
- "text": server_error_msg,
- "error_code": 1,
- }
- yield json.dumps(ret).encode() + b"\0"
-
-
-app = FastAPI()
-
-
-def release_model_semaphore():
- model_semaphore.release()
-
-
-@app.post("/worker_generate_stream")
-async def api_generate_stream(request: Request):
- global model_semaphore, global_counter
- global_counter += 1
- params = await request.json()
-
- if model_semaphore is None:
- model_semaphore = asyncio.Semaphore(args.limit_model_concurrency)
- await model_semaphore.acquire()
- generator = worker.generate_stream_gate(params)
- background_tasks = BackgroundTasks()
- background_tasks.add_task(release_model_semaphore)
- return StreamingResponse(generator, background=background_tasks)
-
-
-@app.post("/worker_get_status")
-async def api_get_status(request: Request):
- return worker.get_status()
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
- parser.add_argument("--host", type=str, default="localhost")
- parser.add_argument("--port", type=int, default=21002)
- parser.add_argument("--worker-address", type=str, default="http://localhost:21002")
- parser.add_argument(
- "--controller-address", type=str, default="http://localhost:21001"
- )
- parser.add_argument(
- "--model-path",
- type=str,
- default="facebook/opt-350m",
- help="The path to the weights",
- )
- parser.add_argument("--model-name", type=str, help="Optional name")
- parser.add_argument(
- "--device", type=str, choices=["cpu", "cuda", "mps"], default="cuda"
- )
- parser.add_argument("--num-gpus", type=int, default=1)
- parser.add_argument(
- "--gpus",
- type=str,
- default=None,
- help="A single GPU like 1 or multiple GPUs like 0,2"
- )
- parser.add_argument(
- "--max-gpu-memory",
- type=str,
- help="The maximum memory per gpu. Use a string like '13Gib'",
- )
- parser.add_argument("--load-8bit", action="store_true")
- parser.add_argument("--limit-model-concurrency", type=int, default=5)
- parser.add_argument("--stream-interval", type=int, default=2)
- parser.add_argument("--no-register", action="store_true")
- args = parser.parse_args()
- logger.info(f"args: {args}")
-
- if args.gpus:
- if args.num_gpus and len(args.gpus.split(",")) < int(args.num_gpus):
- raise ValueError(f"Larger --num-gpus ({args.num_gpus}) than --gpus {args.gpus}!")
- os.environ["CUDA_VISIBLE_DEVICES"] = args.gpus
-
- worker = ModelWorker(
- args.controller_address,
- args.worker_address,
- worker_id,
- args.no_register,
- args.model_path,
- args.model_name,
- args.device,
- args.num_gpus,
- args.max_gpu_memory,
- args.load_8bit,
- )
- uvicorn.run(app, host=args.host, port=args.port, log_level="info")
diff --git a/spaces/YlcldKlns/bing/src/components/header.tsx b/spaces/YlcldKlns/bing/src/components/header.tsx
deleted file mode 100644
index dc298b722154d1ac6d7a7e148204605562d6cc58..0000000000000000000000000000000000000000
--- a/spaces/YlcldKlns/bing/src/components/header.tsx
+++ /dev/null
@@ -1,12 +0,0 @@
-import * as React from 'react'
-import { UserMenu } from './user-menu'
-
-export async function Header() {
- return (
-
- )
-}
diff --git a/spaces/Yoyo1123/text_generator/app.py b/spaces/Yoyo1123/text_generator/app.py
deleted file mode 100644
index c027232bef8ec3c494d02ee67479d9b0901a6cbc..0000000000000000000000000000000000000000
--- a/spaces/Yoyo1123/text_generator/app.py
+++ /dev/null
@@ -1,11 +0,0 @@
-import gradio as gr
-from gradio.mix import Parallel
-
-title="My First Text Generator"
-description="Input text."
-
-model1=gr.Interface.load("huggingface/EleutherAI/gpt-j-6B")
-model2=gr.Interface.load("huggingface/gpt2")
-model3=gr.Interface.load("huggingface/EleutherAI/gpt-neo-125M")
-
-Parallel(model1, model2, model3, title=title, description=description).launch()
\ No newline at end of file
diff --git a/spaces/Yuliang/ICON/lib/renderer/glm.py b/spaces/Yuliang/ICON/lib/renderer/glm.py
deleted file mode 100644
index 65b1407e6edba36ac5883166a45bbe3ad8fadcce..0000000000000000000000000000000000000000
--- a/spaces/Yuliang/ICON/lib/renderer/glm.py
+++ /dev/null
@@ -1,143 +0,0 @@
-
-# -*- coding: utf-8 -*-
-
-# Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. (MPG) is
-# holder of all proprietary rights on this computer program.
-# You can only use this computer program if you have closed
-# a license agreement with MPG or you get the right to use the computer
-# program from someone who is authorized to grant you that right.
-# Any use of the computer program without a valid license is prohibited and
-# liable to prosecution.
-#
-# Copyright©2019 Max-Planck-Gesellschaft zur Förderung
-# der Wissenschaften e.V. (MPG). acting on behalf of its Max Planck Institute
-# for Intelligent Systems. All rights reserved.
-#
-# Contact: ps-license@tuebingen.mpg.de
-
-import numpy as np
-
-
-def vec3(x, y, z):
- return np.array([x, y, z], dtype=np.float32)
-
-
-def radians(v):
- return np.radians(v)
-
-
-def identity():
- return np.identity(4, dtype=np.float32)
-
-
-def empty():
- return np.zeros([4, 4], dtype=np.float32)
-
-
-def magnitude(v):
- return np.linalg.norm(v)
-
-
-def normalize(v):
- m = magnitude(v)
- return v if m == 0 else v / m
-
-
-def dot(u, v):
- return np.sum(u * v)
-
-
-def cross(u, v):
- res = vec3(0, 0, 0)
- res[0] = u[1] * v[2] - u[2] * v[1]
- res[1] = u[2] * v[0] - u[0] * v[2]
- res[2] = u[0] * v[1] - u[1] * v[0]
- return res
-
-
-# below functions can be optimized
-
-
-def translate(m, v):
- res = np.copy(m)
- res[:, 3] = m[:, 0] * v[0] + m[:, 1] * v[1] + m[:, 2] * v[2] + m[:, 3]
- return res
-
-
-def rotate(m, angle, v):
- a = angle
- c = np.cos(a)
- s = np.sin(a)
-
- axis = normalize(v)
- temp = (1 - c) * axis
-
- rot = empty()
- rot[0][0] = c + temp[0] * axis[0]
- rot[0][1] = temp[0] * axis[1] + s * axis[2]
- rot[0][2] = temp[0] * axis[2] - s * axis[1]
-
- rot[1][0] = temp[1] * axis[0] - s * axis[2]
- rot[1][1] = c + temp[1] * axis[1]
- rot[1][2] = temp[1] * axis[2] + s * axis[0]
-
- rot[2][0] = temp[2] * axis[0] + s * axis[1]
- rot[2][1] = temp[2] * axis[1] - s * axis[0]
- rot[2][2] = c + temp[2] * axis[2]
-
- res = empty()
- res[:, 0] = m[:, 0] * rot[0][0] + m[:, 1] * rot[0][1] + m[:, 2] * rot[0][2]
- res[:, 1] = m[:, 0] * rot[1][0] + m[:, 1] * rot[1][1] + m[:, 2] * rot[1][2]
- res[:, 2] = m[:, 0] * rot[2][0] + m[:, 1] * rot[2][1] + m[:, 2] * rot[2][2]
- res[:, 3] = m[:, 3]
- return res
-
-
-def perspective(fovy, aspect, zNear, zFar):
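- # Perspective projection matrix in the style of glm::perspective; the matrix
- # is assembled row-wise below and transposed on return.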
- tanHalfFovy = np.tan(fovy / 2)
-
- res = empty()
- res[0][0] = 1 / (aspect * tanHalfFovy)
- res[1][1] = 1 / (tanHalfFovy)
- res[2][3] = -1
- res[2][2] = -(zFar + zNear) / (zFar - zNear)
- res[3][2] = -(2 * zFar * zNear) / (zFar - zNear)
-
- return res.T
-
-
-def ortho(left, right, bottom, top, zNear, zFar):
- # res = np.ones([4, 4], dtype=np.float32)
- res = identity()
- res[0][0] = 2 / (right - left)
- res[1][1] = 2 / (top - bottom)
- res[2][2] = -2 / (zFar - zNear)
- res[3][0] = -(right + left) / (right - left)
- res[3][1] = -(top + bottom) / (top - bottom)
- res[3][2] = -(zFar + zNear) / (zFar - zNear)
- return res.T
-
-
-def lookat(eye, center, up):
- f = normalize(center - eye)
- s = normalize(cross(f, up))
- u = cross(s, f)
-
- res = identity()
- res[0][0] = s[0]
- res[1][0] = s[1]
- res[2][0] = s[2]
- res[0][1] = u[0]
- res[1][1] = u[1]
- res[2][1] = u[2]
- res[0][2] = -f[0]
- res[1][2] = -f[1]
- res[2][2] = -f[2]
- res[3][0] = -dot(s, eye)
- res[3][1] = -dot(u, eye)
- res[3][2] = -dot(f, eye)
- return res.T
-
-
-def transform(d, m):
- return np.dot(m, d.T).T
diff --git a/spaces/Zaixi/ICLR_FLAG/utils/__init__.py b/spaces/Zaixi/ICLR_FLAG/utils/__init__.py
deleted file mode 100644
index b66cfe3a767f1adc6218dfda70bbd93a77b0b9fc..0000000000000000000000000000000000000000
--- a/spaces/Zaixi/ICLR_FLAG/utils/__init__.py
+++ /dev/null
@@ -1,3 +0,0 @@
-from .dihedral_utils import batch_dihedrals, rotation_matrix_v2, von_Mises_loss
-from .chemutils import get_clique_mol, tree_decomp, get_mol, get_smiles, set_atommap, get_clique_mol_simple, assemble, mol_to_graph_data_obj_simple
-from .dihedral_utils import rotation_matrix_v2, von_Mises_loss, batch_dihedrals
diff --git a/spaces/Zaixi/ICLR_FLAG/utils/dihedral_utils.py b/spaces/Zaixi/ICLR_FLAG/utils/dihedral_utils.py
deleted file mode 100644
index 1aea8c4bb0ec001677659d3bcfcff1f5eae5888a..0000000000000000000000000000000000000000
--- a/spaces/Zaixi/ICLR_FLAG/utils/dihedral_utils.py
+++ /dev/null
@@ -1,383 +0,0 @@
-import torch
-import torch_geometric as tg
-from torch_geometric.utils import degree
-import networkx as nx
-
-device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
-angle_mask_ref = torch.LongTensor([[0, 0, 0, 0, 0, 0],
- [0, 0, 0, 0, 0, 0],
- [1, 0, 0, 0, 0, 0],
- [1, 1, 1, 0, 0, 0],
- [1, 1, 1, 1, 1, 1]]).to(device)
-
-angle_combos = torch.LongTensor([[0, 1],
- [0, 2],
- [1, 2],
- [0, 3],
- [1, 3],
- [2, 3]]).to(device)
-
-
-def get_neighbor_ids(data):
- """
- Takes the edge indices and returns dictionary mapping atom index to neighbor indices
- Note: this only includes atoms with degree > 1
- """
- # start, end = edge_index
- # idxs, vals = torch.unique(start, return_counts=True)
- # vs = torch.split_with_sizes(end, tuple(vals))
- # return {k.item(): v for k, v in zip(idxs, vs) if len(v) > 1}
- neighbors = data.neighbors.pop(0)
- n_atoms_per_mol = data.batch.bincount()
- n_atoms_prev_mol = 0
-
- for i, n_dict in enumerate(data.neighbors):
- new_dict = {}
- n_atoms_prev_mol += n_atoms_per_mol[i].item()
- for k, v in n_dict.items():
- new_dict[k + n_atoms_prev_mol] = v + n_atoms_prev_mol
- neighbors.update(new_dict)
- return neighbors
-
-
-def get_neighbor_bonds(edge_index, bond_type):
- """
- Takes the edge indices and bond type and returns dictionary mapping atom index to neighbor bond types
- Note: this only includes atoms with degree > 1
- """
- start, end = edge_index
- idxs, vals = torch.unique(start, return_counts=True)
- vs = torch.split_with_sizes(bond_type, tuple(vals))
- return {k.item(): v for k, v in zip(idxs, vs) if len(v) > 1}
-
-
-def get_leaf_hydrogens(neighbors, x):
- """
- Takes the edge indices and atom features and returns dictionary mapping atom index to neighbors, indicating true
- for hydrogens that are leaf nodes
- Note: this only works because degree = 1 and hydrogen atomic number = 1 (checks when 1 == 1)
-    Note: we use the 5th feature index because it corresponds to the atomic number
- """
- # start, end = edge_index
- # degrees = degree(end)
- # idxs, vals = torch.unique(start, return_counts=True)
- # vs = torch.split_with_sizes(end, tuple(vals))
- # return {k.item(): degrees[v] == x[v, 5] for k, v in zip(idxs, vs) if len(v) > 1}
- leaf_hydrogens = {}
- h_mask = x[:, 0] == 1
- for k, v in neighbors.items():
- leaf_hydrogens[k] = h_mask[neighbors[k]]
- return leaf_hydrogens
-
-
-def get_dihedral_pairs(edge_index, data):
- """
- Given edge indices, return pairs of indices that we must calculate dihedrals for
- """
- start, end = edge_index
- degrees = degree(end)
- dihedral_pairs_true = torch.nonzero(torch.logical_and(degrees[start] > 1, degrees[end] > 1))
- dihedral_pairs = edge_index[:, dihedral_pairs_true].squeeze(-1)
-
- # # first method which removes one (pseudo) random edge from a cycle
- dihedral_idxs = torch.nonzero(dihedral_pairs.sort(dim=0).indices[0, :] == 0).squeeze().detach().cpu().numpy()
-
- # prioritize rings for assigning dihedrals
- dihedral_pairs = dihedral_pairs.t()[dihedral_idxs]
- G = nx.to_undirected(tg.utils.to_networkx(data))
- cycles = nx.cycle_basis(G)
- keep, sorted_keep = [], []
-
- if len(dihedral_pairs.shape) == 1:
- dihedral_pairs = dihedral_pairs.unsqueeze(0)
-
- for pair in dihedral_pairs:
- x, y = pair
-
- if sorted(pair) in sorted_keep:
- continue
-
- y_cycle_check = [y in cycle for cycle in cycles]
- x_cycle_check = [x in cycle for cycle in cycles]
-
- if any(x_cycle_check) and any(y_cycle_check): # both in new cycle
- cycle_indices = get_current_cycle_indices(cycles, x_cycle_check, x)
- keep.extend(cycle_indices)
-
- sorted_keep.extend([sorted(c) for c in cycle_indices])
- continue
-
- if any(y_cycle_check):
- cycle_indices = get_current_cycle_indices(cycles, y_cycle_check, y)
- keep.append(pair)
- keep.extend(cycle_indices)
-
- sorted_keep.append(sorted(pair))
- sorted_keep.extend([sorted(c) for c in cycle_indices])
- continue
-
- keep.append(pair)
-
- keep = [t.to(device) for t in keep]
- return torch.stack(keep).t()
-
-
-def batch_distance_metrics_from_coords(coords, mask):
- """
- Given coordinates of neighboring atoms, compute bond
- distances and 2-hop distances in local neighborhood
- """
- d_mat_mask = mask.unsqueeze(1) * mask.unsqueeze(2)
-
- if coords.dim() == 4:
- two_dop_d_mat = torch.square(coords.unsqueeze(1) - coords.unsqueeze(2) + 1e-10).sum(dim=-1).sqrt() * d_mat_mask.unsqueeze(-1)
- one_hop_ds = torch.linalg.norm(torch.zeros_like(coords[0]).unsqueeze(0) - coords, dim=-1)
- elif coords.dim() == 5:
- two_dop_d_mat = torch.square(coords.unsqueeze(2) - coords.unsqueeze(3) + 1e-10).sum(dim=-1).sqrt() * d_mat_mask.unsqueeze(-1).unsqueeze(1)
- one_hop_ds = torch.linalg.norm(torch.zeros_like(coords[0]).unsqueeze(0) - coords, dim=-1)
-
- return one_hop_ds, two_dop_d_mat
-
-
-def batch_angle_between_vectors(a, b):
- """
- Compute angle between two batches of input vectors
- """
- inner_product = (a * b).sum(dim=-1)
-
- # norms
- a_norm = torch.linalg.norm(a, dim=-1)
- b_norm = torch.linalg.norm(b, dim=-1)
-
- # protect denominator during division
- den = a_norm * b_norm + 1e-10
- cos = inner_product / den
-
- return cos
-
-
-def batch_angles_from_coords(coords, mask):
- """
- Given coordinates, compute all local neighborhood angles
- """
- if coords.dim() == 4:
- all_possible_combos = coords[:, angle_combos]
- v_a, v_b = all_possible_combos.split(1, dim=2) # does one of these need to be negative?
- angle_mask = angle_mask_ref[mask.sum(dim=1).long()]
- angles = batch_angle_between_vectors(v_a.squeeze(2), v_b.squeeze(2)) * angle_mask.unsqueeze(-1)
- elif coords.dim() == 5:
- all_possible_combos = coords[:, :, angle_combos]
- v_a, v_b = all_possible_combos.split(1, dim=3) # does one of these need to be negative?
- angle_mask = angle_mask_ref[mask.sum(dim=1).long()]
- angles = batch_angle_between_vectors(v_a.squeeze(3), v_b.squeeze(3)) * angle_mask.unsqueeze(-1).unsqueeze(-1)
-
- return angles
-
-
-def batch_local_stats_from_coords(coords, mask):
- """
- Given neighborhood neighbor coordinates, compute bond distances,
- 2-hop distances, and angles in local neighborhood (this assumes
- the central atom has coordinates at the origin)
- """
- one_hop_ds, two_dop_d_mat = batch_distance_metrics_from_coords(coords, mask)
- angles = batch_angles_from_coords(coords, mask)
- return one_hop_ds, two_dop_d_mat, angles
-
-
-def batch_dihedrals(p0, p1, p2, p3, angle=False):
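-    """
-    Compute the dihedral (torsion) angle defined by the four points p0-p1-p2-p3.
-    Returns (sin, cos) of the angle by default, or the signed angle (via atan2) when angle=True.
-    """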
-
- s1 = p1 - p0
- s2 = p2 - p1
- s3 = p3 - p2
-
- sin_d_ = torch.linalg.norm(s2, dim=-1) * torch.sum(s1 * torch.cross(s2, s3, dim=-1), dim=-1)
- cos_d_ = torch.sum(torch.cross(s1, s2, dim=-1) * torch.cross(s2, s3, dim=-1), dim=-1)
-
- if angle:
- return torch.atan2(sin_d_, cos_d_ + 1e-10)
-
- else:
- den = torch.linalg.norm(torch.cross(s1, s2, dim=-1), dim=-1) * torch.linalg.norm(torch.cross(s2, s3, dim=-1), dim=-1) + 1e-10
- return sin_d_/den, cos_d_/den
-
-
-def batch_vector_angles(xn, x, y, yn):
- uT = xn.view(-1, 3)
- uX = x.view(-1, 3)
- uY = y.view(-1, 3)
- uZ = yn.view(-1, 3)
-
- b1 = uT - uX
- b2 = uZ - uY
-
- num = torch.bmm(b1.view(-1, 1, 3), b2.view(-1, 3, 1)).squeeze(-1).squeeze(-1)
- den = torch.linalg.norm(b1, dim=-1) * torch.linalg.norm(b2, dim=-1) + 1e-10
-
- return (num / den).view(-1, 9)
-
-
-def von_Mises_loss(a, b, a_sin=None, b_sin=None):
- """
-    :param a: cos of first angle
-    :param b: cos of second angle
-    :param a_sin: sin of first angle (optional; if provided, b_sin must be provided as well)
-    :param b_sin: sin of second angle (optional)
-    :return: cosine of the difference between the two angles
- """
- if torch.is_tensor(a_sin):
- out = a * b + a_sin * b_sin
- else:
- out = a * b + torch.sqrt(1-a**2 + 1e-5) * torch.sqrt(1-b**2 + 1e-5)
- return out
-
-
-def rotation_matrix(neighbor_coords, neighbor_mask, neighbor_map, mu=None):
- """
- Given predicted neighbor coordinates from model, return rotation matrix
-
- :param neighbor_coords: neighbor coordinates for each edge as defined by dihedral_pairs
- (n_dihedral_pairs, 4, n_generated_confs, 3)
- :param neighbor_mask: mask describing which atoms are present (n_dihedral_pairs, 4)
- :param neighbor_map: mask describing which neighbor corresponds to the other central dihedral atom
- (n_dihedral_pairs, 4) each entry in neighbor_map should have one TRUE entry with the rest as FALSE
- :return: rotation matrix (n_dihedral_pairs, n_model_confs, 3, 3)
- """
-
- if not torch.is_tensor(mu):
- # mu = neighbor_coords.sum(dim=1, keepdim=True) / (neighbor_mask.sum(dim=-1, keepdim=True).unsqueeze(-1).unsqueeze(-1) + 1e-10)
- mu_num = neighbor_coords[~neighbor_map.bool()].view(neighbor_coords.size(0), 3, neighbor_coords.size(2), -1).sum(dim=1)
- mu_den = (neighbor_mask.sum(dim=-1, keepdim=True).unsqueeze(-1) - 1 + 1e-10)
- mu = mu_num / mu_den # (n_dihedral_pairs, n_model_confs, 10)
- mu = mu.squeeze(1) # (n_dihedral_pairs, n_model_confs, 10)
-
- p_Y = neighbor_coords[neighbor_map.bool(), :]
- h1 = p_Y / (torch.linalg.norm(p_Y, dim=-1, keepdim=True) + 1e-10) # (n_dihedral_pairs, n_model_confs, 10)
-
- h3_1 = torch.cross(p_Y, mu, dim=-1)
- h3 = h3_1 / (torch.linalg.norm(h3_1, dim=-1, keepdim=True) + 1e-10) # (n_dihedral_pairs, n_model_confs, 10)
-
- h2 = -torch.cross(h1, h3, dim=-1) # (n_dihedral_pairs, n_model_confs, 10)
-
- H = torch.cat([h1.unsqueeze(-2),
- h2.unsqueeze(-2),
- h3.unsqueeze(-2)], dim=-2)
-
- return H
-
-
-def rotation_matrix_v2(neighbor_coords):
- """
- Given predicted neighbor coordinates from model, return rotation matrix
- :param neighbor_coords: y or x coordinates for the x or y center node
- (n_dihedral_pairs, 3)
- :return: rotation matrix (n_dihedral_pairs, 3, 3)
- """
-
- p_Y = neighbor_coords
-
- eta_1 = torch.rand_like(p_Y)
- eta_2 = eta_1 - torch.sum(eta_1 * p_Y, dim=-1, keepdim=True) / (torch.linalg.norm(p_Y, dim=-1, keepdim=True)**2 + 1e-10) * p_Y
- eta = eta_2 / torch.linalg.norm(eta_2, dim=-1, keepdim=True)
-
- h1 = p_Y / (torch.linalg.norm(p_Y, dim=-1, keepdim=True) + 1e-10) # (n_dihedral_pairs, n_model_confs, 10)
-
- h3_1 = torch.cross(p_Y, eta, dim=-1)
- h3 = h3_1 / (torch.linalg.norm(h3_1, dim=-1, keepdim=True) + 1e-10) # (n_dihedral_pairs, n_model_confs, 10)
-
- h2 = -torch.cross(h1, h3, dim=-1) # (n_dihedral_pairs, n_model_confs, 10)
-
- H = torch.cat([h1.unsqueeze(-2),
- h2.unsqueeze(-2),
- h3.unsqueeze(-2)], dim=-2)
-
- return H
-
-
-def signed_volume(local_coords):
- """
- Compute signed volume given ordered neighbor local coordinates
-
- :param local_coords: (n_tetrahedral_chiral_centers, 4, n_generated_confs, 3)
- :return: signed volume of each tetrahedral center (n_tetrahedral_chiral_centers, n_generated_confs)
- """
- v1 = local_coords[:, 0] - local_coords[:, 3]
- v2 = local_coords[:, 1] - local_coords[:, 3]
- v3 = local_coords[:, 2] - local_coords[:, 3]
- cp = v2.cross(v3, dim=-1)
- vol = torch.sum(v1 * cp, dim=-1)
- return torch.sign(vol)
-
-
-def rotation_matrix_inf(neighbor_coords, neighbor_mask, neighbor_map):
- """
- Given predicted neighbor coordinates from model, return rotation matrix
-
- :param neighbor_coords: neighbor coordinates for each edge as defined by dihedral_pairs (4, n_model_confs, 3)
- :param neighbor_mask: mask describing which atoms are present (4)
- :param neighbor_map: mask describing which neighbor corresponds to the other central dihedral atom (4)
- each entry in neighbor_map should have one TRUE entry with the rest as FALSE
- :return: rotation matrix (3, 3)
- """
-
- mu = neighbor_coords.sum(dim=0, keepdim=True) / (neighbor_mask.sum(dim=-1, keepdim=True).unsqueeze(-1) + 1e-10)
- mu = mu.squeeze(0)
- p_Y = neighbor_coords[neighbor_map.bool(), :].squeeze(0)
-
- h1 = p_Y / (torch.linalg.norm(p_Y, dim=-1, keepdim=True) + 1e-10)
-
- h3_1 = torch.cross(p_Y, mu, dim=-1)
- h3 = h3_1 / (torch.linalg.norm(h3_1, dim=-1, keepdim=True) + 1e-10)
-
- h2 = -torch.cross(h1, h3, dim=-1)
-
- H = torch.cat([h1.unsqueeze(-2),
- h2.unsqueeze(-2),
- h3.unsqueeze(-2)], dim=-2)
-
- return H
-
-
-def build_alpha_rotation_inf(alpha, n_model_confs):
-
- H_alpha = torch.FloatTensor([[[1, 0, 0], [0, 0, 0], [0, 0, 0]]]).repeat(n_model_confs, 1, 1)
- H_alpha[:, 1, 1] = torch.cos(alpha)
- H_alpha[:, 1, 2] = -torch.sin(alpha)
- H_alpha[:, 2, 1] = torch.sin(alpha)
- H_alpha[:, 2, 2] = torch.cos(alpha)
-
- return H_alpha
-
-
-def random_rotation_matrix(dim):
- yaw = torch.rand(dim)
- pitch = torch.rand(dim)
- roll = torch.rand(dim)
-
- R = torch.stack([torch.stack([torch.cos(yaw) * torch.cos(pitch),
- torch.cos(yaw) * torch.sin(pitch) * torch.sin(roll) - torch.sin(yaw) * torch.cos(
- roll),
- torch.cos(yaw) * torch.sin(pitch) * torch.cos(roll) + torch.sin(yaw) * torch.sin(
- roll)], dim=-1),
- torch.stack([torch.sin(yaw) * torch.cos(pitch),
- torch.sin(yaw) * torch.sin(pitch) * torch.sin(roll) + torch.cos(yaw) * torch.cos(
- roll),
- torch.sin(yaw) * torch.sin(pitch) * torch.cos(roll) - torch.cos(yaw) * torch.sin(
- roll)], dim=-1),
- torch.stack([-torch.sin(pitch),
- torch.cos(pitch) * torch.sin(roll),
- torch.cos(pitch) * torch.cos(roll)], dim=-1)], dim=-2)
-
- return R
-
-
-def length_to_mask(length, max_len=None, dtype=None):
- """length: B.
- return B x max_len.
- If max_len is None, then max of length will be used.
- """
- assert len(length.shape) == 1, 'Length shape should be 1 dimensional.'
- max_len = max_len or length.max().item()
- mask = torch.arange(max_len, device=length.device,
- dtype=length.dtype).expand(len(length), max_len) < length.unsqueeze(1)
- if dtype is not None:
- mask = torch.as_tensor(mask, dtype=dtype, device=length.device)
- return mask
diff --git a/spaces/abby711/FaceRestoration/gfpgan/weights/README.md b/spaces/abby711/FaceRestoration/gfpgan/weights/README.md
deleted file mode 100644
index 4d7b7e642591ef88575d9e6c360a4d29e0cc1a4f..0000000000000000000000000000000000000000
--- a/spaces/abby711/FaceRestoration/gfpgan/weights/README.md
+++ /dev/null
@@ -1,3 +0,0 @@
-# Weights
-
-Put the downloaded weights to this folder.
diff --git a/spaces/abdvl/datahub_qa_bot/docs/how/kafka-config.md b/spaces/abdvl/datahub_qa_bot/docs/how/kafka-config.md
deleted file mode 100644
index f3f81c3d07c01462f675931e26e79ea4f39a13f6..0000000000000000000000000000000000000000
--- a/spaces/abdvl/datahub_qa_bot/docs/how/kafka-config.md
+++ /dev/null
@@ -1,220 +0,0 @@
----
-title: "Configuring Kafka"
-hide_title: true
----
-
-# Configuring Kafka in DataHub
-
-DataHub requires Kafka to operate. Kafka is used as a durable log that can be used to store inbound
-requests to update the Metadata Graph (Metadata Change Proposal), or as a change log detailing the updates
-that have been made to the Metadata Graph (Metadata Change Log).
-
-## Environment Variables
-
-The following environment variables can be used to customize DataHub's connection to Kafka for these DataHub components,
-each of which requires a connection to Kafka:
-
-- `metadata-service` (datahub-gms container)
-- (Advanced - if standalone consumers are deployed) `mce-consumer-job` (datahub-mce-consumer container)
-- (Advanced - if standalone consumers are deployed) `mae-consumer-job` (datahub-mae-consumer container)
-- (Advanced - if product analytics are enabled) datahub-frontend
-
-### Connection Configuration
-
-With the exception of `KAFKA_BOOTSTRAP_SERVER` and `KAFKA_SCHEMAREGISTRY_URL`, Kafka is configured via [spring-boot](https://spring.io/projects/spring-boot), specifically with [KafkaProperties](https://docs.spring.io/spring-boot/docs/current/api/org/springframework/boot/autoconfigure/kafka/KafkaProperties.html). See [Integration Properties](https://docs.spring.io/spring-boot/docs/current/reference/html/appendix-application-properties.html#integration-properties) prefixed with `spring.kafka`.
-
-Below is an example of how SASL/GSSAPI properties can be configured via environment variables:
-
-```bash
-export KAFKA_BOOTSTRAP_SERVER=broker:29092
-export KAFKA_SCHEMAREGISTRY_URL=http://schema-registry:8081
-export SPRING_KAFKA_PROPERTIES_SASL_KERBEROS_SERVICE_NAME=kafka
-export SPRING_KAFKA_PROPERTIES_SECURITY_PROTOCOL=SASL_PLAINTEXT
-export SPRING_KAFKA_PROPERTIES_SASL_JAAS_CONFIG=com.sun.security.auth.module.Krb5LoginModule required principal='principal@REALM' useKeyTab=true storeKey=true keyTab='/keytab';
-```
-
-#### Example: Connecting using AWS IAM (MSK)
-
-Here is another example of how SASL_SSL with AWS_MSK_IAM can be configured via environment variables when connecting to MSK using IAM:
-```bash
-SPRING_KAFKA_PROPERTIES_SECURITY_PROTOCOL=SASL_SSL
-SPRING_KAFKA_PROPERTIES_SSL_TRUSTSTORE_LOCATION=/tmp/kafka.client.truststore.jks
-SPRING_KAFKA_PROPERTIES_SASL_MECHANISM=AWS_MSK_IAM
-SPRING_KAFKA_PROPERTIES_SASL_JAAS_CONFIG=software.amazon.msk.auth.iam.IAMLoginModule required;
-SPRING_KAFKA_PROPERTIES_SASL_CLIENT_CALLBACK_HANDLER_CLASS=software.amazon.msk.auth.iam.IAMClientCallbackHandler
-```
-
-For more information about configuring these variables, check out Spring's [Externalized Configuration](https://docs.spring.io/spring-boot/docs/current/reference/html/spring-boot-features.html#boot-features-external-config) to see how this works.
-Also see [Kafka Connect Security](https://docs.confluent.io/current/connect/security.html) for more ways to connect.
-
-
-### Topic Configuration
-
-DataHub relies on a set of Kafka topics to operate. By default, they have the following names:
-
-- **MetadataChangeProposal_v1**
-- **FailedMetadataChangeProposal_v1**
-- **MetadataChangeLog_Versioned_v1**
-- **MetadataChangeLog_Timeseries_v1**
-- **DataHubUsageEvent_v1**: User behavior tracking event for UI
-- (Deprecated) **MetadataChangeEvent_v4**: Metadata change proposal messages
-- (Deprecated) **MetadataAuditEvent_v4**: Metadata change log messages
-- (Deprecated) **FailedMetadataChangeEvent_v4**: MetadataChangeEvent messages that failed processing
-
-These topics are discussed at more length in [Metadata Events](../what/mxe.md).
-
-We've included environment variables to customize the name of each of these topics, for cases where an organization has naming rules for its topics.
-
-### Metadata Service (datahub-gms)
-
-The following are environment variables you can use to configure topic names used in the Metadata Service container:
-
-- `METADATA_CHANGE_PROPOSAL_TOPIC_NAME`: The name of the topic for Metadata Change Proposals emitted by the ingestion framework.
-- `FAILED_METADATA_CHANGE_PROPOSAL_TOPIC_NAME`: The name of the topic for Metadata Change Proposals emitted when MCPs fail processing.
-- `METADATA_CHANGE_LOG_VERSIONED_TOPIC_NAME`: The name of the topic for Metadata Change Logs that are produced for Versioned Aspects.
-- `METADATA_CHANGE_LOG_TIMESERIES_TOPIC_NAME`: The name of the topic for Metadata Change Logs that are produced for Timeseries Aspects.
-- `PLATFORM_EVENT_TOPIC_NAME`: The name of the topic for Platform Events (high-level semantic events).
-- `DATAHUB_USAGE_EVENT_NAME`: The name of the topic for product analytics events.
-- (Deprecated) `METADATA_CHANGE_EVENT_NAME`: The name of the metadata change event topic.
-- (Deprecated) `METADATA_AUDIT_EVENT_NAME`: The name of the metadata audit event topic.
-- (Deprecated) `FAILED_METADATA_CHANGE_EVENT_NAME`: The name of the failed metadata change event topic.
-
-### MCE Consumer (datahub-mce-consumer)
-
-- `METADATA_CHANGE_PROPOSAL_TOPIC_NAME`: The name of the topic for Metadata Change Proposals emitted by the ingestion framework.
-- `FAILED_METADATA_CHANGE_PROPOSAL_TOPIC_NAME`: The name of the topic for Metadata Change Proposals emitted when MCPs fail processing.
-- (Deprecated) `METADATA_CHANGE_EVENT_NAME`: The name of the deprecated topic that an embedded MCE consumer will consume from.
-- (Deprecated) `FAILED_METADATA_CHANGE_EVENT_NAME`: The name of the deprecated topic that failed MCEs will be written to.
-
-### MAE Consumer (datahub-mae-consumer)
-
-- `METADATA_CHANGE_LOG_VERSIONED_TOPIC_NAME`: The name of the topic for Metadata Change Logs that are produced for Versioned Aspects.
-- `METADATA_CHANGE_LOG_TIMESERIES_TOPIC_NAME`: The name of the topic for Metadata Change Logs that are produced for Timeseries Aspects.
-- `PLATFORM_EVENT_TOPIC_NAME`: The name of the topic for Platform Events (high-level semantic events).
-- `DATAHUB_USAGE_EVENT_NAME`: The name of the topic for product analytics events.
-- (Deprecated) `METADATA_AUDIT_EVENT_NAME`: The name of the deprecated metadata audit event topic.
-
-### DataHub Frontend (datahub-frontend-react)
-
-- `DATAHUB_TRACKING_TOPIC`: The name of the topic used for storing DataHub usage events.
-It should contain the same value as `DATAHUB_USAGE_EVENT_NAME` in the Metadata Service container.
-
-Please ensure that these environment variables are set consistently throughout your ecosystem. DataHub has a few different applications running which communicate with Kafka (see above).
-
-## Configuring Consumer Group Id
-
-Kafka consumers in Spring are configured using Kafka listeners. By default, the consumer group id is the same as the listener id.
-
-We've included an environment variable to customize the consumer group id, if your company or organization has specific naming rules.
-
-### datahub-mce-consumer and datahub-mae-consumer
-
-- `KAFKA_CONSUMER_GROUP_ID`: The name of the kafka consumer's group id.
-
-## Applying Configurations
-
-### Docker
-
-Simply add the above environment variables to the required `docker.env` files for the containers. These can
-be found inside the `docker` folder of the repository.
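-
-For example (file locations and topic names here are purely illustrative), a `docker.env` entry might look like:
-
-```
-METADATA_CHANGE_PROPOSAL_TOPIC_NAME=CustomMetadataChangeProposal_v1
-METADATA_CHANGE_LOG_VERSIONED_TOPIC_NAME=CustomMetadataChangeLogVersioned_v1
-DATAHUB_USAGE_EVENT_NAME=CustomDataHubUsageEvent_v1
-```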
-
-
-### Helm
-
-On Helm, you'll need to configure these environment variables using the `extraEnvs` sections of the specific container's
-configurations inside your `values.yaml` file.
-```
-datahub-gms:
- ...
- extraEnvs:
- - name: METADATA_CHANGE_PROPOSAL_TOPIC_NAME
- value: "CustomMetadataChangeProposal_v1"
- - name: METADATA_CHANGE_LOG_VERSIONED_TOPIC_NAME
- value: "CustomMetadataChangeLogVersioned_v1"
- - name: FAILED_METADATA_CHANGE_PROPOSAL_TOPIC_NAME
- value: "CustomFailedMetadataChangeProposal_v1"
- - name: KAFKA_CONSUMER_GROUP_ID
- value: "my-apps-mae-consumer"
- ....
-
-datahub-frontend:
- ...
- extraEnvs:
- - name: DATAHUB_TRACKING_TOPIC
- value: "MyCustomTrackingEvent"
-
-# If standalone consumers are enabled
-datahub-mae-consumer:
- extraEnvs:
- - name: METADATA_CHANGE_LOG_VERSIONED_TOPIC_NAME
- value: "CustomMetadataChangeLogVersioned_v1"
- ....
- - name: METADATA_AUDIT_EVENT_NAME
- value: "MetadataAuditEvent"
-datahub-mce-consumer:
- extraEnvs:
- - name: METADATA_CHANGE_PROPOSAL_TOPIC_NAME
-      value: "CustomMetadataChangeProposal_v1"
- ....
- - name: METADATA_CHANGE_EVENT_NAME
- value: "MetadataChangeEvent"
- ....
-```
-
-## Other Components that use Kafka can be configured using environment variables:
-- kafka-setup
-- schema-registry
-
-## SASL/GSSAPI properties for kafka-setup and datahub-frontend via environment variables
-```bash
-KAFKA_BOOTSTRAP_SERVER=broker:29092
-KAFKA_SCHEMAREGISTRY_URL=http://schema-registry:8081
-KAFKA_PROPERTIES_SASL_KERBEROS_SERVICE_NAME=kafka
-KAFKA_PROPERTIES_SECURITY_PROTOCOL=SASL_PLAINTEXT
-KAFKA_PROPERTIES_SASL_JAAS_CONFIG=com.sun.security.auth.module.Krb5LoginModule required principal='principal@REALM' useKeyTab=true storeKey=true keyTab='/keytab';
-```
-
-## SASL/GSSAPI properties for schema-registry via environment variables
-```bash
-SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS=broker:29092
-SCHEMA_REGISTRY_KAFKASTORE_SASL_KERBEROS_SERVICE_NAME=kafka
-SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL=SASL_PLAINTEXT
-SCHEMA_REGISTRY_KAFKASTORE_SASL_JAAS_CONFIG=com.sun.security.auth.module.Krb5LoginModule required principal='principal@REALM' useKeyTab=true storeKey=true keyTab='/keytab';
-```
-
-
-## SSL
-
-### Kafka
-
-We are using the Spring Boot framework to start our apps, including setting up Kafka. You can
-[use environment variables to set system properties](https://docs.spring.io/spring-boot/docs/current/reference/html/spring-boot-features.html#boot-features-external-config-relaxed-binding-from-environment-variables),
-including [Kafka properties](https://docs.spring.io/spring-boot/docs/current/reference/html/appendix-application-properties.html#integration-properties).
-From there you can set your SSL configuration for Kafka.
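-
-As an illustrative sketch (property names follow the relaxed-binding pattern shown above; paths and passwords are placeholders), an SSL setup might look like:
-
-```bash
-export SPRING_KAFKA_PROPERTIES_SECURITY_PROTOCOL=SSL
-export SPRING_KAFKA_PROPERTIES_SSL_TRUSTSTORE_LOCATION=/mnt/certs/kafka.client.truststore.jks
-export SPRING_KAFKA_PROPERTIES_SSL_TRUSTSTORE_PASSWORD=changeit
-export SPRING_KAFKA_PROPERTIES_SSL_KEYSTORE_LOCATION=/mnt/certs/kafka.client.keystore.jks
-export SPRING_KAFKA_PROPERTIES_SSL_KEYSTORE_PASSWORD=changeit
-```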
-
-### Schema Registry
-If Schema Registry is configured to use security (SSL), then you also need to set additional values.
-
-The [MCE](../../metadata-jobs/mce-consumer-job) and [MAE](../../metadata-jobs/mae-consumer-job) consumers can set
-default Spring Kafka environment values, for example:
-- `SPRING_KAFKA_PROPERTIES_SCHEMA_REGISTRY_SECURITY_PROTOCOL`
-- `SPRING_KAFKA_PROPERTIES_SCHEMA_REGISTRY_SSL_KEYSTORE_LOCATION`
-- `SPRING_KAFKA_PROPERTIES_SCHEMA_REGISTRY_SSL_KEYSTORE_PASSWORD`
-- `SPRING_KAFKA_PROPERTIES_SCHEMA_REGISTRY_SSL_TRUSTSTORE_LOCATION`
-- `SPRING_KAFKA_PROPERTIES_SCHEMA_REGISTRY_SSL_TRUSTSTORE_PASSWORD`
-
-[GMS](../what/gms.md) can set the following environment variables that will be passed as properties when creating the Schema Registry
-Client.
-- `KAFKA_SCHEMA_REGISTRY_SECURITY_PROTOCOL`
-- `KAFKA_SCHEMA_REGISTRY_SSL_KEYSTORE_LOCATION`
-- `KAFKA_SCHEMA_REGISTRY_SSL_KEYSTORE_PASSWORD`
-- `KAFKA_SCHEMA_REGISTRY_SSL_TRUSTSTORE_LOCATION`
-- `KAFKA_SCHEMA_REGISTRY_SSL_TRUSTSTORE_PASSWORD`
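-
-For example, these could be exported as follows (values are placeholders):
-
-```bash
-export KAFKA_SCHEMA_REGISTRY_SECURITY_PROTOCOL=SSL
-export KAFKA_SCHEMA_REGISTRY_SSL_TRUSTSTORE_LOCATION=/mnt/certs/truststore.jks
-export KAFKA_SCHEMA_REGISTRY_SSL_TRUSTSTORE_PASSWORD=changeit
-```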
-
-> **Note** In the logs you might see something like
-> `The configuration 'kafkastore.ssl.truststore.password' was supplied but isn't a known config.` This configuration is
-> not required by the producer, and these WARN messages can be safely ignored. Each DataHub service is passed the full
-> set of configuration values but may not require all of them; the warning simply indicates that the service received a
-> configuration that is not relevant to it.
-
-> Other errors: `Failed to start bean 'org.springframework.kafka.config.internalKafkaListenerEndpointRegistry'; nested exception is org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [DataHubUsageEvent_v1]`. Please check Ranger permissions or the Kafka broker logs.
diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/exp/upernet_global_small/test_config_g.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/exp/upernet_global_small/test_config_g.py
deleted file mode 100644
index e43737a98a3b174a9f2fe059c06d511144686459..0000000000000000000000000000000000000000
--- a/spaces/abhishek/sketch-to-image/annotator/uniformer/exp/upernet_global_small/test_config_g.py
+++ /dev/null
@@ -1,38 +0,0 @@
-_base_ = [
- '../../configs/_base_/models/upernet_uniformer.py',
- '../../configs/_base_/datasets/ade20k.py',
- '../../configs/_base_/default_runtime.py',
- '../../configs/_base_/schedules/schedule_160k.py'
-]
-model = dict(
- backbone=dict(
- type='UniFormer',
- embed_dim=[64, 128, 320, 512],
- layers=[3, 4, 8, 3],
- head_dim=64,
- drop_path_rate=0.25,
- windows=False,
- hybrid=False,
- ),
- decode_head=dict(
- in_channels=[64, 128, 320, 512],
- num_classes=150
- ),
- auxiliary_head=dict(
- in_channels=320,
- num_classes=150
- ))
-
-# AdamW optimizer, no weight decay for position embedding & layer norm in backbone
-optimizer = dict(_delete_=True, type='AdamW', lr=0.00006, betas=(0.9, 0.999), weight_decay=0.01,
- paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.),
- 'relative_position_bias_table': dict(decay_mult=0.),
- 'norm': dict(decay_mult=0.)}))
-
-lr_config = dict(_delete_=True, policy='poly',
- warmup='linear',
- warmup_iters=1500,
- warmup_ratio=1e-6,
- power=1.0, min_lr=0.0, by_epoch=False)
-
-data=dict(samples_per_gpu=2)
\ No newline at end of file
diff --git a/spaces/abrar-lohia/text-2-character-anim/VQTrans/models/smpl.py b/spaces/abrar-lohia/text-2-character-anim/VQTrans/models/smpl.py
deleted file mode 100644
index c4229abdcc9fd2cc8c1951005ca842e1ca191fae..0000000000000000000000000000000000000000
--- a/spaces/abrar-lohia/text-2-character-anim/VQTrans/models/smpl.py
+++ /dev/null
@@ -1,97 +0,0 @@
-# This code is based on https://github.com/Mathux/ACTOR.git
-import numpy as np
-import torch
-
-import contextlib
-
-from smplx import SMPLLayer as _SMPLLayer
-from smplx.lbs import vertices2joints
-
-
-# action2motion_joints = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 21, 24, 38]
-# change 0 and 8
-action2motion_joints = [8, 1, 2, 3, 4, 5, 6, 7, 0, 9, 10, 11, 12, 13, 14, 21, 24, 38]
-
-from VQTrans.utils.config import SMPL_MODEL_PATH, JOINT_REGRESSOR_TRAIN_EXTRA
-
-JOINTSTYPE_ROOT = {"a2m": 0, # action2motion
- "smpl": 0,
- "a2mpl": 0, # set(smpl, a2m)
- "vibe": 8} # 0 is the 8 position: OP MidHip below
-
-JOINT_MAP = {
- 'OP Nose': 24, 'OP Neck': 12, 'OP RShoulder': 17,
- 'OP RElbow': 19, 'OP RWrist': 21, 'OP LShoulder': 16,
- 'OP LElbow': 18, 'OP LWrist': 20, 'OP MidHip': 0,
- 'OP RHip': 2, 'OP RKnee': 5, 'OP RAnkle': 8,
- 'OP LHip': 1, 'OP LKnee': 4, 'OP LAnkle': 7,
- 'OP REye': 25, 'OP LEye': 26, 'OP REar': 27,
- 'OP LEar': 28, 'OP LBigToe': 29, 'OP LSmallToe': 30,
- 'OP LHeel': 31, 'OP RBigToe': 32, 'OP RSmallToe': 33, 'OP RHeel': 34,
- 'Right Ankle': 8, 'Right Knee': 5, 'Right Hip': 45,
- 'Left Hip': 46, 'Left Knee': 4, 'Left Ankle': 7,
- 'Right Wrist': 21, 'Right Elbow': 19, 'Right Shoulder': 17,
- 'Left Shoulder': 16, 'Left Elbow': 18, 'Left Wrist': 20,
- 'Neck (LSP)': 47, 'Top of Head (LSP)': 48,
- 'Pelvis (MPII)': 49, 'Thorax (MPII)': 50,
- 'Spine (H36M)': 51, 'Jaw (H36M)': 52,
- 'Head (H36M)': 53, 'Nose': 24, 'Left Eye': 26,
- 'Right Eye': 25, 'Left Ear': 28, 'Right Ear': 27
-}
-
-JOINT_NAMES = [
- 'OP Nose', 'OP Neck', 'OP RShoulder',
- 'OP RElbow', 'OP RWrist', 'OP LShoulder',
- 'OP LElbow', 'OP LWrist', 'OP MidHip',
- 'OP RHip', 'OP RKnee', 'OP RAnkle',
- 'OP LHip', 'OP LKnee', 'OP LAnkle',
- 'OP REye', 'OP LEye', 'OP REar',
- 'OP LEar', 'OP LBigToe', 'OP LSmallToe',
- 'OP LHeel', 'OP RBigToe', 'OP RSmallToe', 'OP RHeel',
- 'Right Ankle', 'Right Knee', 'Right Hip',
- 'Left Hip', 'Left Knee', 'Left Ankle',
- 'Right Wrist', 'Right Elbow', 'Right Shoulder',
- 'Left Shoulder', 'Left Elbow', 'Left Wrist',
- 'Neck (LSP)', 'Top of Head (LSP)',
- 'Pelvis (MPII)', 'Thorax (MPII)',
- 'Spine (H36M)', 'Jaw (H36M)',
- 'Head (H36M)', 'Nose', 'Left Eye',
- 'Right Eye', 'Left Ear', 'Right Ear'
-]
-
-
-# adapted from VIBE/SPIN to output smpl_joints, vibe joints and action2motion joints
-class SMPL(_SMPLLayer):
- """ Extension of the official SMPL implementation to support more joints """
-
- def __init__(self, model_path=SMPL_MODEL_PATH, **kwargs):
- kwargs["model_path"] = model_path
-
- # remove the verbosity for the 10-shapes beta parameters
- with contextlib.redirect_stdout(None):
- super(SMPL, self).__init__(**kwargs)
-
- J_regressor_extra = np.load(JOINT_REGRESSOR_TRAIN_EXTRA)
- self.register_buffer('J_regressor_extra', torch.tensor(J_regressor_extra, dtype=torch.float32))
- vibe_indexes = np.array([JOINT_MAP[i] for i in JOINT_NAMES])
- a2m_indexes = vibe_indexes[action2motion_joints]
- smpl_indexes = np.arange(24)
- a2mpl_indexes = np.unique(np.r_[smpl_indexes, a2m_indexes])
-
- self.maps = {"vibe": vibe_indexes,
- "a2m": a2m_indexes,
- "smpl": smpl_indexes,
- "a2mpl": a2mpl_indexes}
-
- def forward(self, *args, **kwargs):
- smpl_output = super(SMPL, self).forward(*args, **kwargs)
-
- extra_joints = vertices2joints(self.J_regressor_extra, smpl_output.vertices)
- all_joints = torch.cat([smpl_output.joints, extra_joints], dim=1)
-
- output = {"vertices": smpl_output.vertices}
-
- for joinstype, indexes in self.maps.items():
- output[joinstype] = all_joints[:, indexes]
-
- return output
\ No newline at end of file
diff --git a/spaces/ahmetfirat/KORKUT_A_Spacetime_Odyssey/README.md b/spaces/ahmetfirat/KORKUT_A_Spacetime_Odyssey/README.md
deleted file mode 100644
index 76b21864ee9174c37e8a5aba82f83b997a86b231..0000000000000000000000000000000000000000
--- a/spaces/ahmetfirat/KORKUT_A_Spacetime_Odyssey/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: KORKUT A Spacetime Odyssey
-emoji: 🚀
-colorFrom: red
-colorTo: blue
-sdk: gradio
-sdk_version: 3.28.2
-app_file: app.py
-pinned: false
-python_version: 3.9
-license: other
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/akhaliq/RealBasicVSR/app.py b/spaces/akhaliq/RealBasicVSR/app.py
deleted file mode 100644
index fcd31ed4484945428b5bc1338125fd4f67f55f77..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/RealBasicVSR/app.py
+++ /dev/null
@@ -1,37 +0,0 @@
-import os
-import sys
-
-os.system("pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cpu/torch1.10/index.html")
-os.system("pip install mmedit")
-os.system("git clone https://github.com/ckkelvinchan/RealBasicVSR.git")
-os.chdir("RealBasicVSR")
-os.system("wget https://upload.wikimedia.org/wikipedia/commons/thumb/e/ec/Mona_Lisa%2C_by_Leonardo_da_Vinci%2C_from_C2RMF_retouched.jpg/800px-Mona_Lisa%2C_by_Leonardo_da_Vinci%2C_from_C2RMF_retouched.jpg -O mona.jpg")
-import gradio as gr
-os.system("wget https://huggingface.co/akhaliq/RealBasicVSR_x4/resolve/main/RealBasicVSR_x4.pth")
-sys.path.append("RealBasicVSR")
-
-os.mkdir("test")
-from PIL import Image
-
-
-def resize(width,img):
- basewidth = width
- wpercent = (basewidth/float(img.size[0]))
- hsize = int((float(img.size[1])*float(wpercent)))
- img = img.resize((basewidth,hsize), Image.ANTIALIAS)
- return img
-
-def inference(img):
- img = resize(256,img)
- img.save("test/test.png")
- os.system("python inference_realbasicvsr.py configs/realbasicvsr_x4.py RealBasicVSR_x4.pth test/ results/demo_000")
- return "results/demo_000/test.png"
-
-title="RealBasicVSR"
-description="Gradio demo for Investigating Tradeoffs in Real-World Video Super-Resolution. To use it, simply upload your image or click on one of the examples to load them. Read more at the links below."
-
-article = "Investigating Tradeoffs in Real-World Video Super-Resolution | Github Repo"
-
-examples=[['mona.jpg']]
-
-gr.Interface(inference,gr.inputs.Image(type="pil"),gr.outputs.Image(type="file"),title=title,description=description,article=article,examples=examples).launch(enable_queue=True)
diff --git a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/README.md b/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/README.md
deleted file mode 100644
index 197444c11e73febf0da40a5db49f9908dfe598f3..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/VQMIVC/ParallelWaveGAN/egs/README.md
+++ /dev/null
@@ -1,165 +0,0 @@
-# Kaldi-style all-in-one recipes
-
-This repository provides [Kaldi](https://github.com/kaldi-asr/kaldi)-style recipes, as the same as [ESPnet](https://github.com/espnet/espnet).
-Currently, the following recipes are supported.
-
-- [LJSpeech](https://keithito.com/LJ-Speech-Dataset/): English female speaker
-- [JSUT](https://sites.google.com/site/shinnosuketakamichi/publication/jsut): Japanese female speaker
-- [JSSS](https://sites.google.com/site/shinnosuketakamichi/research-topics/jsss_corpus): Japanese female speaker
-- [CSMSC](https://www.data-baker.com/open_source.html): Mandarin female speaker
-- [CMU Arctic](http://www.festvox.org/cmu_arctic/): English speakers
-- [JNAS](http://research.nii.ac.jp/src/en/JNAS.html): Japanese multi-speaker
-- [VCTK](https://homepages.inf.ed.ac.uk/jyamagis/page3/page58/page58.html): English multi-speaker
-- [LibriTTS](https://arxiv.org/abs/1904.02882): English multi-speaker
-- [YesNo](https://arxiv.org/abs/1904.02882): English speaker (For debugging)
-
-
-## How to run the recipe
-
-```bash
-# Let us move on the recipe directory
-$ cd egs/ljspeech/voc1
-
-# Run the recipe from scratch
-$ ./run.sh
-
-# You can change config via command line
-$ ./run.sh --conf <your_customized_yaml_config>
-
-# You can select the stage to start and stop
-$ ./run.sh --stage 2 --stop_stage 2
-
-# If you want to specify the gpu
-$ CUDA_VISIBLE_DEVICES=1 ./run.sh --stage 2
-
-# If you want to resume training from 10000 steps checkpoint
-$ ./run.sh --stage 2 --resume <path>/<to>/checkpoint-10000steps.pkl
-```
-
-You can check the command line options in `run.sh`.
-
-The integration with job schedulers such as [slurm](https://slurm.schedmd.com/documentation.html) can be done via `cmd.sh` and `conf/slurm.conf`.
-If you want to use it, please check [this page](https://kaldi-asr.org/doc/queue.html).
-
-All of the hyperparameters are written in a single yaml format configuration file.
-Please check [this example](https://github.com/kan-bayashi/ParallelWaveGAN/blob/master/egs/ljspeech/voc1/conf/parallel_wavegan.v1.yaml) in ljspeech recipe.
-
-You can monitor the training progress via tensorboard.
-
-```bash
-$ tensorboard --logdir exp
-```
-
-
-
-If you want to accelerate the training, you can try distributed multi-gpu training based on apex.
-You need to install apex for distributed training. Please make sure you already installed it.
-Then you can run distributed multi-gpu training via following command:
-
-```bash
-# in the case of the number of gpus = 8
-$ CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7" ./run.sh --stage 2 --n_gpus 8
-```
-
-In the case of distributed training, the batch size will be automatically multiplied by the number of gpus.
-Please be careful.
-
-## How to make the recipe for your own dataset
-
-Here, I will show how to make the recipe for your own dataset.
-
-1. Setup your dataset to be the following structure.
-
- ```bash
- # For single-speaker case
-    $ tree /path/to/database
- /path/to/database
- ├── utt_1.wav
- ├── utt_2.wav
- │ ...
- └── utt_N.wav
- # The directory can be nested, but each filename must be unique
-
- # For multi-speaker case
-    $ tree /path/to/database
- /path/to/database
- ├── spk_1
- │ ├── utt1.wav
- ├── spk_2
- │ ├── utt1.wav
- │ ...
- └── spk_N
- ├── utt1.wav
- ...
- # The directory under each speaker can be nested, but each filename in each speaker directory must be unique
- ```
-
-2. Copy the template directory.
-
- ```bash
- cd egs
-
- # For single speaker case
-    cp -r template_single_spk <your_dataset_name>
-
- # For multi speaker case
-    cp -r template_multi_spk <your_dataset_name>
-
-    # Move on to your recipe
-    cd egs/<your_dataset_name>/voc1
- ```
-
-3. Modify the options in `run.sh`.
- What you need to change at least in `run.sh` is as follows:
- - `db_root`: Root path of the database.
- - `num_dev`: The number of utterances for development set.
- - `num_eval`: The number of utterances for evaluation set.
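-
-    For example, a sketch of the relevant lines in `run.sh` (values are placeholders for your own dataset):
-
-    ```bash
-    db_root=/path/to/database
-    num_dev=100
-    num_eval=100
-    ```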
-
-4. Modify the hyperparameters in `conf/parallel_wavegan.v1.yaml`.
- What you need to change at least in config is as follows:
- - `sampling_rate`: If you can specify the lower sampling rate, the audio will be downsampled by sox.
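-
-    For example, the corresponding entry in `conf/parallel_wavegan.v1.yaml` might look like (the value is illustrative):
-
-    ```yaml
-    sampling_rate: 22050
-    ```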
-
-5. (Optional) Change command backend in `cmd.sh`.
- If you are not familiar with kaldi and run in your local env, you do not need to change.
- See more info on https://kaldi-asr.org/doc/queue.html.
-
-6. Run your recipe.
-
- ```bash
- # Run all stages from the first stage
- ./run.sh
-
- # If you want to specify CUDA device
- CUDA_VISIBLE_DEVICES=0 ./run.sh
- ```
-
-If you want to try the other advanced model, please check the config files in `egs/ljspeech/voc1/conf`.
-
-## Run training using ESPnet2-TTS recipe within 5 minutes
-
-Make sure you have already finished the ESPnet2-TTS recipe experiments (or at least started the training).
-
-```bash
-cd egs
-
-# Please use single spk template for both single and multi spk case
-cp -r template_single_spk <your_dataset_name>
-
-# Move on to your recipe
-cd egs/<your_dataset_name>/voc1
-
-# Make symlink of data directory (Better to use absolute path)
-mkdir dump data
-ln -s /path/to/espnet/egs2/<your_dataset_name>/tts1/dump/raw dump/
-ln -s /path/to/espnet/egs2/<your_dataset_name>/tts1/dump/raw/tr_no_dev data/train_nodev
-ln -s /path/to/espnet/egs2/<your_dataset_name>/tts1/dump/raw/dev data/dev
-ln -s /path/to/espnet/egs2/<your_dataset_name>/tts1/dump/raw/eval1 data/eval
-
-# Edit config to match TTS model setting
-vim conf/parallel_wavegan.v1.yaml
-
-# Run from stage 1
-./run.sh --stage 1 --conf conf/parallel_wavegan.v1.yaml
-```
-
-That's it!
diff --git a/spaces/akhaliq/lama/bin/paper_runfiles/generate_test_paris_256.sh b/spaces/akhaliq/lama/bin/paper_runfiles/generate_test_paris_256.sh
deleted file mode 100644
index 67061298b601ce4e1c37966852421f2153a0d686..0000000000000000000000000000000000000000
--- a/spaces/akhaliq/lama/bin/paper_runfiles/generate_test_paris_256.sh
+++ /dev/null
@@ -1,17 +0,0 @@
-#!/usr/bin/env bash
-
-# paths to data are valid for mml-ws01
-OUT_DIR="/media/inpainting/paper_data/Paris_StreetView_Dataset_val_256"
-
-source "$(dirname $0)/env.sh"
-
-for datadir in paris_eval_gt
-do
- for conf in random_thin_256 random_medium_256 random_thick_256 segm_256
- do
- "$BINDIR/gen_mask_dataset_hydra.py" -cn $conf datadir=$datadir location=mml-ws01-paris \
- location.out_dir=$OUT_DIR cropping.out_square_crop=False cropping.out_min_size=256
-
- "$BINDIR/calc_dataset_stats.py" --samples-n 20 "$OUT_DIR/$datadir/$conf" "$OUT_DIR/$datadir/${conf}_stats"
- done
-done
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/commands/search.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/commands/search.py
deleted file mode 100644
index 03ed925b246dd551ec2ef45095ed6cad00fd2745..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/commands/search.py
+++ /dev/null
@@ -1,174 +0,0 @@
-import logging
-import shutil
-import sys
-import textwrap
-import xmlrpc.client
-from collections import OrderedDict
-from optparse import Values
-from typing import TYPE_CHECKING, Dict, List, Optional
-
-from pip._vendor.packaging.version import parse as parse_version
-
-from pip._internal.cli.base_command import Command
-from pip._internal.cli.req_command import SessionCommandMixin
-from pip._internal.cli.status_codes import NO_MATCHES_FOUND, SUCCESS
-from pip._internal.exceptions import CommandError
-from pip._internal.metadata import get_default_environment
-from pip._internal.models.index import PyPI
-from pip._internal.network.xmlrpc import PipXmlrpcTransport
-from pip._internal.utils.logging import indent_log
-from pip._internal.utils.misc import write_output
-
-if TYPE_CHECKING:
- from typing import TypedDict
-
- class TransformedHit(TypedDict):
- name: str
- summary: str
- versions: List[str]
-
-
-logger = logging.getLogger(__name__)
-
-
-class SearchCommand(Command, SessionCommandMixin):
- """Search for PyPI packages whose name or summary contains ."""
-
- usage = """
- %prog [options] """
- ignore_require_venv = True
-
- def add_options(self) -> None:
- self.cmd_opts.add_option(
- "-i",
- "--index",
- dest="index",
- metavar="URL",
- default=PyPI.pypi_url,
- help="Base URL of Python Package Index (default %default)",
- )
-
- self.parser.insert_option_group(0, self.cmd_opts)
-
- def run(self, options: Values, args: List[str]) -> int:
- if not args:
- raise CommandError("Missing required argument (search query).")
- query = args
- pypi_hits = self.search(query, options)
- hits = transform_hits(pypi_hits)
-
- terminal_width = None
- if sys.stdout.isatty():
- terminal_width = shutil.get_terminal_size()[0]
-
- print_results(hits, terminal_width=terminal_width)
- if pypi_hits:
- return SUCCESS
- return NO_MATCHES_FOUND
-
- def search(self, query: List[str], options: Values) -> List[Dict[str, str]]:
- index_url = options.index
-
- session = self.get_default_session(options)
-
- transport = PipXmlrpcTransport(index_url, session)
- pypi = xmlrpc.client.ServerProxy(index_url, transport)
- try:
- hits = pypi.search({"name": query, "summary": query}, "or")
- except xmlrpc.client.Fault as fault:
- message = "XMLRPC request failed [code: {code}]\n{string}".format(
- code=fault.faultCode,
- string=fault.faultString,
- )
- raise CommandError(message)
- assert isinstance(hits, list)
- return hits
-
-
-def transform_hits(hits: List[Dict[str, str]]) -> List["TransformedHit"]:
- """
- The list from pypi is really a list of versions. We want a list of
- packages with the list of versions stored inline. This converts the
- list from pypi into one we can use.
- """
- packages: Dict[str, "TransformedHit"] = OrderedDict()
- for hit in hits:
- name = hit["name"]
- summary = hit["summary"]
- version = hit["version"]
-
- if name not in packages.keys():
- packages[name] = {
- "name": name,
- "summary": summary,
- "versions": [version],
- }
- else:
- packages[name]["versions"].append(version)
-
- # if this is the highest version, replace summary and score
- if version == highest_version(packages[name]["versions"]):
- packages[name]["summary"] = summary
-
- return list(packages.values())
-
-
-def print_dist_installation_info(name: str, latest: str) -> None:
- env = get_default_environment()
- dist = env.get_distribution(name)
- if dist is not None:
- with indent_log():
- if dist.version == latest:
- write_output("INSTALLED: %s (latest)", dist.version)
- else:
- write_output("INSTALLED: %s", dist.version)
- if parse_version(latest).pre:
- write_output(
- "LATEST: %s (pre-release; install"
- " with `pip install --pre`)",
- latest,
- )
- else:
- write_output("LATEST: %s", latest)
-
-
-def print_results(
- hits: List["TransformedHit"],
- name_column_width: Optional[int] = None,
- terminal_width: Optional[int] = None,
-) -> None:
- if not hits:
- return
- if name_column_width is None:
- name_column_width = (
- max(
- [
- len(hit["name"]) + len(highest_version(hit.get("versions", ["-"])))
- for hit in hits
- ]
- )
- + 4
- )
-
- for hit in hits:
- name = hit["name"]
- summary = hit["summary"] or ""
- latest = highest_version(hit.get("versions", ["-"]))
- if terminal_width is not None:
- target_width = terminal_width - name_column_width - 5
- if target_width > 10:
- # wrap and indent summary to fit terminal
- summary_lines = textwrap.wrap(summary, target_width)
- summary = ("\n" + " " * (name_column_width + 3)).join(summary_lines)
-
- name_latest = f"{name} ({latest})"
- line = f"{name_latest:{name_column_width}} - {summary}"
- try:
- write_output(line)
- print_dist_installation_info(name, latest)
- except UnicodeEncodeError:
- pass
-
-
-def highest_version(versions: List[str]) -> str:
- return max(versions, key=parse_version)
diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/padding.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/padding.py
deleted file mode 100644
index 1b2204f59f2ce4d9c8f2cca85326e4d81f8805bb..0000000000000000000000000000000000000000
--- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/rich/padding.py
+++ /dev/null
@@ -1,141 +0,0 @@
-from typing import cast, List, Optional, Tuple, TYPE_CHECKING, Union
-
-if TYPE_CHECKING:
- from .console import (
- Console,
- ConsoleOptions,
- RenderableType,
- RenderResult,
- )
-from .jupyter import JupyterMixin
-from .measure import Measurement
-from .style import Style
-from .segment import Segment
-
-
-PaddingDimensions = Union[int, Tuple[int], Tuple[int, int], Tuple[int, int, int, int]]
-
-
-class Padding(JupyterMixin):
- """Draw space around content.
-
- Example:
- >>> print(Padding("Hello", (2, 4), style="on blue"))
-
- Args:
- renderable (RenderableType): String or other renderable.
- pad (Union[int, Tuple[int]]): Padding for top, right, bottom, and left borders.
- May be specified with 1, 2, or 4 integers (CSS style).
- style (Union[str, Style], optional): Style for padding characters. Defaults to "none".
- expand (bool, optional): Expand padding to fit available width. Defaults to True.
- """
-
- def __init__(
- self,
- renderable: "RenderableType",
- pad: "PaddingDimensions" = (0, 0, 0, 0),
- *,
- style: Union[str, Style] = "none",
- expand: bool = True,
- ):
- self.renderable = renderable
- self.top, self.right, self.bottom, self.left = self.unpack(pad)
- self.style = style
- self.expand = expand
-
- @classmethod
- def indent(cls, renderable: "RenderableType", level: int) -> "Padding":
- """Make padding instance to render an indent.
-
- Args:
- renderable (RenderableType): String or other renderable.
- level (int): Number of characters to indent.
-
- Returns:
- Padding: A Padding instance.
- """
-
- return Padding(renderable, pad=(0, 0, 0, level), expand=False)
-
- @staticmethod
- def unpack(pad: "PaddingDimensions") -> Tuple[int, int, int, int]:
- """Unpack padding specified in CSS style."""
- if isinstance(pad, int):
- return (pad, pad, pad, pad)
- if len(pad) == 1:
- _pad = pad[0]
- return (_pad, _pad, _pad, _pad)
- if len(pad) == 2:
- pad_top, pad_right = cast(Tuple[int, int], pad)
- return (pad_top, pad_right, pad_top, pad_right)
- if len(pad) == 4:
- top, right, bottom, left = cast(Tuple[int, int, int, int], pad)
- return (top, right, bottom, left)
- raise ValueError(f"1, 2 or 4 integers required for padding; {len(pad)} given")
-
- def __repr__(self) -> str:
- return f"Padding({self.renderable!r}, ({self.top},{self.right},{self.bottom},{self.left}))"
-
- def __rich_console__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> "RenderResult":
- style = console.get_style(self.style)
- if self.expand:
- width = options.max_width
- else:
- width = min(
- Measurement.get(console, options, self.renderable).maximum
- + self.left
- + self.right,
- options.max_width,
- )
- render_options = options.update_width(width - self.left - self.right)
- if render_options.height is not None:
- render_options = render_options.update_height(
- height=render_options.height - self.top - self.bottom
- )
- lines = console.render_lines(
- self.renderable, render_options, style=style, pad=True
- )
- _Segment = Segment
-
- left = _Segment(" " * self.left, style) if self.left else None
- right = (
- [_Segment(f'{" " * self.right}', style), _Segment.line()]
- if self.right
- else [_Segment.line()]
- )
- blank_line: Optional[List[Segment]] = None
- if self.top:
- blank_line = [_Segment(f'{" " * width}\n', style)]
- yield from blank_line * self.top
- if left:
- for line in lines:
- yield left
- yield from line
- yield from right
- else:
- for line in lines:
- yield from line
- yield from right
- if self.bottom:
- blank_line = blank_line or [_Segment(f'{" " * width}\n', style)]
- yield from blank_line * self.bottom
-
- def __rich_measure__(
- self, console: "Console", options: "ConsoleOptions"
- ) -> "Measurement":
- max_width = options.max_width
- extra_width = self.left + self.right
- if max_width - extra_width < 1:
- return Measurement(max_width, max_width)
- measure_min, measure_max = Measurement.get(console, options, self.renderable)
- measurement = Measurement(measure_min + extra_width, measure_max + extra_width)
- measurement = measurement.with_maximum(max_width)
- return measurement
-
-
-if __name__ == "__main__": # pragma: no cover
- from pip._vendor.rich import print
-
- print(Padding("Hello, World", (2, 4), style="on blue"))
diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/src/common/pa_converters.h b/spaces/amarchheda/ChordDuplicate/portaudio/src/common/pa_converters.h
deleted file mode 100644
index 96edf1c92cf4cb4e61b50a9d3cbf4a0577ffbeba..0000000000000000000000000000000000000000
--- a/spaces/amarchheda/ChordDuplicate/portaudio/src/common/pa_converters.h
+++ /dev/null
@@ -1,263 +0,0 @@
-#ifndef PA_CONVERTERS_H
-#define PA_CONVERTERS_H
-/*
- * $Id$
- * Portable Audio I/O Library sample conversion mechanism
- *
- * Based on the Open Source API proposed by Ross Bencina
- * Copyright (c) 1999-2002 Phil Burk, Ross Bencina
- *
- * Permission is hereby granted, free of charge, to any person obtaining
- * a copy of this software and associated documentation files
- * (the "Software"), to deal in the Software without restriction,
- * including without limitation the rights to use, copy, modify, merge,
- * publish, distribute, sublicense, and/or sell copies of the Software,
- * and to permit persons to whom the Software is furnished to do so,
- * subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be
- * included in all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
- * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR
- * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
- * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
- * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
- */
-
-/*
- * The text above constitutes the entire PortAudio license; however,
- * the PortAudio community also makes the following non-binding requests:
- *
- * Any person wishing to distribute modifications to the Software is
- * requested to send the modifications to the original developer so that
- * they can be incorporated into the canonical version. It is also
- * requested that these non-binding requests be included along with the
- * license above.
- */
-
-/** @file
- @ingroup common_src
-
- @brief Conversion functions used to convert buffers of samples from one
- format to another.
-*/
-
-
-#include "portaudio.h" /* for PaSampleFormat */
-
-#ifdef __cplusplus
-extern "C"
-{
-#endif /* __cplusplus */
-
-
-struct PaUtilTriangularDitherGenerator;
-
-
-/** Choose an available sample format which is most appropriate for
- representing the requested format. If the requested format is not available
- higher quality formats are considered before lower quality formats.
- @param availableFormats A variable containing the logical OR of all available
- formats.
- @param format The desired format.
- @return The most appropriate available format for representing the requested
- format.
-*/
-PaSampleFormat PaUtil_SelectClosestAvailableFormat(
- PaSampleFormat availableFormats, PaSampleFormat format );
-
-
-/* high level conversions functions for use by implementations */
-
-
-/** The generic sample converter prototype. Sample converters convert count
- samples from sourceBuffer to destinationBuffer. The actual type of the data
-    pointed to by these parameters varies for different converter functions.
- @param destinationBuffer A pointer to the first sample of the destination.
- @param destinationStride An offset between successive destination samples
- expressed in samples (not bytes.) It may be negative.
- @param sourceBuffer A pointer to the first sample of the source.
- @param sourceStride An offset between successive source samples
- expressed in samples (not bytes.) It may be negative.
- @param count The number of samples to convert.
- @param ditherState State information used to calculate dither. Converters
- that do not perform dithering will ignore this parameter, in which case
- NULL or invalid dither state may be passed.
-*/
-typedef void PaUtilConverter(
- void *destinationBuffer, signed int destinationStride,
- void *sourceBuffer, signed int sourceStride,
- unsigned int count, struct PaUtilTriangularDitherGenerator *ditherGenerator );
-
-
-/** Find a sample converter function for the given source and destinations
- formats and flags (clip and dither.)
- @return
- A pointer to a PaUtilConverter which will perform the requested
- conversion, or NULL if the given format conversion is not supported.
- For conversions where clipping or dithering is not necessary, the
- clip and dither flags are ignored and a non-clipping or dithering
- version is returned.
- If the source and destination formats are the same, a function which
- copies data of the appropriate size will be returned.
-*/
-PaUtilConverter* PaUtil_SelectConverter( PaSampleFormat sourceFormat,
- PaSampleFormat destinationFormat, PaStreamFlags flags );
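-
-/* Illustrative usage sketch (not part of this header): select a non-dithering,
-   non-clipping float32 -> int16 converter and run it over an interleaved buffer.
-   Because dithering is disabled, a NULL dither state may be passed.
-
-       PaUtilConverter *convert =
-           PaUtil_SelectConverter( paFloat32, paInt16, paClipOff | paDitherOff );
-       if( convert )
-           convert( dest, 1, src, 1, frameCount * channelCount, NULL );
-*/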
-
-
-/** The generic buffer zeroer prototype. Buffer zeroers copy count zeros to
-    destinationBuffer. The actual type of the data pointed to varies for
- different zeroer functions.
- @param destinationBuffer A pointer to the first sample of the destination.
- @param destinationStride An offset between successive destination samples
- expressed in samples (not bytes.) It may be negative.
- @param count The number of samples to zero.
-*/
-typedef void PaUtilZeroer(
- void *destinationBuffer, signed int destinationStride, unsigned int count );
-
-
-/** Find a buffer zeroer function for the given destination format.
- @return
- A pointer to a PaUtilZeroer which will perform the requested
- zeroing.
-*/
-PaUtilZeroer* PaUtil_SelectZeroer( PaSampleFormat destinationFormat );
-
-/*----------------------------------------------------------------------------*/
-/* low level functions and data structures which may be used for
- substituting conversion functions */
-
-
-/** The type used to store all sample conversion functions.
- @see paConverters;
-*/
-typedef struct{
- PaUtilConverter *Float32_To_Int32;
- PaUtilConverter *Float32_To_Int32_Dither;
- PaUtilConverter *Float32_To_Int32_Clip;
- PaUtilConverter *Float32_To_Int32_DitherClip;
-
- PaUtilConverter *Float32_To_Int24;
- PaUtilConverter *Float32_To_Int24_Dither;
- PaUtilConverter *Float32_To_Int24_Clip;
- PaUtilConverter *Float32_To_Int24_DitherClip;
-
- PaUtilConverter *Float32_To_Int16;
- PaUtilConverter *Float32_To_Int16_Dither;
- PaUtilConverter *Float32_To_Int16_Clip;
- PaUtilConverter *Float32_To_Int16_DitherClip;
-
- PaUtilConverter *Float32_To_Int8;
- PaUtilConverter *Float32_To_Int8_Dither;
- PaUtilConverter *Float32_To_Int8_Clip;
- PaUtilConverter *Float32_To_Int8_DitherClip;
-
- PaUtilConverter *Float32_To_UInt8;
- PaUtilConverter *Float32_To_UInt8_Dither;
- PaUtilConverter *Float32_To_UInt8_Clip;
- PaUtilConverter *Float32_To_UInt8_DitherClip;
-
- PaUtilConverter *Int32_To_Float32;
- PaUtilConverter *Int32_To_Int24;
- PaUtilConverter *Int32_To_Int24_Dither;
- PaUtilConverter *Int32_To_Int16;
- PaUtilConverter *Int32_To_Int16_Dither;
- PaUtilConverter *Int32_To_Int8;
- PaUtilConverter *Int32_To_Int8_Dither;
- PaUtilConverter *Int32_To_UInt8;
- PaUtilConverter *Int32_To_UInt8_Dither;
-
- PaUtilConverter *Int24_To_Float32;
- PaUtilConverter *Int24_To_Int32;
- PaUtilConverter *Int24_To_Int16;
- PaUtilConverter *Int24_To_Int16_Dither;
- PaUtilConverter *Int24_To_Int8;
- PaUtilConverter *Int24_To_Int8_Dither;
- PaUtilConverter *Int24_To_UInt8;
- PaUtilConverter *Int24_To_UInt8_Dither;
-
- PaUtilConverter *Int16_To_Float32;
- PaUtilConverter *Int16_To_Int32;
- PaUtilConverter *Int16_To_Int24;
- PaUtilConverter *Int16_To_Int8;
- PaUtilConverter *Int16_To_Int8_Dither;
- PaUtilConverter *Int16_To_UInt8;
- PaUtilConverter *Int16_To_UInt8_Dither;
-
- PaUtilConverter *Int8_To_Float32;
- PaUtilConverter *Int8_To_Int32;
- PaUtilConverter *Int8_To_Int24;
- PaUtilConverter *Int8_To_Int16;
- PaUtilConverter *Int8_To_UInt8;
-
- PaUtilConverter *UInt8_To_Float32;
- PaUtilConverter *UInt8_To_Int32;
- PaUtilConverter *UInt8_To_Int24;
- PaUtilConverter *UInt8_To_Int16;
- PaUtilConverter *UInt8_To_Int8;
-
- PaUtilConverter *Copy_8_To_8; /* copy without any conversion */
- PaUtilConverter *Copy_16_To_16; /* copy without any conversion */
- PaUtilConverter *Copy_24_To_24; /* copy without any conversion */
- PaUtilConverter *Copy_32_To_32; /* copy without any conversion */
-} PaUtilConverterTable;
-
-
-/** A table of pointers to all required converter functions.
- PaUtil_SelectConverter() uses this table to look up the appropriate
- conversion functions. The fields of this structure are initialized
- with default conversion functions. Fields may be NULL, indicating that
- no conversion function is available. User code may substitute optimised
- conversion functions by assigning different function pointers to
- these fields.
-
- @note
- If the PA_NO_STANDARD_CONVERTERS preprocessor variable is defined,
- PortAudio's standard converters will not be compiled, and all fields
- of this structure will be initialized to NULL. In such cases, users
- should supply their own conversion functions if they require PortAudio
- to open a stream that requires sample conversion.
-
- @see PaUtilConverterTable, PaUtilConverter, PaUtil_SelectConverter
-*/
-extern PaUtilConverterTable paConverters;
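
Because paConverters is just a struct of function pointers, an implementation can install optimised routines at startup by overwriting individual fields. A loose Python analogue of that substitution pattern (SimpleNamespace and both function bodies are illustrative, not PortAudio code):

import numpy as np
from types import SimpleNamespace

def float32_to_int16(dst, dst_stride, src, src_stride, count, dither=None):
    # Portable default: one sample at a time.
    for i in range(count):
        dst[i * dst_stride] = np.int16(src[i * src_stride] * 32767)

def float32_to_int16_vectorised(dst, dst_stride, src, src_stride, count, dither=None):
    # "Optimised" drop-in with the same signature (assumes numpy buffers).
    d = np.arange(count) * dst_stride
    s = np.arange(count) * src_stride
    dst[d] = (np.asarray(src)[s] * 32767).astype(np.int16)

converters = SimpleNamespace(Float32_To_Int16=float32_to_int16)
converters.Float32_To_Int16 = float32_to_int16_vectorised   # user substitution
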
-
-
-/** The type used to store all buffer zeroing functions.
- @see paZeroers;
-*/
-typedef struct{
- PaUtilZeroer *ZeroU8; /* unsigned 8 bit, zero == 128 */
- PaUtilZeroer *Zero8;
- PaUtilZeroer *Zero16;
- PaUtilZeroer *Zero24;
- PaUtilZeroer *Zero32;
-} PaUtilZeroerTable;
-
-
-/** A table of pointers to all required zeroer functions.
- PaUtil_SelectZeroer() uses this table to look up the appropriate
- zeroing functions. The fields of this structure are initialized
- with default zeroing functions. User code may substitute optimised
- zeroing functions by assigning different function pointers to
- these fields.
-
- @note
- If the PA_NO_STANDARD_ZEROERS preprocessor variable is defined,
- PortAudio's standard zeroers will not be compiled, and all fields
- of this structure will be initialized to NULL. In such cases, users
- should supply their own zeroing functions for the sample sizes which
- they intend to use.
-
- @see PaUtilZeroerTable, PaUtilZeroer, PaUtil_SelectZeroer
-*/
-extern PaUtilZeroerTable paZeroers;
-
-#ifdef __cplusplus
-}
-#endif /* __cplusplus */
-#endif /* PA_CONVERTERS_H */
diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/src/common/pa_cpuload.c b/spaces/amarchheda/ChordDuplicate/portaudio/src/common/pa_cpuload.c
deleted file mode 100644
index de57db26ab281e5af9f5d0370cc5dc24bc247ba6..0000000000000000000000000000000000000000
--- a/spaces/amarchheda/ChordDuplicate/portaudio/src/common/pa_cpuload.c
+++ /dev/null
@@ -1,105 +0,0 @@
-/*
- * $Id$
- * Portable Audio I/O Library CPU Load measurement functions
- * Portable CPU load measurement facility.
- *
- * Based on the Open Source API proposed by Ross Bencina
- * Copyright (c) 2002 Ross Bencina
- *
- * Permission is hereby granted, free of charge, to any person obtaining
- * a copy of this software and associated documentation files
- * (the "Software"), to deal in the Software without restriction,
- * including without limitation the rights to use, copy, modify, merge,
- * publish, distribute, sublicense, and/or sell copies of the Software,
- * and to permit persons to whom the Software is furnished to do so,
- * subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be
- * included in all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
- * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
- * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
- * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR
- * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
- * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
- * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
- */
-
-/*
- * The text above constitutes the entire PortAudio license; however,
- * the PortAudio community also makes the following non-binding requests:
- *
- * Any person wishing to distribute modifications to the Software is
- * requested to send the modifications to the original developer so that
- * they can be incorporated into the canonical version. It is also
- * requested that these non-binding requests be included along with the
- * license above.
- */
-
-/** @file
- @ingroup common_src
-
- @brief Functions to assist in measuring the CPU utilization of a callback
- stream. Used to implement the Pa_GetStreamCpuLoad() function.
-
- @todo Dynamically calculate the coefficients used to smooth the CPU Load
- Measurements over time to provide a uniform characterisation of CPU Load
- independent of the rate at which PaUtil_BeginCpuLoadMeasurement /
- PaUtil_EndCpuLoadMeasurement are called. see http://www.portaudio.com/trac/ticket/113
-*/
-
-
-#include "pa_cpuload.h"
-
-#include <assert.h>
-
-#include "pa_util.h" /* for PaUtil_GetTime() */
-
-
-void PaUtil_InitializeCpuLoadMeasurer( PaUtilCpuLoadMeasurer* measurer, double sampleRate )
-{
- assert( sampleRate > 0 );
-
- measurer->samplingPeriod = 1. / sampleRate;
- measurer->averageLoad = 0.;
-}
-
-void PaUtil_ResetCpuLoadMeasurer( PaUtilCpuLoadMeasurer* measurer )
-{
- measurer->averageLoad = 0.;
-}
-
-void PaUtil_BeginCpuLoadMeasurement( PaUtilCpuLoadMeasurer* measurer )
-{
- measurer->measurementStartTime = PaUtil_GetTime();
-}
-
-
-void PaUtil_EndCpuLoadMeasurement( PaUtilCpuLoadMeasurer* measurer, unsigned long framesProcessed )
-{
- double measurementEndTime, secondsFor100Percent, measuredLoad;
-
- if( framesProcessed > 0 ){
- measurementEndTime = PaUtil_GetTime();
-
- assert( framesProcessed > 0 );
- secondsFor100Percent = framesProcessed * measurer->samplingPeriod;
-
- measuredLoad = (measurementEndTime - measurer->measurementStartTime) / secondsFor100Percent;
-
- /* Low-pass filter the calculated CPU load with a simple one-pole IIR filter to reduce jitter. */
- /** FIXME @todo these coefficients shouldn't be hardwired see: http://www.portaudio.com/trac/ticket/113 */
-#define LOWPASS_COEFFICIENT_0 (0.9)
-#define LOWPASS_COEFFICIENT_1 (0.99999 - LOWPASS_COEFFICIENT_0)
-
- measurer->averageLoad = (LOWPASS_COEFFICIENT_0 * measurer->averageLoad) +
- (LOWPASS_COEFFICIENT_1 * measuredLoad);
- }
-}
-
-
-double PaUtil_GetCpuLoad( PaUtilCpuLoadMeasurer* measurer )
-{
- return measurer->averageLoad;
-}
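
The load figure above is just elapsed wall-clock time divided by the real-time budget of the processed frames (framesProcessed / sampleRate), smoothed by a one-pole IIR filter. A small Python re-implementation of the same arithmetic, with time.perf_counter standing in for PaUtil_GetTime:

import time

class CpuLoadMeasurer:
    LOWPASS_COEFFICIENT_0 = 0.9
    LOWPASS_COEFFICIENT_1 = 0.99999 - LOWPASS_COEFFICIENT_0

    def __init__(self, sample_rate):
        assert sample_rate > 0
        self.sampling_period = 1.0 / sample_rate
        self.average_load = 0.0
        self._start = 0.0

    def begin(self):
        self._start = time.perf_counter()

    def end(self, frames_processed):
        if frames_processed > 0:
            seconds_for_100_percent = frames_processed * self.sampling_period
            measured = (time.perf_counter() - self._start) / seconds_for_100_percent
            # One-pole IIR low-pass, as in PaUtil_EndCpuLoadMeasurement above.
            self.average_load = (self.LOWPASS_COEFFICIENT_0 * self.average_load
                                 + self.LOWPASS_COEFFICIENT_1 * measured)

m = CpuLoadMeasurer(48000)
m.begin()
# ... render 512 frames of audio here ...
m.end(512)
print(m.average_load)   # near 0 when the callback runs far faster than real time
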
diff --git a/spaces/anaclaudia13ct/insect_detection/export.py b/spaces/anaclaudia13ct/insect_detection/export.py
deleted file mode 100644
index 928992903b0b2a0e2a4fb072a4a373b8afddf208..0000000000000000000000000000000000000000
--- a/spaces/anaclaudia13ct/insect_detection/export.py
+++ /dev/null
@@ -1,653 +0,0 @@
-# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
-"""
-Export a YOLOv5 PyTorch model to other formats. TensorFlow exports authored by https://github.com/zldrobit
-
-Format | `export.py --include` | Model
---- | --- | ---
-PyTorch | - | yolov5s.pt
-TorchScript | `torchscript` | yolov5s.torchscript
-ONNX | `onnx` | yolov5s.onnx
-OpenVINO | `openvino` | yolov5s_openvino_model/
-TensorRT | `engine` | yolov5s.engine
-CoreML | `coreml` | yolov5s.mlmodel
-TensorFlow SavedModel | `saved_model` | yolov5s_saved_model/
-TensorFlow GraphDef | `pb` | yolov5s.pb
-TensorFlow Lite | `tflite` | yolov5s.tflite
-TensorFlow Edge TPU | `edgetpu` | yolov5s_edgetpu.tflite
-TensorFlow.js | `tfjs` | yolov5s_web_model/
-PaddlePaddle | `paddle` | yolov5s_paddle_model/
-
-Requirements:
- $ pip install -r requirements.txt coremltools onnx onnx-simplifier onnxruntime openvino-dev tensorflow-cpu # CPU
- $ pip install -r requirements.txt coremltools onnx onnx-simplifier onnxruntime-gpu openvino-dev tensorflow # GPU
-
-Usage:
- $ python export.py --weights yolov5s.pt --include torchscript onnx openvino engine coreml tflite ...
-
-Inference:
- $ python detect.py --weights yolov5s.pt # PyTorch
- yolov5s.torchscript # TorchScript
- yolov5s.onnx # ONNX Runtime or OpenCV DNN with --dnn
- yolov5s_openvino_model # OpenVINO
- yolov5s.engine # TensorRT
- yolov5s.mlmodel # CoreML (macOS-only)
- yolov5s_saved_model # TensorFlow SavedModel
- yolov5s.pb # TensorFlow GraphDef
- yolov5s.tflite # TensorFlow Lite
- yolov5s_edgetpu.tflite # TensorFlow Edge TPU
- yolov5s_paddle_model # PaddlePaddle
-
-TensorFlow.js:
- $ cd .. && git clone https://github.com/zldrobit/tfjs-yolov5-example.git && cd tfjs-yolov5-example
- $ npm install
- $ ln -s ../../yolov5/yolov5s_web_model public/yolov5s_web_model
- $ npm start
-"""
-
-import argparse
-import contextlib
-import json
-import os
-import platform
-import re
-import subprocess
-import sys
-import time
-import warnings
-from pathlib import Path
-
-import pandas as pd
-import torch
-from torch.utils.mobile_optimizer import optimize_for_mobile
-
-FILE = Path(__file__).resolve()
-ROOT = FILE.parents[0] # YOLOv5 root directory
-if str(ROOT) not in sys.path:
- sys.path.append(str(ROOT)) # add ROOT to PATH
-if platform.system() != 'Windows':
- ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative
-
-from models.experimental import attempt_load
-from models.yolo import ClassificationModel, Detect, DetectionModel, SegmentationModel
-from utils.dataloaders import LoadImages
-from utils.general import (LOGGER, Profile, check_dataset, check_img_size, check_requirements, check_version,
- check_yaml, colorstr, file_size, get_default_args, print_args, url2file, yaml_save)
-from utils.torch_utils import select_device, smart_inference_mode
-
-MACOS = platform.system() == 'Darwin' # macOS environment
-
-
-def export_formats():
- # YOLOv5 export formats
- x = [
- ['PyTorch', '-', '.pt', True, True],
- ['TorchScript', 'torchscript', '.torchscript', True, True],
- ['ONNX', 'onnx', '.onnx', True, True],
- ['OpenVINO', 'openvino', '_openvino_model', True, False],
- ['TensorRT', 'engine', '.engine', False, True],
- ['CoreML', 'coreml', '.mlmodel', True, False],
- ['TensorFlow SavedModel', 'saved_model', '_saved_model', True, True],
- ['TensorFlow GraphDef', 'pb', '.pb', True, True],
- ['TensorFlow Lite', 'tflite', '.tflite', True, False],
- ['TensorFlow Edge TPU', 'edgetpu', '_edgetpu.tflite', False, False],
- ['TensorFlow.js', 'tfjs', '_web_model', False, False],
- ['PaddlePaddle', 'paddle', '_paddle_model', True, True],]
- return pd.DataFrame(x, columns=['Format', 'Argument', 'Suffix', 'CPU', 'GPU'])
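
The same DataFrame that run() uses to validate --include can be queried directly, for example to list the formats flagged as GPU-capable:

fmts = export_formats()
print(fmts[fmts['GPU']][['Format', 'Argument', 'Suffix']])
# e.g. TensorRT is requested with --include engine and produces a .engine file
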
-
-
-def try_export(inner_func):
- # YOLOv5 export decorator, i.e. @try_export
- inner_args = get_default_args(inner_func)
-
- def outer_func(*args, **kwargs):
- prefix = inner_args['prefix']
- try:
- with Profile() as dt:
- f, model = inner_func(*args, **kwargs)
- LOGGER.info(f'{prefix} export success ✅ {dt.t:.1f}s, saved as {f} ({file_size(f):.1f} MB)')
- return f, model
- except Exception as e:
- LOGGER.info(f'{prefix} export failure ❌ {dt.t:.1f}s: {e}')
- return None, None
-
- return outer_func
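
Structurally, try_export is timing plus a broad try/except so one failed format does not abort the rest. The same pattern without the YOLOv5 utilities, for illustration only:

import functools, time

def try_export_sketch(fn):
    # Same idea as try_export above, without Profile()/LOGGER.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        t0 = time.time()
        try:
            path, model = fn(*args, **kwargs)
            print(f'export success ({time.time() - t0:.1f}s) -> {path}')
            return path, model
        except Exception as e:
            print(f'export failure ({time.time() - t0:.1f}s): {e}')
            return None, None
    return wrapper
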
-
-
-@try_export
-def export_torchscript(model, im, file, optimize, prefix=colorstr('TorchScript:')):
- # YOLOv5 TorchScript model export
- LOGGER.info(f'\n{prefix} starting export with torch {torch.__version__}...')
- f = file.with_suffix('.torchscript')
-
- ts = torch.jit.trace(model, im, strict=False)
- d = {"shape": im.shape, "stride": int(max(model.stride)), "names": model.names}
- extra_files = {'config.txt': json.dumps(d)} # torch._C.ExtraFilesMap()
- if optimize: # https://pytorch.org/tutorials/recipes/mobile_interpreter.html
- optimize_for_mobile(ts)._save_for_lite_interpreter(str(f), _extra_files=extra_files)
- else:
- ts.save(str(f), _extra_files=extra_files)
- return f, None
-
-
-@try_export
-def export_onnx(model, im, file, opset, dynamic, simplify, prefix=colorstr('ONNX:')):
- # YOLOv5 ONNX export
- check_requirements('onnx')
- import onnx
-
- LOGGER.info(f'\n{prefix} starting export with onnx {onnx.__version__}...')
- f = file.with_suffix('.onnx')
-
- output_names = ['output0', 'output1'] if isinstance(model, SegmentationModel) else ['output0']
- if dynamic:
- dynamic = {'images': {0: 'batch', 2: 'height', 3: 'width'}} # shape(1,3,640,640)
- if isinstance(model, SegmentationModel):
- dynamic['output0'] = {0: 'batch', 1: 'anchors'} # shape(1,25200,85)
- dynamic['output1'] = {0: 'batch', 2: 'mask_height', 3: 'mask_width'} # shape(1,32,160,160)
- elif isinstance(model, DetectionModel):
- dynamic['output0'] = {0: 'batch', 1: 'anchors'} # shape(1,25200,85)
-
- torch.onnx.export(
- model.cpu() if dynamic else model, # --dynamic only compatible with cpu
- im.cpu() if dynamic else im,
- f,
- verbose=False,
- opset_version=opset,
- do_constant_folding=True, # WARNING: DNN inference with torch>=1.12 may require do_constant_folding=False
- input_names=['images'],
- output_names=output_names,
- dynamic_axes=dynamic or None)
-
- # Checks
- model_onnx = onnx.load(f) # load onnx model
- onnx.checker.check_model(model_onnx) # check onnx model
-
- # Metadata
- d = {'stride': int(max(model.stride)), 'names': model.names}
- for k, v in d.items():
- meta = model_onnx.metadata_props.add()
- meta.key, meta.value = k, str(v)
- onnx.save(model_onnx, f)
-
- # Simplify
- if simplify:
- try:
- cuda = torch.cuda.is_available()
- check_requirements(('onnxruntime-gpu' if cuda else 'onnxruntime', 'onnx-simplifier>=0.4.1'))
- import onnxsim
-
- LOGGER.info(f'{prefix} simplifying with onnx-simplifier {onnxsim.__version__}...')
- model_onnx, check = onnxsim.simplify(model_onnx)
- assert check, 'assert check failed'
- onnx.save(model_onnx, f)
- except Exception as e:
- LOGGER.info(f'{prefix} simplifier failure: {e}')
- return f, model_onnx
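
A quick way to sanity-check the exported file is to run a zero tensor through ONNX Runtime; the input name 'images' and output name 'output0' come from the export call above, while the file name and 640x640 shape are assumptions:

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession('yolov5s.onnx', providers=['CPUExecutionProvider'])
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)
(pred,) = sess.run(['output0'], {'images': dummy})
print(pred.shape)   # e.g. (1, 25200, 85) for the default detection head
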
-
-
-@try_export
-def export_openvino(file, metadata, half, prefix=colorstr('OpenVINO:')):
- # YOLOv5 OpenVINO export
- check_requirements('openvino-dev') # requires openvino-dev: https://pypi.org/project/openvino-dev/
- import openvino.inference_engine as ie
-
- LOGGER.info(f'\n{prefix} starting export with openvino {ie.__version__}...')
- f = str(file).replace('.pt', f'_openvino_model{os.sep}')
-
- cmd = f"mo --input_model {file.with_suffix('.onnx')} --output_dir {f} --data_type {'FP16' if half else 'FP32'}"
- subprocess.run(cmd.split(), check=True, env=os.environ) # export
- yaml_save(Path(f) / file.with_suffix('.yaml').name, metadata) # add metadata.yaml
- return f, None
-
-
-@try_export
-def export_paddle(model, im, file, metadata, prefix=colorstr('PaddlePaddle:')):
- # YOLOv5 Paddle export
- check_requirements(('paddlepaddle', 'x2paddle'))
- import x2paddle
- from x2paddle.convert import pytorch2paddle
-
- LOGGER.info(f'\n{prefix} starting export with X2Paddle {x2paddle.__version__}...')
- f = str(file).replace('.pt', f'_paddle_model{os.sep}')
-
- pytorch2paddle(module=model, save_dir=f, jit_type='trace', input_examples=[im]) # export
- yaml_save(Path(f) / file.with_suffix('.yaml').name, metadata) # add metadata.yaml
- return f, None
-
-
-@try_export
-def export_coreml(model, im, file, int8, half, prefix=colorstr('CoreML:')):
- # YOLOv5 CoreML export
- check_requirements('coremltools')
- import coremltools as ct
-
- LOGGER.info(f'\n{prefix} starting export with coremltools {ct.__version__}...')
- f = file.with_suffix('.mlmodel')
-
- ts = torch.jit.trace(model, im, strict=False) # TorchScript model
- ct_model = ct.convert(ts, inputs=[ct.ImageType('image', shape=im.shape, scale=1 / 255, bias=[0, 0, 0])])
- bits, mode = (8, 'kmeans_lut') if int8 else (16, 'linear') if half else (32, None)
- if bits < 32:
- if MACOS: # quantization only supported on macOS
- with warnings.catch_warnings():
- warnings.filterwarnings("ignore", category=DeprecationWarning) # suppress numpy==1.20 float warning
- ct_model = ct.models.neural_network.quantization_utils.quantize_weights(ct_model, bits, mode)
- else:
- print(f'{prefix} quantization only supported on macOS, skipping...')
- ct_model.save(f)
- return f, ct_model
-
-
-@try_export
-def export_engine(model, im, file, half, dynamic, simplify, workspace=4, verbose=False, prefix=colorstr('TensorRT:')):
- # YOLOv5 TensorRT export https://developer.nvidia.com/tensorrt
- assert im.device.type != 'cpu', 'export running on CPU but must be on GPU, i.e. `python export.py --device 0`'
- try:
- import tensorrt as trt
- except Exception:
- if platform.system() == 'Linux':
- check_requirements('nvidia-tensorrt', cmds='-U --index-url https://pypi.ngc.nvidia.com')
- import tensorrt as trt
-
- if trt.__version__[0] == '7': # TensorRT 7 handling https://github.com/ultralytics/yolov5/issues/6012
- grid = model.model[-1].anchor_grid
- model.model[-1].anchor_grid = [a[..., :1, :1, :] for a in grid]
- export_onnx(model, im, file, 12, dynamic, simplify) # opset 12
- model.model[-1].anchor_grid = grid
- else: # TensorRT >= 8
- check_version(trt.__version__, '8.0.0', hard=True) # require tensorrt>=8.0.0
- export_onnx(model, im, file, 12, dynamic, simplify) # opset 12
- onnx = file.with_suffix('.onnx')
-
- LOGGER.info(f'\n{prefix} starting export with TensorRT {trt.__version__}...')
- assert onnx.exists(), f'failed to export ONNX file: {onnx}'
- f = file.with_suffix('.engine') # TensorRT engine file
- logger = trt.Logger(trt.Logger.INFO)
- if verbose:
- logger.min_severity = trt.Logger.Severity.VERBOSE
-
- builder = trt.Builder(logger)
- config = builder.create_builder_config()
- config.max_workspace_size = workspace * 1 << 30
- # config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, workspace << 30) # fix TRT 8.4 deprecation notice
-
- flag = (1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
- network = builder.create_network(flag)
- parser = trt.OnnxParser(network, logger)
- if not parser.parse_from_file(str(onnx)):
- raise RuntimeError(f'failed to load ONNX file: {onnx}')
-
- inputs = [network.get_input(i) for i in range(network.num_inputs)]
- outputs = [network.get_output(i) for i in range(network.num_outputs)]
- for inp in inputs:
- LOGGER.info(f'{prefix} input "{inp.name}" with shape{inp.shape} {inp.dtype}')
- for out in outputs:
- LOGGER.info(f'{prefix} output "{out.name}" with shape{out.shape} {out.dtype}')
-
- if dynamic:
- if im.shape[0] <= 1:
- LOGGER.warning(f"{prefix} WARNING ⚠️ --dynamic model requires maximum --batch-size argument")
- profile = builder.create_optimization_profile()
- for inp in inputs:
- profile.set_shape(inp.name, (1, *im.shape[1:]), (max(1, im.shape[0] // 2), *im.shape[1:]), im.shape)
- config.add_optimization_profile(profile)
-
- LOGGER.info(f'{prefix} building FP{16 if builder.platform_has_fast_fp16 and half else 32} engine as {f}')
- if builder.platform_has_fast_fp16 and half:
- config.set_flag(trt.BuilderFlag.FP16)
- with builder.build_engine(network, config) as engine, open(f, 'wb') as t:
- t.write(engine.serialize())
- return f, None
-
-
-@try_export
-def export_saved_model(model,
- im,
- file,
- dynamic,
- tf_nms=False,
- agnostic_nms=False,
- topk_per_class=100,
- topk_all=100,
- iou_thres=0.45,
- conf_thres=0.25,
- keras=False,
- prefix=colorstr('TensorFlow SavedModel:')):
- # YOLOv5 TensorFlow SavedModel export
- try:
- import tensorflow as tf
- except Exception:
- check_requirements(f"tensorflow{'' if torch.cuda.is_available() else '-macos' if MACOS else '-cpu'}")
- import tensorflow as tf
- from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2
-
- from models.tf import TFModel
-
- LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...')
- f = str(file).replace('.pt', '_saved_model')
- batch_size, ch, *imgsz = list(im.shape) # BCHW
-
- tf_model = TFModel(cfg=model.yaml, model=model, nc=model.nc, imgsz=imgsz)
- im = tf.zeros((batch_size, *imgsz, ch)) # BHWC order for TensorFlow
- _ = tf_model.predict(im, tf_nms, agnostic_nms, topk_per_class, topk_all, iou_thres, conf_thres)
- inputs = tf.keras.Input(shape=(*imgsz, ch), batch_size=None if dynamic else batch_size)
- outputs = tf_model.predict(inputs, tf_nms, agnostic_nms, topk_per_class, topk_all, iou_thres, conf_thres)
- keras_model = tf.keras.Model(inputs=inputs, outputs=outputs)
- keras_model.trainable = False
- keras_model.summary()
- if keras:
- keras_model.save(f, save_format='tf')
- else:
- spec = tf.TensorSpec(keras_model.inputs[0].shape, keras_model.inputs[0].dtype)
- m = tf.function(lambda x: keras_model(x)) # full model
- m = m.get_concrete_function(spec)
- frozen_func = convert_variables_to_constants_v2(m)
- tfm = tf.Module()
- tfm.__call__ = tf.function(lambda x: frozen_func(x)[:4] if tf_nms else frozen_func(x), [spec])
- tfm.__call__(im)
- tf.saved_model.save(tfm,
- f,
- options=tf.saved_model.SaveOptions(experimental_custom_gradients=False) if check_version(
- tf.__version__, '2.6') else tf.saved_model.SaveOptions())
- return f, keras_model
-
-
-@try_export
-def export_pb(keras_model, file, prefix=colorstr('TensorFlow GraphDef:')):
- # YOLOv5 TensorFlow GraphDef *.pb export https://github.com/leimao/Frozen_Graph_TensorFlow
- import tensorflow as tf
- from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2
-
- LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...')
- f = file.with_suffix('.pb')
-
- m = tf.function(lambda x: keras_model(x)) # full model
- m = m.get_concrete_function(tf.TensorSpec(keras_model.inputs[0].shape, keras_model.inputs[0].dtype))
- frozen_func = convert_variables_to_constants_v2(m)
- frozen_func.graph.as_graph_def()
- tf.io.write_graph(graph_or_graph_def=frozen_func.graph, logdir=str(f.parent), name=f.name, as_text=False)
- return f, None
-
-
-@try_export
-def export_tflite(keras_model, im, file, int8, data, nms, agnostic_nms, prefix=colorstr('TensorFlow Lite:')):
- # YOLOv5 TensorFlow Lite export
- import tensorflow as tf
-
- LOGGER.info(f'\n{prefix} starting export with tensorflow {tf.__version__}...')
- batch_size, ch, *imgsz = list(im.shape) # BCHW
- f = str(file).replace('.pt', '-fp16.tflite')
-
- converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
- converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
- converter.target_spec.supported_types = [tf.float16]
- converter.optimizations = [tf.lite.Optimize.DEFAULT]
- if int8:
- from models.tf import representative_dataset_gen
- dataset = LoadImages(check_dataset(check_yaml(data))['train'], img_size=imgsz, auto=False)
- converter.representative_dataset = lambda: representative_dataset_gen(dataset, ncalib=100)
- converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
- converter.target_spec.supported_types = []
- converter.inference_input_type = tf.uint8 # or tf.int8
- converter.inference_output_type = tf.uint8 # or tf.int8
- converter.experimental_new_quantizer = True
- f = str(file).replace('.pt', '-int8.tflite')
- if nms or agnostic_nms:
- converter.target_spec.supported_ops.append(tf.lite.OpsSet.SELECT_TF_OPS)
-
- tflite_model = converter.convert()
- open(f, "wb").write(tflite_model)
- return f, None
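
The resulting .tflite file can be exercised with TensorFlow's interpreter; the file name below assumes the default FP16 export produced above:

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='yolov5s-fp16.tflite')
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp['index'], np.zeros(inp['shape'], dtype=inp['dtype']))
interpreter.invoke()
print(interpreter.get_tensor(out['index']).shape)
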
-
-
-@try_export
-def export_edgetpu(file, prefix=colorstr('Edge TPU:')):
- # YOLOv5 Edge TPU export https://coral.ai/docs/edgetpu/models-intro/
- cmd = 'edgetpu_compiler --version'
- help_url = 'https://coral.ai/docs/edgetpu/compiler/'
- assert platform.system() == 'Linux', f'export only supported on Linux. See {help_url}'
- if subprocess.run(f'{cmd} >/dev/null', shell=True).returncode != 0:
- LOGGER.info(f'\n{prefix} export requires Edge TPU compiler. Attempting install from {help_url}')
- sudo = subprocess.run('sudo --version >/dev/null', shell=True).returncode == 0 # sudo installed on system
- for c in (
- 'curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -',
- 'echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list',
- 'sudo apt-get update', 'sudo apt-get install edgetpu-compiler'):
- subprocess.run(c if sudo else c.replace('sudo ', ''), shell=True, check=True)
- ver = subprocess.run(cmd, shell=True, capture_output=True, check=True).stdout.decode().split()[-1]
-
- LOGGER.info(f'\n{prefix} starting export with Edge TPU compiler {ver}...')
- f = str(file).replace('.pt', '-int8_edgetpu.tflite') # Edge TPU model
- f_tfl = str(file).replace('.pt', '-int8.tflite') # TFLite model
-
- cmd = f"edgetpu_compiler -s -d -k 10 --out_dir {file.parent} {f_tfl}"
- subprocess.run(cmd.split(), check=True)
- return f, None
-
-
-@try_export
-def export_tfjs(file, prefix=colorstr('TensorFlow.js:')):
- # YOLOv5 TensorFlow.js export
- check_requirements('tensorflowjs')
- import tensorflowjs as tfjs
-
- LOGGER.info(f'\n{prefix} starting export with tensorflowjs {tfjs.__version__}...')
- f = str(file).replace('.pt', '_web_model') # js dir
- f_pb = file.with_suffix('.pb') # *.pb path
- f_json = f'{f}/model.json' # *.json path
-
- cmd = f'tensorflowjs_converter --input_format=tf_frozen_model ' \
- f'--output_node_names=Identity,Identity_1,Identity_2,Identity_3 {f_pb} {f}'
- subprocess.run(cmd.split())
-
- json = Path(f_json).read_text()
- with open(f_json, 'w') as j: # sort JSON Identity_* in ascending order
- subst = re.sub(
- r'{"outputs": {"Identity.?.?": {"name": "Identity.?.?"}, '
- r'"Identity.?.?": {"name": "Identity.?.?"}, '
- r'"Identity.?.?": {"name": "Identity.?.?"}, '
- r'"Identity.?.?": {"name": "Identity.?.?"}}}', r'{"outputs": {"Identity": {"name": "Identity"}, '
- r'"Identity_1": {"name": "Identity_1"}, '
- r'"Identity_2": {"name": "Identity_2"}, '
- r'"Identity_3": {"name": "Identity_3"}}}', json)
- j.write(subst)
- return f, None
-
-
-def add_tflite_metadata(file, metadata, num_outputs):
- # Add metadata to *.tflite models per https://www.tensorflow.org/lite/models/convert/metadata
- with contextlib.suppress(ImportError):
- # check_requirements('tflite_support')
- from tflite_support import flatbuffers
- from tflite_support import metadata as _metadata
- from tflite_support import metadata_schema_py_generated as _metadata_fb
-
- tmp_file = Path('/tmp/meta.txt')
- with open(tmp_file, 'w') as meta_f:
- meta_f.write(str(metadata))
-
- model_meta = _metadata_fb.ModelMetadataT()
- label_file = _metadata_fb.AssociatedFileT()
- label_file.name = tmp_file.name
- model_meta.associatedFiles = [label_file]
-
- subgraph = _metadata_fb.SubGraphMetadataT()
- subgraph.inputTensorMetadata = [_metadata_fb.TensorMetadataT()]
- subgraph.outputTensorMetadata = [_metadata_fb.TensorMetadataT()] * num_outputs
- model_meta.subgraphMetadata = [subgraph]
-
- b = flatbuffers.Builder(0)
- b.Finish(model_meta.Pack(b), _metadata.MetadataPopulator.METADATA_FILE_IDENTIFIER)
- metadata_buf = b.Output()
-
- populator = _metadata.MetadataPopulator.with_model_file(file)
- populator.load_metadata_buffer(metadata_buf)
- populator.load_associated_files([str(tmp_file)])
- populator.populate()
- tmp_file.unlink()
-
-
-@smart_inference_mode()
-def run(
- data=ROOT / 'data/coco128.yaml', # 'dataset.yaml path'
- weights=ROOT / 'yolov5s.pt', # weights path
- imgsz=(640, 640), # image (height, width)
- batch_size=1, # batch size
- device='cpu', # cuda device, i.e. 0 or 0,1,2,3 or cpu
- include=('torchscript', 'onnx'), # include formats
- half=False, # FP16 half-precision export
- inplace=False, # set YOLOv5 Detect() inplace=True
- keras=False, # use Keras
- optimize=False, # TorchScript: optimize for mobile
- int8=False, # CoreML/TF INT8 quantization
- dynamic=False, # ONNX/TF/TensorRT: dynamic axes
- simplify=False, # ONNX: simplify model
- opset=12, # ONNX: opset version
- verbose=False, # TensorRT: verbose log
- workspace=4, # TensorRT: workspace size (GB)
- nms=False, # TF: add NMS to model
- agnostic_nms=False, # TF: add agnostic NMS to model
- topk_per_class=100, # TF.js NMS: topk per class to keep
- topk_all=100, # TF.js NMS: topk for all classes to keep
- iou_thres=0.45, # TF.js NMS: IoU threshold
- conf_thres=0.25, # TF.js NMS: confidence threshold
-):
- t = time.time()
- include = [x.lower() for x in include] # to lowercase
- fmts = tuple(export_formats()['Argument'][1:]) # --include arguments
- flags = [x in include for x in fmts]
- assert sum(flags) == len(include), f'ERROR: Invalid --include {include}, valid --include arguments are {fmts}'
- jit, onnx, xml, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle = flags # export booleans
- file = Path(url2file(weights) if str(weights).startswith(('http:/', 'https:/')) else weights) # PyTorch weights
-
- # Load PyTorch model
- device = select_device(device)
- if half:
- assert device.type != 'cpu' or coreml, '--half only compatible with GPU export, i.e. use --device 0'
- assert not dynamic, '--half not compatible with --dynamic, i.e. use either --half or --dynamic but not both'
- model = attempt_load(weights, device=device, inplace=True, fuse=True) # load FP32 model
-
- # Checks
- imgsz *= 2 if len(imgsz) == 1 else 1 # expand
- if optimize:
- assert device.type == 'cpu', '--optimize not compatible with cuda devices, i.e. use --device cpu'
-
- # Input
- gs = int(max(model.stride)) # grid size (max stride)
- imgsz = [check_img_size(x, gs) for x in imgsz] # verify img_size are gs-multiples
- im = torch.zeros(batch_size, 3, *imgsz).to(device) # image size(1,3,320,192) BCHW iDetection
-
- # Update model
- model.eval()
- for k, m in model.named_modules():
- if isinstance(m, Detect):
- m.inplace = inplace
- m.dynamic = dynamic
- m.export = True
-
- for _ in range(2):
- y = model(im) # dry runs
- if half and not coreml:
- im, model = im.half(), model.half() # to FP16
- shape = tuple((y[0] if isinstance(y, tuple) else y).shape) # model output shape
- metadata = {'stride': int(max(model.stride)), 'names': model.names} # model metadata
- LOGGER.info(f"\n{colorstr('PyTorch:')} starting from {file} with output shape {shape} ({file_size(file):.1f} MB)")
-
- # Exports
- f = [''] * len(fmts) # exported filenames
- warnings.filterwarnings(action='ignore', category=torch.jit.TracerWarning) # suppress TracerWarning
- if jit: # TorchScript
- f[0], _ = export_torchscript(model, im, file, optimize)
- if engine: # TensorRT required before ONNX
- f[1], _ = export_engine(model, im, file, half, dynamic, simplify, workspace, verbose)
- if onnx or xml: # OpenVINO requires ONNX
- f[2], _ = export_onnx(model, im, file, opset, dynamic, simplify)
- if xml: # OpenVINO
- f[3], _ = export_openvino(file, metadata, half)
- if coreml: # CoreML
- f[4], _ = export_coreml(model, im, file, int8, half)
- if any((saved_model, pb, tflite, edgetpu, tfjs)): # TensorFlow formats
- assert not tflite or not tfjs, 'TFLite and TF.js models must be exported separately, please pass only one type.'
- assert not isinstance(model, ClassificationModel), 'ClassificationModel export to TF formats not yet supported.'
- f[5], s_model = export_saved_model(model.cpu(),
- im,
- file,
- dynamic,
- tf_nms=nms or agnostic_nms or tfjs,
- agnostic_nms=agnostic_nms or tfjs,
- topk_per_class=topk_per_class,
- topk_all=topk_all,
- iou_thres=iou_thres,
- conf_thres=conf_thres,
- keras=keras)
- if pb or tfjs: # pb prerequisite to tfjs
- f[6], _ = export_pb(s_model, file)
- if tflite or edgetpu:
- f[7], _ = export_tflite(s_model, im, file, int8 or edgetpu, data=data, nms=nms, agnostic_nms=agnostic_nms)
- if edgetpu:
- f[8], _ = export_edgetpu(file)
- add_tflite_metadata(f[8] or f[7], metadata, num_outputs=len(s_model.outputs))
- if tfjs:
- f[9], _ = export_tfjs(file)
- if paddle: # PaddlePaddle
- f[10], _ = export_paddle(model, im, file, metadata)
-
- # Finish
- f = [str(x) for x in f if x] # filter out '' and None
- if any(f):
- cls, det, seg = (isinstance(model, x) for x in (ClassificationModel, DetectionModel, SegmentationModel)) # type
- det &= not seg # segmentation models inherit from SegmentationModel(DetectionModel)
- dir = Path('segment' if seg else 'classify' if cls else '')
- h = '--half' if half else '' # --half FP16 inference arg
- s = "# WARNING ⚠️ ClassificationModel not yet supported for PyTorch Hub AutoShape inference" if cls else \
- "# WARNING ⚠️ SegmentationModel not yet supported for PyTorch Hub AutoShape inference" if seg else ''
- LOGGER.info(f'\nExport complete ({time.time() - t:.1f}s)'
- f"\nResults saved to {colorstr('bold', file.parent.resolve())}"
- f"\nDetect: python {dir / ('detect.py' if det else 'predict.py')} --weights {f[-1]} {h}"
- f"\nValidate: python {dir / 'val.py'} --weights {f[-1]} {h}"
- f"\nPyTorch Hub: model = torch.hub.load('ultralytics/yolov5', 'custom', '{f[-1]}') {s}"
- f"\nVisualize: https://netron.app")
- return f # return list of exported files/dirs
-
-
-def parse_opt():
- parser = argparse.ArgumentParser()
- parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path')
- parser.add_argument('--weights', nargs='+', type=str, default=ROOT / 'yolov5s.pt', help='model.pt path(s)')
- parser.add_argument('--imgsz', '--img', '--img-size', nargs='+', type=int, default=[640, 640], help='image (h, w)')
- parser.add_argument('--batch-size', type=int, default=1, help='batch size')
- parser.add_argument('--device', default='cpu', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
- parser.add_argument('--half', action='store_true', help='FP16 half-precision export')
- parser.add_argument('--inplace', action='store_true', help='set YOLOv5 Detect() inplace=True')
- parser.add_argument('--keras', action='store_true', help='TF: use Keras')
- parser.add_argument('--optimize', action='store_true', help='TorchScript: optimize for mobile')
- parser.add_argument('--int8', action='store_true', help='CoreML/TF INT8 quantization')
- parser.add_argument('--dynamic', action='store_true', help='ONNX/TF/TensorRT: dynamic axes')
- parser.add_argument('--simplify', action='store_true', help='ONNX: simplify model')
- parser.add_argument('--opset', type=int, default=12, help='ONNX: opset version')
- parser.add_argument('--verbose', action='store_true', help='TensorRT: verbose log')
- parser.add_argument('--workspace', type=int, default=4, help='TensorRT: workspace size (GB)')
- parser.add_argument('--nms', action='store_true', help='TF: add NMS to model')
- parser.add_argument('--agnostic-nms', action='store_true', help='TF: add agnostic NMS to model')
- parser.add_argument('--topk-per-class', type=int, default=100, help='TF.js NMS: topk per class to keep')
- parser.add_argument('--topk-all', type=int, default=100, help='TF.js NMS: topk for all classes to keep')
- parser.add_argument('--iou-thres', type=float, default=0.45, help='TF.js NMS: IoU threshold')
- parser.add_argument('--conf-thres', type=float, default=0.25, help='TF.js NMS: confidence threshold')
- parser.add_argument(
- '--include',
- nargs='+',
- default=['torchscript'],
- help='torchscript, onnx, openvino, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs, paddle')
- opt = parser.parse_args()
- print_args(vars(opt))
- return opt
-
-
-def main(opt):
- for opt.weights in (opt.weights if isinstance(opt.weights, list) else [opt.weights]):
- run(**vars(opt))
-
-
-if __name__ == "__main__":
- opt = parse_opt()
- main(opt)
diff --git a/spaces/aphenx/bingo/src/components/chat-header.tsx b/spaces/aphenx/bingo/src/components/chat-header.tsx
deleted file mode 100644
index c6664b8dee61179f844d45c5bd650518fc2cb4c2..0000000000000000000000000000000000000000
--- a/spaces/aphenx/bingo/src/components/chat-header.tsx
+++ /dev/null
@@ -1,12 +0,0 @@
-import LogoIcon from '@/assets/images/logo.svg'
-import Image from 'next/image'
-
-export function ChatHeader() {
- return (
-
-
- 欢迎使用新必应
- 由 AI 支持的网页版 Copilot
-
- )
-}
diff --git a/spaces/aphenx/bingo/src/components/ui/dialog.tsx b/spaces/aphenx/bingo/src/components/ui/dialog.tsx
deleted file mode 100644
index 925e77fe7858fb218b5115b4e225174a886e0f02..0000000000000000000000000000000000000000
--- a/spaces/aphenx/bingo/src/components/ui/dialog.tsx
+++ /dev/null
@@ -1,128 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import * as DialogPrimitive from '@radix-ui/react-dialog'
-
-import { cn } from '@/lib/utils'
-import { IconClose } from '@/components/ui/icons'
-
-const Dialog = DialogPrimitive.Root
-
-const DialogTrigger = DialogPrimitive.Trigger
-
-const DialogPortal = ({
- className,
- children,
- ...props
-}: DialogPrimitive.DialogPortalProps) => (
-
-
- {children}
-
-
-)
-DialogPortal.displayName = DialogPrimitive.Portal.displayName
-
-const DialogOverlay = React.forwardRef<
- React.ElementRef<typeof DialogPrimitive.Overlay>,
- React.ComponentPropsWithoutRef<typeof DialogPrimitive.Overlay>
->(({ className, ...props }, ref) => (
-
-))
-DialogOverlay.displayName = DialogPrimitive.Overlay.displayName
-
-const DialogContent = React.forwardRef<
- React.ElementRef<typeof DialogPrimitive.Content>,
- React.ComponentPropsWithoutRef<typeof DialogPrimitive.Content>
->(({ className, children, ...props }, ref) => (
-
-
-
- {children}
-
-
- Close
-
-
-
-))
-DialogContent.displayName = DialogPrimitive.Content.displayName
-
-const DialogHeader = ({
- className,
- ...props
-}: React.HTMLAttributes) => (
-
-)
-DialogHeader.displayName = 'DialogHeader'
-
-const DialogFooter = ({
- className,
- ...props
-}: React.HTMLAttributes) => (
-
-)
-DialogFooter.displayName = 'DialogFooter'
-
-const DialogTitle = React.forwardRef<
- React.ElementRef<typeof DialogPrimitive.Title>,
- React.ComponentPropsWithoutRef<typeof DialogPrimitive.Title>
->(({ className, ...props }, ref) => (
-
-))
-DialogTitle.displayName = DialogPrimitive.Title.displayName
-
-const DialogDescription = React.forwardRef<
- React.ElementRef<typeof DialogPrimitive.Description>,
- React.ComponentPropsWithoutRef<typeof DialogPrimitive.Description>
->(({ className, ...props }, ref) => (
-
-))
-DialogDescription.displayName = DialogPrimitive.Description.displayName
-
-export {
- Dialog,
- DialogTrigger,
- DialogContent,
- DialogHeader,
- DialogFooter,
- DialogTitle,
- DialogDescription
-}
diff --git a/spaces/arjunpatel/best-selling-video-games/data_cleaning.py b/spaces/arjunpatel/best-selling-video-games/data_cleaning.py
deleted file mode 100644
index 6b01d69308b6a9d7be2c1509df2be38ba97af609..0000000000000000000000000000000000000000
--- a/spaces/arjunpatel/best-selling-video-games/data_cleaning.py
+++ /dev/null
@@ -1,92 +0,0 @@
-
-import pandas as pd
-import numpy as np
-import re
-
-from nltk.tokenize import word_tokenize, sent_tokenize
-from nltk.stem import PorterStemmer
-import nltk
-nltk.download('punkt')
-
-from textacy.preprocessing.remove import accents, brackets, punctuation
-from textacy.preprocessing.replace import numbers, urls
-from textacy.preprocessing.normalize import whitespace
-
-import os
-
-def clean_page(page):
- # given a page, removes heading, newlines, tabs, etc
- page = re.sub("=+", "", page)
- page = page.replace("\n", "")
- page = page.replace("\t", "")
- page = accents(brackets(page))
- page = urls(page)
-
- return whitespace(page).lower()
-
-def clean_sentences(s):
-
- pattern = r'[^A-Za-z0-9]+'
- cleaned = re.sub(pattern, '', s) # return the cleaned string, not the original
- return cleaned
-
-
-
-ps = PorterStemmer()
-def prepare_document(doc):
- # given a document, preprocesses and tokenizes it for tfidf
-
- # clean the document of misc symbols and headings, lowercase it
- doc = clean_page(doc)
-
- #tokenize by sentence and then by word
- sentences = sent_tokenize(doc)
-
- #remove punctuation
- sentences = [punctuation(s) for s in sentences]
-
-
- # stem every word
- sentences_and_words = [word_tokenize(s) for s in sentences]
-
- prepared_doc = []
-
- for sent in sentences_and_words:
- stemmed_sentences = []
- for word in sent:
- stemmed_sentences.append(ps.stem(word))
- cleaned_sentence = " ".join(stemmed_sentences)
- prepared_doc.append(cleaned_sentence)
- return " ".join(prepared_doc)
-
-
-# small function to calculate the cosine similarity of a pair of vectors; cos_dicts below stores it for all pairs
-def cosine_similarity(v1, v2):
- numerator = np.dot(v1, v2)
- denom = np.sqrt(np.sum(np.square(v1))) * np.sqrt(np.sum(np.square(v2)))
-
- return numerator/denom
-
-
-def cos_dicts(names, vects):
-
- #given a set of vectors, create a dict of dicts for cosine similarity
- # This dict of dict structure allows us to index directly into the pair we want
- # The first key will be our desired game
- # and the value for that key will be a dictionary of partner games
-
- # The inner key will be the second game we wish to seek, and its value will be cosine similarity to our first game
-
- d = {}
- for name, vect in zip(names, vects):
- cos_sim_by_vect = {}
- for n2, v2 in zip(names, vects):
- if n2 != name:
- cos_sim_by_vect[n2] = cosine_similarity(vect, v2)
- d[name] = cos_sim_by_vect
- return d
-
-def retrieve_top_k_similar(n1, similarity_dict, k):
- inner_dict = similarity_dict[n1]
- # sort the dictionary by value, descending, then retrieve top k values
- return sorted(inner_dict.items(), reverse = True, key = lambda x: x[1])[:k]
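
End to end, the helpers in this module clean and stem each page, vectorise the results (TF-IDF via scikit-learn is assumed here; it is not imported in this file), and then query the dict-of-dicts similarity index. A small illustrative run with made-up game titles and snippets:

from sklearn.feature_extraction.text import TfidfVectorizer

names = ['Minecraft', 'Tetris', 'Wii Sports']
pages = ['Minecraft is a sandbox game about building and mining.',
         'Tetris is a puzzle game about falling blocks.',
         'Wii Sports is a collection of motion-controlled sports games.']

docs = [prepare_document(p) for p in pages]           # clean + stem (defined above)
vects = TfidfVectorizer().fit_transform(docs).toarray()

similarities = cos_dicts(names, vects)                # dict of dicts, as described above
print(retrieve_top_k_similar('Minecraft', similarities, k=2))
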
diff --git a/spaces/arpitneema/ArpitTestBert/app.py b/spaces/arpitneema/ArpitTestBert/app.py
deleted file mode 100644
index 86e8e4eee04fa3374ce21c5a7662c323ac933696..0000000000000000000000000000000000000000
--- a/spaces/arpitneema/ArpitTestBert/app.py
+++ /dev/null
@@ -1,24 +0,0 @@
-from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, AutoModelForQuestionAnswering
-import gradio as gr
-import torch
-
-
-title = "🤖AI ChatBot"
-description = "A State-of-the-Art Large-scale Pretrained Response generation model (DialoGPT)"
-examples = [["How are you?"]]
-
-model = AutoModelForQuestionAnswering.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad",return_dict=False)
-tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased-whole-word-masking-finetuned-squad")
-nlp = pipeline("question-answering", model=model, tokenizer=tokenizer)
-
-# tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-large")
-# model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-large")
-
-
-def func(context, question):
- result = nlp(question = question, context=context)
- return result['answer']
-
-app = gr.Interface(fn=func, inputs = ['textbox', 'text'], outputs = 'textbox', title = 'Farm QA Bot', theme = 'dark-grass', description = 'Farm QA Bot')
-
-app.launch(inline=False)
\ No newline at end of file
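
The Gradio interface simply forwards its two text boxes to the question-answering pipeline, so func() can also be exercised without launching the UI; the context and question below are made up:

context = ("Crop rotation is the practice of growing a series of different crops "
           "in the same field across seasons to improve soil health and reduce pests.")
print(func(context, "Why do farmers rotate crops?"))
# The SQuAD-finetuned BERT model typically extracts a span such as
# "to improve soil health and reduce pests".
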
diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/delightful_tts/energy_adaptor.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/delightful_tts/energy_adaptor.py
deleted file mode 100644
index ea0d1e47214d81a42b934bbaaa4b3ebb9f63bcc6..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/delightful_tts/energy_adaptor.py
+++ /dev/null
@@ -1,82 +0,0 @@
-from typing import Callable, Tuple
-
-import torch
-import torch.nn as nn # pylint: disable=consider-using-from-import
-
-from TTS.tts.layers.delightful_tts.variance_predictor import VariancePredictor
-from TTS.tts.utils.helpers import average_over_durations
-
-
-class EnergyAdaptor(nn.Module): # pylint: disable=abstract-method
- """Variance Adaptor with an added 1D conv layer. Used to
- get energy embeddings.
-
- Args:
- channels_in (int): Number of in channels for conv layers.
- channels_out (int): Number of out channels.
- kernel_size (int): Size of the kernel for the conv layers.
- dropout (float): Probability of dropout.
- lrelu_slope (float): Slope for the leaky relu.
- emb_kernel_size (int): Size of the kernel for the energy embedding.
-
- Inputs: inputs, mask
- - **inputs** (batch, time1, dim): Tensor containing input vector
- - **target** (batch, 1, time2): Tensor containing the energy target
- - **dr** (batch, time1): Tensor containing aligner durations vector
- - **mask** (batch, time1): Tensor containing indices to be masked
- Returns:
- - **energy prediction** (batch, 1, time1): Tensor produced by energy predictor
- - **energy embedding** (batch, channels, time1): Tensor produced energy adaptor
- - **average energy target(train only)** (batch, 1, time1): Tensor produced after averaging over durations
-
- """
-
- def __init__(
- self,
- channels_in: int,
- channels_hidden: int,
- channels_out: int,
- kernel_size: int,
- dropout: float,
- lrelu_slope: float,
- emb_kernel_size: int,
- ):
- super().__init__()
- self.energy_predictor = VariancePredictor(
- channels_in=channels_in,
- channels=channels_hidden,
- channels_out=channels_out,
- kernel_size=kernel_size,
- p_dropout=dropout,
- lrelu_slope=lrelu_slope,
- )
- self.energy_emb = nn.Conv1d(
- 1,
- channels_hidden,
- kernel_size=emb_kernel_size,
- padding=int((emb_kernel_size - 1) / 2),
- )
-
- def get_energy_embedding_train(
- self, x: torch.Tensor, target: torch.Tensor, dr: torch.IntTensor, mask: torch.Tensor
- ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
- """
- Shapes:
- x: :math: `[B, T_src, C]`
- target: :math: `[B, 1, T_max2]`
- dr: :math: `[B, T_src]`
- mask: :math: `[B, T_src]`
- """
- energy_pred = self.energy_predictor(x, mask)
- energy_pred.unsqueeze_(1)
- avg_energy_target = average_over_durations(target, dr)
- energy_emb = self.energy_emb(avg_energy_target)
- return energy_pred, avg_energy_target, energy_emb
-
- def get_energy_embedding(self, x: torch.Tensor, mask: torch.Tensor, energy_transform: Callable) -> torch.Tensor:
- energy_pred = self.energy_predictor(x, mask)
- energy_pred.unsqueeze_(1)
- if energy_transform is not None:
- energy_pred = energy_transform(energy_pred, (~mask).sum(dim=(1, 2)), self.pitch_mean, self.pitch_std)
- energy_emb_pred = self.energy_emb(energy_pred)
- return energy_emb_pred, energy_pred
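
The key step in get_energy_embedding_train is pooling the frame-level energy target down to one value per input token via average_over_durations before it is embedded. A self-contained sketch of that pooling step (not the actual TTS helper):

import torch

def average_over_durations_sketch(values, durs):
    # values: [B, 1, T_frames] frame-level energy; durs: [B, T_src] integer durations.
    # Returns [B, 1, T_src]: the mean energy of the frames each input token spans.
    B = values.shape[0]
    out = torch.zeros(B, 1, durs.shape[1])
    for b in range(B):
        start = 0
        for t, d in enumerate(durs[b].tolist()):
            if d > 0:
                out[b, 0, t] = values[b, 0, start:start + d].mean()
            start += d
    return out

energy = torch.tensor([[[0.1, 0.2, 0.3, 0.9, 1.1, 1.0]]])   # [1, 1, 6] frames
durations = torch.tensor([[3, 3]])                          # two tokens, 3 frames each
print(average_over_durations_sketch(energy, durations))     # -> [[[0.2, 1.0]]]
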
diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/delightful_tts/networks.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/delightful_tts/networks.py
deleted file mode 100644
index 4305022f18cf95565b2da2553740276818fb486c..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/delightful_tts/networks.py
+++ /dev/null
@@ -1,219 +0,0 @@
-import math
-from typing import Tuple
-
-import numpy as np
-import torch
-import torch.nn as nn # pylint: disable=consider-using-from-import
-import torch.nn.functional as F
-
-from TTS.tts.layers.delightful_tts.conv_layers import ConvNorm
-
-
-def initialize_embeddings(shape: Tuple[int]) -> torch.Tensor:
- assert len(shape) == 2, "Can only initialize 2-D embedding matrices ..."
- # Kaiming initialization
- return torch.randn(shape) * np.sqrt(2 / shape[1])
-
-
-def positional_encoding(d_model: int, length: int, device: torch.device) -> torch.Tensor:
- pe = torch.zeros(length, d_model, device=device)
- position = torch.arange(0, length, dtype=torch.float, device=device).unsqueeze(1)
- div_term = torch.exp(torch.arange(0, d_model, 2, device=device).float() * -(math.log(10000.0) / d_model))
- pe[:, 0::2] = torch.sin(position * div_term)
- pe[:, 1::2] = torch.cos(position * div_term)
- pe = pe.unsqueeze(0)
- return pe
-
-
-class BottleneckLayer(nn.Module):
- """
- Bottleneck layer for reducing the dimensionality of a tensor.
-
- Args:
- in_dim: The number of input dimensions.
- reduction_factor: The factor by which to reduce the number of dimensions.
- norm: The normalization method to use. Can be "weightnorm" or "instancenorm".
- non_linearity: The non-linearity to use. Can be "relu" or "leakyrelu".
- kernel_size: The size of the convolutional kernel.
- use_partial_padding: Whether to use partial padding with the convolutional kernel.
-
- Shape:
- - Input: :math:`[N, in_dim]` where `N` is the batch size and `in_dim` is the number of input dimensions.
-
- - Output: :math:`[N, out_dim]` where `out_dim` is the number of output dimensions.
- """
-
- def __init__(
- self,
- in_dim,
- reduction_factor,
- norm="weightnorm",
- non_linearity="relu",
- kernel_size=3,
- use_partial_padding=False, # pylint: disable=unused-argument
- ):
- super(BottleneckLayer, self).__init__() # pylint: disable=super-with-arguments
-
- self.reduction_factor = reduction_factor
- reduced_dim = int(in_dim / reduction_factor)
- self.out_dim = reduced_dim
- if self.reduction_factor > 1:
- fn = ConvNorm(in_dim, reduced_dim, kernel_size=kernel_size, use_weight_norm=(norm == "weightnorm"))
- if norm == "instancenorm":
- fn = nn.Sequential(fn, nn.InstanceNorm1d(reduced_dim, affine=True))
-
- self.projection_fn = fn
- self.non_linearity = nn.ReLU()
- if non_linearity == "leakyrelu":
- self.non_linearity = nn.LeakyReLU()
-
- def forward(self, x):
- if self.reduction_factor > 1:
- x = self.projection_fn(x)
- x = self.non_linearity(x)
- return x
-
-
-class GLUActivation(nn.Module):
- """Class that implements the Gated Linear Unit (GLU) activation function.
-
- The input is split into two halves along the channel dimension; one half is
- passed through a LeakyReLU and used to gate the other half.
-
- """
-
- def __init__(self, slope: float):
- super().__init__()
- self.lrelu = nn.LeakyReLU(slope)
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- out, gate = x.chunk(2, dim=1)
- x = out * self.lrelu(gate)
- return x
-
-
-class StyleEmbedAttention(nn.Module):
- def __init__(self, query_dim: int, key_dim: int, num_units: int, num_heads: int):
- super().__init__()
- self.num_units = num_units
- self.num_heads = num_heads
- self.key_dim = key_dim
-
- self.W_query = nn.Linear(in_features=query_dim, out_features=num_units, bias=False)
- self.W_key = nn.Linear(in_features=key_dim, out_features=num_units, bias=False)
- self.W_value = nn.Linear(in_features=key_dim, out_features=num_units, bias=False)
-
- def forward(self, query: torch.Tensor, key_soft: torch.Tensor) -> torch.Tensor:
- values = self.W_value(key_soft)
- split_size = self.num_units // self.num_heads
- values = torch.stack(torch.split(values, split_size, dim=2), dim=0)
-
- out_soft = scores_soft = None
- querys = self.W_query(query) # [N, T_q, num_units]
- keys = self.W_key(key_soft) # [N, T_k, num_units]
-
- # [h, N, T_q, num_units/h]
- querys = torch.stack(torch.split(querys, split_size, dim=2), dim=0)
- # [h, N, T_k, num_units/h]
- keys = torch.stack(torch.split(keys, split_size, dim=2), dim=0)
- # [h, N, T_k, num_units/h]
-
- # score = softmax(QK^T / (d_k ** 0.5))
- scores_soft = torch.matmul(querys, keys.transpose(2, 3)) # [h, N, T_q, T_k]
- scores_soft = scores_soft / (self.key_dim**0.5)
- scores_soft = F.softmax(scores_soft, dim=3)
-
- # out = score * V
- # [h, N, T_q, num_units/h]
- out_soft = torch.matmul(scores_soft, values)
- out_soft = torch.cat(torch.split(out_soft, 1, dim=0), dim=3).squeeze(0) # [N, T_q, num_units]
-
- return out_soft # , scores_soft
-
-
-class EmbeddingPadded(nn.Module):
- def __init__(self, num_embeddings: int, embedding_dim: int, padding_idx: int):
- super().__init__()
- padding_mult = torch.ones((num_embeddings, 1), dtype=torch.int64)
- padding_mult[padding_idx] = 0
- self.register_buffer("padding_mult", padding_mult)
- self.embeddings = nn.parameter.Parameter(initialize_embeddings((num_embeddings, embedding_dim)))
-
- def forward(self, idx: torch.Tensor) -> torch.Tensor:
- embeddings_zeroed = self.embeddings * self.padding_mult
- x = F.embedding(idx, embeddings_zeroed)
- return x
-
-
-class EmbeddingProjBlock(nn.Module):
- def __init__(self, embedding_dim: int):
- super().__init__()
- self.layers = nn.ModuleList(
- [
- nn.Linear(embedding_dim, embedding_dim),
- nn.LeakyReLU(0.3),
- nn.Linear(embedding_dim, embedding_dim),
- nn.LeakyReLU(0.3),
- ]
- )
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- res = x
- for layer in self.layers:
- x = layer(x)
- x = x + res
- return x
-
-
-class LinearNorm(nn.Module):
- def __init__(self, in_features: int, out_features: int, bias: bool = False):
- super().__init__()
- self.linear = nn.Linear(in_features, out_features, bias)
-
- nn.init.xavier_uniform_(self.linear.weight)
- if bias:
- nn.init.constant_(self.linear.bias, 0.0)
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- x = self.linear(x)
- return x
-
-
-class STL(nn.Module):
- """
- A PyTorch module for the Style Token Layer (STL) as described in
- "A Style-Based Generator Architecture for Generative Adversarial Networks"
- (https://arxiv.org/abs/1812.04948)
-
- The STL applies a multi-headed attention mechanism over the learned style tokens,
- using the text input as the query and the style tokens as the keys and values.
- The output of the attention mechanism is used as the text's style embedding.
-
- Args:
- token_num (int): The number of style tokens.
- n_hidden (int): Number of hidden dimensions.
- """
-
- def __init__(self, n_hidden: int, token_num: int):
- super(STL, self).__init__() # pylint: disable=super-with-arguments
-
- num_heads = 1
- E = n_hidden
- self.token_num = token_num
- self.embed = nn.Parameter(torch.FloatTensor(self.token_num, E // num_heads))
- d_q = E // 2
- d_k = E // num_heads
- self.attention = StyleEmbedAttention(query_dim=d_q, key_dim=d_k, num_units=E, num_heads=num_heads)
-
- torch.nn.init.normal_(self.embed, mean=0, std=0.5)
-
- def forward(self, x: torch.Tensor) -> torch.Tensor:
- N = x.size(0)
- query = x.unsqueeze(1) # [N, 1, E//2]
-
- keys_soft = torch.tanh(self.embed).unsqueeze(0).expand(N, -1, -1) # [N, token_num, E // num_heads]
-
- # Weighted sum
- emotion_embed_soft = self.attention(query, keys_soft)
-
- return emotion_embed_soft
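
A dummy forward pass makes the STL shapes concrete; the sizes below (n_hidden=128, 10 tokens, batch of 4) are illustrative values, not DelightfulTTS configuration defaults:

import torch

stl = STL(n_hidden=128, token_num=10)
x = torch.randn(4, 64)                 # [N, E//2] reference/style features
style = stl(x)
print(style.shape)                     # torch.Size([4, 1, 128])
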
diff --git a/spaces/artificialguybr/video-dubbing/Wav2Lip/models/wav2lip.py b/spaces/artificialguybr/video-dubbing/Wav2Lip/models/wav2lip.py
deleted file mode 100644
index ae5d6919169ec497f0f0815184f5db8ba9108fbd..0000000000000000000000000000000000000000
--- a/spaces/artificialguybr/video-dubbing/Wav2Lip/models/wav2lip.py
+++ /dev/null
@@ -1,184 +0,0 @@
-import torch
-from torch import nn
-from torch.nn import functional as F
-import math
-
-from .conv import Conv2dTranspose, Conv2d, nonorm_Conv2d
-
-class Wav2Lip(nn.Module):
- def __init__(self):
- super(Wav2Lip, self).__init__()
-
- self.face_encoder_blocks = nn.ModuleList([
- nn.Sequential(Conv2d(6, 16, kernel_size=7, stride=1, padding=3)), # 96,96
-
- nn.Sequential(Conv2d(16, 32, kernel_size=3, stride=2, padding=1), # 48,48
- Conv2d(32, 32, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(32, 32, kernel_size=3, stride=1, padding=1, residual=True)),
-
- nn.Sequential(Conv2d(32, 64, kernel_size=3, stride=2, padding=1), # 24,24
- Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True)),
-
- nn.Sequential(Conv2d(64, 128, kernel_size=3, stride=2, padding=1), # 12,12
- Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True)),
-
- nn.Sequential(Conv2d(128, 256, kernel_size=3, stride=2, padding=1), # 6,6
- Conv2d(256, 256, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(256, 256, kernel_size=3, stride=1, padding=1, residual=True)),
-
- nn.Sequential(Conv2d(256, 512, kernel_size=3, stride=2, padding=1), # 3,3
- Conv2d(512, 512, kernel_size=3, stride=1, padding=1, residual=True),),
-
- nn.Sequential(Conv2d(512, 512, kernel_size=3, stride=1, padding=0), # 1, 1
- Conv2d(512, 512, kernel_size=1, stride=1, padding=0)),])
-
- self.audio_encoder = nn.Sequential(
- Conv2d(1, 32, kernel_size=3, stride=1, padding=1),
- Conv2d(32, 32, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(32, 32, kernel_size=3, stride=1, padding=1, residual=True),
-
- Conv2d(32, 64, kernel_size=3, stride=(3, 1), padding=1),
- Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True),
-
- Conv2d(64, 128, kernel_size=3, stride=3, padding=1),
- Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True),
-
- Conv2d(128, 256, kernel_size=3, stride=(3, 2), padding=1),
- Conv2d(256, 256, kernel_size=3, stride=1, padding=1, residual=True),
-
- Conv2d(256, 512, kernel_size=3, stride=1, padding=0),
- Conv2d(512, 512, kernel_size=1, stride=1, padding=0),)
-
- self.face_decoder_blocks = nn.ModuleList([
- nn.Sequential(Conv2d(512, 512, kernel_size=1, stride=1, padding=0),),
-
- nn.Sequential(Conv2dTranspose(1024, 512, kernel_size=3, stride=1, padding=0), # 3,3
- Conv2d(512, 512, kernel_size=3, stride=1, padding=1, residual=True),),
-
- nn.Sequential(Conv2dTranspose(1024, 512, kernel_size=3, stride=2, padding=1, output_padding=1),
- Conv2d(512, 512, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(512, 512, kernel_size=3, stride=1, padding=1, residual=True),), # 6, 6
-
- nn.Sequential(Conv2dTranspose(768, 384, kernel_size=3, stride=2, padding=1, output_padding=1),
- Conv2d(384, 384, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(384, 384, kernel_size=3, stride=1, padding=1, residual=True),), # 12, 12
-
- nn.Sequential(Conv2dTranspose(512, 256, kernel_size=3, stride=2, padding=1, output_padding=1),
- Conv2d(256, 256, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(256, 256, kernel_size=3, stride=1, padding=1, residual=True),), # 24, 24
-
- nn.Sequential(Conv2dTranspose(320, 128, kernel_size=3, stride=2, padding=1, output_padding=1),
- Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True),), # 48, 48
-
- nn.Sequential(Conv2dTranspose(160, 64, kernel_size=3, stride=2, padding=1, output_padding=1),
- Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True),
- Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True),),]) # 96,96
-
- self.output_block = nn.Sequential(Conv2d(80, 32, kernel_size=3, stride=1, padding=1),
- nn.Conv2d(32, 3, kernel_size=1, stride=1, padding=0),
- nn.Sigmoid())
-
- def forward(self, audio_sequences, face_sequences):
- # audio_sequences = (B, T, 1, 80, 16)
- B = audio_sequences.size(0)
-
- input_dim_size = len(face_sequences.size())
- if input_dim_size > 4:
- audio_sequences = torch.cat([audio_sequences[:, i] for i in range(audio_sequences.size(1))], dim=0)
- face_sequences = torch.cat([face_sequences[:, :, i] for i in range(face_sequences.size(2))], dim=0)
-
- audio_embedding = self.audio_encoder(audio_sequences) # B, 512, 1, 1
-
- feats = []
- x = face_sequences
- for f in self.face_encoder_blocks:
- x = f(x)
- feats.append(x)
-
- x = audio_embedding
- for f in self.face_decoder_blocks:
- x = f(x)
- try:
- x = torch.cat((x, feats[-1]), dim=1)
- except Exception as e:
- print(x.size())
- print(feats[-1].size())
- raise e
-
- feats.pop()
-
- x = self.output_block(x)
-
- if input_dim_size > 4:
- x = torch.split(x, B, dim=0) # [(B, C, H, W)]
- outputs = torch.stack(x, dim=2) # (B, C, T, H, W)
-
- else:
- outputs = x
-
- return outputs
-
-class Wav2Lip_disc_qual(nn.Module):
- def __init__(self):
- super(Wav2Lip_disc_qual, self).__init__()
-
- self.face_encoder_blocks = nn.ModuleList([
- nn.Sequential(nonorm_Conv2d(3, 32, kernel_size=7, stride=1, padding=3)), # 48,96
-
- nn.Sequential(nonorm_Conv2d(32, 64, kernel_size=5, stride=(1, 2), padding=2), # 48,48
- nonorm_Conv2d(64, 64, kernel_size=5, stride=1, padding=2)),
-
- nn.Sequential(nonorm_Conv2d(64, 128, kernel_size=5, stride=2, padding=2), # 24,24
- nonorm_Conv2d(128, 128, kernel_size=5, stride=1, padding=2)),
-
- nn.Sequential(nonorm_Conv2d(128, 256, kernel_size=5, stride=2, padding=2), # 12,12
- nonorm_Conv2d(256, 256, kernel_size=5, stride=1, padding=2)),
-
- nn.Sequential(nonorm_Conv2d(256, 512, kernel_size=3, stride=2, padding=1), # 6,6
- nonorm_Conv2d(512, 512, kernel_size=3, stride=1, padding=1)),
-
- nn.Sequential(nonorm_Conv2d(512, 512, kernel_size=3, stride=2, padding=1), # 3,3
- nonorm_Conv2d(512, 512, kernel_size=3, stride=1, padding=1),),
-
- nn.Sequential(nonorm_Conv2d(512, 512, kernel_size=3, stride=1, padding=0), # 1, 1
- nonorm_Conv2d(512, 512, kernel_size=1, stride=1, padding=0)),])
-
- self.binary_pred = nn.Sequential(nn.Conv2d(512, 1, kernel_size=1, stride=1, padding=0), nn.Sigmoid())
- self.label_noise = .0
-
- def get_lower_half(self, face_sequences):
- return face_sequences[:, :, face_sequences.size(2)//2:]
-
- def to_2d(self, face_sequences):
- B = face_sequences.size(0)
- face_sequences = torch.cat([face_sequences[:, :, i] for i in range(face_sequences.size(2))], dim=0)
- return face_sequences
-
- def perceptual_forward(self, false_face_sequences):
- false_face_sequences = self.to_2d(false_face_sequences)
- false_face_sequences = self.get_lower_half(false_face_sequences)
-
- false_feats = false_face_sequences
- for f in self.face_encoder_blocks:
- false_feats = f(false_feats)
-
- false_pred_loss = F.binary_cross_entropy(self.binary_pred(false_feats).view(len(false_feats), -1),
- torch.ones((len(false_feats), 1)).cuda())
-
- return false_pred_loss
-
- def forward(self, face_sequences):
- face_sequences = self.to_2d(face_sequences)
- face_sequences = self.get_lower_half(face_sequences)
-
- x = face_sequences
- for f in self.face_encoder_blocks:
- x = f(x)
-
- return self.binary_pred(x).view(len(x), -1)
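A rough shape-check sketch for the deleted Wav2Lip generator above (illustration only; it assumes the custom Conv2d/Conv2dTranspose blocks from models/conv.py in the same repo are importable). Input shapes follow the comments in forward(): mel chunks of size (B, T, 1, 80, 16) and face windows of size (B, 6, T, 96, 96).

    import torch

    model = Wav2Lip().eval()
    B, T = 2, 5
    audio = torch.randn(B, T, 1, 80, 16)   # per the comment in forward()
    faces = torch.randn(B, 6, T, 96, 96)   # reference + masked frames stacked on channels
    with torch.no_grad():
        out = model(audio, faces)
    print(out.shape)                        # expected (B, 3, T, 96, 96)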
diff --git a/spaces/avivdm1/AutoGPT/autogpt/setup.py b/spaces/avivdm1/AutoGPT/autogpt/setup.py
deleted file mode 100644
index bfa68201b62bf67230a61fb1ecb00d1ab0ef0631..0000000000000000000000000000000000000000
--- a/spaces/avivdm1/AutoGPT/autogpt/setup.py
+++ /dev/null
@@ -1,77 +0,0 @@
-"""Set up the AI and its goals"""
-from colorama import Fore, Style
-
-from autogpt import utils
-from autogpt.config.ai_config import AIConfig
-from autogpt.logs import logger
-
-
-def prompt_user() -> AIConfig:
- """Prompt the user for input
-
- Returns:
- AIConfig: The AIConfig object containing the user's input
- """
- ai_name = ""
- # Construct the prompt
- logger.typewriter_log(
- "Welcome to Auto-GPT! ",
- Fore.GREEN,
- "run with '--help' for more information.",
- speak_text=True,
- )
-
- logger.typewriter_log(
- "Create an AI-Assistant:",
- Fore.GREEN,
- "Enter the name of your AI and its role below. Entering nothing will load"
- " defaults.",
- speak_text=True,
- )
-
- # Get AI Name from User
- logger.typewriter_log(
- "Name your AI: ", Fore.GREEN, "For example, 'Entrepreneur-GPT'"
- )
- ai_name = utils.clean_input("AI Name: ")
- if ai_name == "":
- ai_name = "Entrepreneur-GPT"
-
- logger.typewriter_log(
- f"{ai_name} here!", Fore.LIGHTBLUE_EX, "I am at your service.", speak_text=True
- )
-
- # Get AI Role from User
- logger.typewriter_log(
- "Describe your AI's role: ",
- Fore.GREEN,
- "For example, 'an AI designed to autonomously develop and run businesses with"
- " the sole goal of increasing your net worth.'",
- )
- ai_role = utils.clean_input(f"{ai_name} is: ")
- if ai_role == "":
- ai_role = "an AI designed to autonomously develop and run businesses with the"
- " sole goal of increasing your net worth."
-
- # Enter up to 5 goals for the AI
- logger.typewriter_log(
- "Enter up to 5 goals for your AI: ",
- Fore.GREEN,
- "For example: \nIncrease net worth, Grow Twitter Account, Develop and manage"
- " multiple businesses autonomously'",
- )
- print("Enter nothing to load defaults, enter nothing when finished.", flush=True)
- ai_goals = []
- for i in range(5):
- ai_goal = utils.clean_input(f"{Fore.LIGHTBLUE_EX}Goal{Style.RESET_ALL} {i+1}: ")
- if ai_goal == "":
- break
- ai_goals.append(ai_goal)
- if not ai_goals:
- ai_goals = [
- "Increase net worth",
- "Grow Twitter Account",
- "Develop and manage multiple businesses autonomously",
- ]
-
- return AIConfig(ai_name, ai_role, ai_goals)
diff --git a/spaces/awacke1/Media-Pipe-Facial-Mesh-Matching-3D/app.py b/spaces/awacke1/Media-Pipe-Facial-Mesh-Matching-3D/app.py
deleted file mode 100644
index bb6a4a4e1fde95fc933b67df8fc7c260af2d7ebb..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Media-Pipe-Facial-Mesh-Matching-3D/app.py
+++ /dev/null
@@ -1,119 +0,0 @@
-
-from __future__ import annotations
-
-import os
-import pathlib
-import shlex
-import subprocess
-import tarfile
-
-if os.environ.get('SYSTEM') == 'spaces':
- subprocess.call(shlex.split('pip uninstall -y opencv-python'))
- subprocess.call(shlex.split('pip uninstall -y opencv-python-headless'))
- subprocess.call(
- shlex.split('pip install opencv-python-headless==4.5.5.64'))
-
-import gradio as gr
-import huggingface_hub
-import mediapipe as mp
-import numpy as np
-
-mp_drawing = mp.solutions.drawing_utils
-mp_drawing_styles = mp.solutions.drawing_styles
-mp_face_mesh = mp.solutions.face_mesh
-
-TITLE = 'MediaPipe Face Mesh'
-DESCRIPTION = 'https://google.github.io/mediapipe/'
-
-HF_TOKEN = os.getenv('HF_TOKEN')
-
-
-def load_sample_images() -> list[pathlib.Path]:
- image_dir = pathlib.Path('images')
- if not image_dir.exists():
- image_dir.mkdir()
- dataset_repo = 'hysts/input-images'
- filenames = ['001.tar', '005.tar']
- for name in filenames:
- path = huggingface_hub.hf_hub_download(dataset_repo,
- name,
- repo_type='dataset',
- use_auth_token=HF_TOKEN)
- with tarfile.open(path) as f:
- f.extractall(image_dir.as_posix())
- return sorted(image_dir.rglob('*.jpg'))
-
-
-def run(
- image: np.ndarray,
- max_num_faces: int,
- min_detection_confidence: float,
- show_tesselation: bool,
- show_contours: bool,
- show_irises: bool,
-) -> np.ndarray:
- with mp_face_mesh.FaceMesh(
- static_image_mode=True,
- max_num_faces=max_num_faces,
- refine_landmarks=True,
- min_detection_confidence=min_detection_confidence) as face_mesh:
- results = face_mesh.process(image)
-
- res = image[:, :, ::-1].copy()
- if results.multi_face_landmarks is not None:
- for face_landmarks in results.multi_face_landmarks:
- if show_tesselation:
- mp_drawing.draw_landmarks(
- image=res,
- landmark_list=face_landmarks,
- connections=mp_face_mesh.FACEMESH_TESSELATION,
- landmark_drawing_spec=None,
- connection_drawing_spec=mp_drawing_styles.
- get_default_face_mesh_tesselation_style())
- if show_contours:
- mp_drawing.draw_landmarks(
- image=res,
- landmark_list=face_landmarks,
- connections=mp_face_mesh.FACEMESH_CONTOURS,
- landmark_drawing_spec=None,
- connection_drawing_spec=mp_drawing_styles.
- get_default_face_mesh_contours_style())
- if show_irises:
- mp_drawing.draw_landmarks(
- image=res,
- landmark_list=face_landmarks,
- connections=mp_face_mesh.FACEMESH_IRISES,
- landmark_drawing_spec=None,
- connection_drawing_spec=mp_drawing_styles.
- get_default_face_mesh_iris_connections_style())
-
- return res[:, :, ::-1]
-
-
-image_paths = load_sample_images()
-examples = [[path.as_posix(), 5, 0.5, True, True, True]
- for path in image_paths]
-
-gr.Interface(
- fn=run,
- inputs=[
- gr.Image(label='Input', type='numpy'),
- gr.Slider(label='Max Number of Faces',
- minimum=0,
- maximum=10,
- step=1,
- value=5),
- gr.Slider(label='Minimum Detection Confidence',
- minimum=0,
- maximum=1,
- step=0.05,
- value=0.5),
- gr.Checkbox(label='Show Tesselation', value=True),
- gr.Checkbox(label='Show Contours', value=True),
- gr.Checkbox(label='Show Irises', value=True),
- ],
- outputs=gr.Image(label='Output', type='numpy'),
- examples=examples,
- title=TITLE,
- description=DESCRIPTION,
-).launch(show_api=False)
\ No newline at end of file
diff --git a/spaces/awacke1/RLHF.Reinforce.Learn.With.Human.Feedback/app.py b/spaces/awacke1/RLHF.Reinforce.Learn.With.Human.Feedback/app.py
deleted file mode 100644
index 0f1b348df59042cc78f1e737452803efc48fb253..0000000000000000000000000000000000000000
--- a/spaces/awacke1/RLHF.Reinforce.Learn.With.Human.Feedback/app.py
+++ /dev/null
@@ -1,58 +0,0 @@
-import streamlit as st
-import pandas as pd
-from py_thesaurus import Thesaurus
-import random
-import os.path
-
-def generate_sentence():
- words = ["apple", "banana", "grape", "orange", "watermelon", "pineapple", "cherry", "strawberry", "blueberry", "mango"]
- random_words = random.sample(words, 3)
- question = f"What did the {random_words[0]} say to the {random_words[1]}?"
- answer = f"The {random_words[0]} said, 'Let's hang out with the {random_words[2]}!'"
- context = f"In the context of a fruit gathering, the {random_words[0]}, {random_words[1]}, and {random_words[2]} were having fun."
- return f"{question} {answer} {context}"
-
-def replace_with_synonym(sentence):
- words = sentence.split()
- index = random.randint(0, len(words) - 1)
- word = words[index]
- synonyms = Thesaurus(word).get_synonym()
- if synonyms:
- replacement = random.choice(synonyms)
- words[index] = replacement
- return ' '.join(words)
-
-def load_or_create_scoreboard(filename):
- if os.path.isfile(filename):
- return pd.read_csv(filename)
- else:
- scoreboard = pd.DataFrame({'Upvotes': [0], 'Downvotes': [0]})
- scoreboard.to_csv(filename, index=False)
- return scoreboard
-
-def update_scoreboard(scoreboard, thumbs_up, thumbs_down):
- if thumbs_up:
- scoreboard.loc[0, 'Upvotes'] += 1
- elif thumbs_down:
- scoreboard.loc[0, 'Downvotes'] += 1
- return scoreboard
-
-def main():
- filename = 'output.csv'
- scoreboard = load_or_create_scoreboard(filename)
- st.title('Joke Parts Voting Game')
- thumbs_up = st.button('👍')
- thumbs_down = st.button('👎')
- scoreboard = update_scoreboard(scoreboard, thumbs_up, thumbs_down)
- scoreboard.to_csv(filename, index=False)
- col1, col2 = st.columns(2)
- with col1:
- st.write(f'👍 {scoreboard.loc[0, "Upvotes"]}')
- with col2:
- st.write(f'👎 {scoreboard.loc[0, "Downvotes"]}')
- original_text = generate_sentence()
- modified_text = replace_with_synonym(original_text)
- st.write(f'🤣 {modified_text}')
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/awacke1/RealTimeLiveSentimentAnalyzer/README.md b/spaces/awacke1/RealTimeLiveSentimentAnalyzer/README.md
deleted file mode 100644
index a575ddca8cc7277450df7db509faafd32364797d..0000000000000000000000000000000000000000
--- a/spaces/awacke1/RealTimeLiveSentimentAnalyzer/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: RealTimeLiveSentimentAnalyzer
-emoji: 🐠
-colorFrom: pink
-colorTo: green
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/awacke1/SimPhysics/style.css b/spaces/awacke1/SimPhysics/style.css
deleted file mode 100644
index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000
--- a/spaces/awacke1/SimPhysics/style.css
+++ /dev/null
@@ -1,28 +0,0 @@
-body {
- padding: 2rem;
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
-}
-
-h1 {
- font-size: 16px;
- margin-top: 0;
-}
-
-p {
- color: rgb(107, 114, 128);
- font-size: 15px;
- margin-bottom: 10px;
- margin-top: 5px;
-}
-
-.card {
- max-width: 620px;
- margin: 0 auto;
- padding: 16px;
- border: 1px solid lightgray;
- border-radius: 16px;
-}
-
-.card p:last-child {
- margin-bottom: 0;
-}
diff --git a/spaces/awacke1/Streamlit.Funny.Feedback.Upvote.Downvote/README.md b/spaces/awacke1/Streamlit.Funny.Feedback.Upvote.Downvote/README.md
deleted file mode 100644
index 53d69e495d9d351edc3304787a59685d13c2d943..0000000000000000000000000000000000000000
--- a/spaces/awacke1/Streamlit.Funny.Feedback.Upvote.Downvote/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Streamlit.Funny.Feedback.Upvote.Downvote
-emoji: 🚀
-colorFrom: purple
-colorTo: red
-sdk: streamlit
-sdk_version: 1.17.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/badayvedat/LLaVA/llava/eval/eval_gpt_review_visual.py b/spaces/badayvedat/LLaVA/llava/eval/eval_gpt_review_visual.py
deleted file mode 100644
index d6e407a400a67020d801e6c27a3c32a2ee38f30c..0000000000000000000000000000000000000000
--- a/spaces/badayvedat/LLaVA/llava/eval/eval_gpt_review_visual.py
+++ /dev/null
@@ -1,118 +0,0 @@
-import argparse
-import json
-import os
-
-import openai
-import time
-
-NUM_SECONDS_TO_SLEEP = 0.5
-
-
-def get_eval(content: str, max_tokens: int):
- while True:
- try:
- response = openai.ChatCompletion.create(
- model='gpt-4-0314',
- messages=[{
- 'role': 'system',
- 'content': 'You are a helpful and precise assistant for checking the quality of the answer.'
- }, {
- 'role': 'user',
- 'content': content,
- }],
- temperature=0.2, # TODO: figure out which temperature is best for evaluation
- max_tokens=max_tokens,
- )
- break
- except openai.error.RateLimitError:
- pass
- except Exception as e:
- print(e)
- time.sleep(NUM_SECONDS_TO_SLEEP)
-
- return response['choices'][0]['message']['content']
-
-
-def parse_score(review):
- try:
- score_pair = review.split('\n')[0]
- score_pair = score_pair.replace(',', ' ')
- sp = score_pair.split(' ')
- if len(sp) == 2:
- return [float(sp[0]), float(sp[1])]
- else:
- print('error', review)
- return [-1, -1]
- except Exception as e:
- print(e)
- print('error', review)
- return [-1, -1]
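Quick illustration of the review format parse_score() assumes (an assumption inferred from the code above, not from actual GPT-4 output): the first line of the review must carry the two numeric scores.

    print(parse_score("8 7\nAssistant 1 answered the question more completely ..."))   # [8.0, 7.0]
    print(parse_score("I cannot rate these answers."))   # prints the error and returns [-1, -1]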
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser(description='ChatGPT-based QA evaluation.')
- parser.add_argument('-q', '--question')
- parser.add_argument('-c', '--context')
- parser.add_argument('-a', '--answer-list', nargs='+', default=[])
- parser.add_argument('-r', '--rule')
- parser.add_argument('-o', '--output')
- parser.add_argument('--max-tokens', type=int, default=1024, help='maximum number of tokens produced in the output')
- args = parser.parse_args()
-
- f_q = open(os.path.expanduser(args.question))
- f_ans1 = open(os.path.expanduser(args.answer_list[0]))
- f_ans2 = open(os.path.expanduser(args.answer_list[1]))
- rule_dict = json.load(open(os.path.expanduser(args.rule), 'r'))
-
- if os.path.isfile(os.path.expanduser(args.output)):
- cur_reviews = [json.loads(line) for line in open(os.path.expanduser(args.output))]
- else:
- cur_reviews = []
-
- review_file = open(f'{args.output}', 'a')
-
- context_list = [json.loads(line) for line in open(os.path.expanduser(args.context))]
- image_to_context = {context['image']: context for context in context_list}
-
- handles = []
- idx = 0
- for ques_js, ans1_js, ans2_js in zip(f_q, f_ans1, f_ans2):
- ques = json.loads(ques_js)
- ans1 = json.loads(ans1_js)
- ans2 = json.loads(ans2_js)
-
- inst = image_to_context[ques['image']]
- cap_str = '\n'.join(inst['captions'])
- box_str = '\n'.join([f'{instance["category"]}: {instance["bbox"]}' for instance in inst['instances']])
-
- category = json.loads(ques_js)['category']
- if category in rule_dict:
- rule = rule_dict[category]
- else:
- assert False, f"Visual QA category not found in rule file: {category}."
- prompt = rule['prompt']
- role = rule['role']
- content = (f'[Context]\n{cap_str}\n\n{box_str}\n\n'
- f'[Question]\n{ques["text"]}\n\n'
- f'[{role} 1]\n{ans1["text"]}\n\n[End of {role} 1]\n\n'
- f'[{role} 2]\n{ans2["text"]}\n\n[End of {role} 2]\n\n'
- f'[System]\n{prompt}\n\n')
- cur_js = {
- 'id': idx+1,
- 'question_id': ques['question_id'],
- 'answer1_id': ans1.get('answer_id', ans1['question_id']),
-            'answer2_id': ans2.get('answer_id', ans2['question_id']),
- 'category': category
- }
- if idx >= len(cur_reviews):
- review = get_eval(content, args.max_tokens)
- scores = parse_score(review)
- cur_js['content'] = review
- cur_js['tuple'] = scores
- review_file.write(json.dumps(cur_js) + '\n')
- review_file.flush()
- else:
- print(f'Skipping {idx} as we already have it.')
- idx += 1
- print(idx)
- review_file.close()
diff --git a/spaces/balaramas/s2t_translator/README.md b/spaces/balaramas/s2t_translator/README.md
deleted file mode 100644
index 1aac9db0aa16a2c8ddd6a243068e3f4bfae332dc..0000000000000000000000000000000000000000
--- a/spaces/balaramas/s2t_translator/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: S2t Translator
-emoji: 📉
-colorFrom: green
-colorTo: gray
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
-license: other
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/banana-projects/web3d/src/index.ts b/spaces/banana-projects/web3d/src/index.ts
deleted file mode 100644
index 899451dd248575f491a1ee6902d4cb9531fd99bc..0000000000000000000000000000000000000000
--- a/spaces/banana-projects/web3d/src/index.ts
+++ /dev/null
@@ -1,306 +0,0 @@
-import * as THREE from 'three';
-import * as TWEEN from '@tweenjs/tween.js';
-
-const scene = new THREE.Scene();
-scene.background = new THREE.Color(
- // 0xcccccc
- 'white'
-);
-const clock = new THREE.Clock();
-const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 2000);
-camera.position.set(0, 30, 50);
-camera.lookAt(0, 3, 0);
-const controls = new (THREE).OrbitControls(camera);
-const ambientLight = new THREE.AmbientLight(0xffffff, 1);
-scene.add(ambientLight);
-const renderer = new THREE.WebGLRenderer({ antialias: true });
-renderer.setPixelRatio(window.devicePixelRatio);
-renderer.setSize(window.innerWidth, window.innerHeight);
-document.body.appendChild(renderer.domElement);
-const stats = new Stats();
-document.body.appendChild(stats.dom);
-/// Anim mixer
-const mixers: THREE.AnimationMixer[] = [];
-
-class Assets {
- private static loadEggMtl(): Promise {
- return new Promise((resolve, reject) => {
- const loader: THREE.AnyLoader = new (THREE).MTLLoader();
- loader.load(
- `models/Egg_from_Poly_uovo/Egg from Poly uovo.mtl`,
- (materials) => {
- materials.preload();
- resolve(materials);
- },
- (xhr) => {},
- reject
- );
- });
- }
- private static loadEggObj(materials: THREE.Material[]): Promise {
- return new Promise((resolve, reject) => {
- const loader: THREE.AnyLoader = new (THREE).OBJLoader();
- (loader).setMaterials(materials);
- loader.load(
- `models/Egg_from_Poly_uovo/Egg from Poly uovo.obj`,
- (object: THREE.Object3D) => {
- resolve(object);
- },
- (xhr) => {
- // c.log(`${ xhr.loaded / xhr.total * 100 }% loaded`);
- },
- (error) => {
- c.error(error);
- reject(error);
- }
- );
- });
- }
- static async loadEgg(): Promise {
- const materialCreator = await this.loadEggMtl();
- return this.loadEggObj(materialCreator);
- }
- static loadEggGltf(): Promise {
- return new Promise((resolve, reject) => {
- const loader: THREE.AnyLoader = new (THREE).GLTFLoader();
- loader.load(
- `models/Egg_gltf/Egg from Poly uovo copy.gltf`,
- (gltf) => {
- c.log(gltf);
- resolve(gltf.scene);
- }
- );
- });
- }
- static loadDogDae(): Promise<{
- animations: THREE.AnimationClip[];
- scene: THREE.Group;
- }> {
- /// In Dae/Collada: did not manage to get
- /// either the anims or the texture.
- return new Promise((resolve, reject) => {
- const loader: THREE.AnyLoader = new (THREE).ColladaLoader();
- loader.load(
- `models/dog/pup_lohound.dae`,
- (collada) => {
- resolve(collada);
- }
- );
- });
- }
- static loadDogFbx(): Promise {
- return new Promise((resolve, reject) => {
- const loader: THREE.AnyLoader = new (THREE).FBXLoader();
- loader.load(
- `models/dog_fbx/puppy-snapchat.fbx`,
- (fbx) => {
- resolve(fbx);
- }
- );
- });
- }
- static loadBoloss(): Promise<{
- animations: THREE.AnimationClip[];
- scene: THREE.Group;
- }> {
- return new Promise((resolve, reject) => {
- const loader: THREE.AnyLoader = new (THREE).ColladaLoader();
- loader.load(
- `models/boloss/Boloss-3d v10.dae`,
- (collada) => {
- resolve(collada);
- }
- );
- });
- }
-}
-class TUtils {
- static boundingBox(o: THREE.Object3D): THREE.Box3 {
- const bbox = new THREE.Box3().setFromObject(o);
- return bbox;
- }
- static flushYZero(o: THREE.Object3D) {
- o.position.y = -(this.boundingBox(o)).min.y;
- }
- static perform(tween: TWEEN.Tween): Promise {
- return new Promise(resolve => {
- tween.onComplete(resolve).start();
- });
- }
-}
-(async () => {
- /**
- * scene construction
- */
- const gridHelper = new THREE.GridHelper(100, 100);
- scene.add(gridHelper);
- const axesHelper = new THREE.AxesHelper(50);
- scene.add(axesHelper);
-
-
- {
- const egg = await Assets.loadEgg();
- c.log(egg);
- egg.scale.setScalar(.2);
- egg.rotateX(-Math.PI / 2);
- egg.position.x = -18;
- TUtils.flushYZero(egg);
- const box = new THREE.BoxHelper(egg);
- scene.add(box);
- scene.add(egg);
- ///// Manually set the material, for fun.
- // const eggFace = egg.getObjectByName("CallKit-IconMask") as THREE.Mesh;
- // c.log(eggFace.material);
- // ((eggFace.material)).color.set(0x000000);
- }
- {
- const egg = await Assets.loadEggGltf();
- c.log(egg);
- egg.scale.setScalar(100);
- egg.position.x = -28;
- TUtils.flushYZero(egg);
- egg.remove(egg.getObjectByName('Camera')!);
- scene.add(egg);
- // c.log(Utils.boundingBox(egg));
- const box = new THREE.BoxHelper(egg, new THREE.Color('red'));
- scene.add(box);
- }
- {
- ////// dog_fbx
- const dog = await Assets.loadDogFbx();
- // c.log((dog).animations);
- const mixer = new THREE.AnimationMixer(dog);
- const clip: THREE.AnimationClip = (dog).animations.find(clip => clip.name === "lohound|lohoundAction");
- /// ^^ this is the main parent animation! Do not play all children.
- c.log(clip);
- mixer.clipAction(clip).play();
- mixers.push(mixer);
- const container = new THREE.Group();
- container.add(dog);
- container.scale.setScalar(0.007); /// <- scale a container, not the dog itself or it'll fuck the anims.
- container.position.x = -6;
- scene.add(container);
- const box = new THREE.BoxHelper(container, new THREE.Color('green'));
- scene.add(box);
- }
- {
- const boloss = (await Assets.loadBoloss()).scene;
- c.log(boloss);
- boloss.position.x = 16;
- TUtils.flushYZero(boloss);
- scene.add(boloss);
- const box = new THREE.BoxHelper(boloss, new THREE.Color('blue'));
- scene.add(box);
- /// Anims like in AudioBoloss
- const rootModel = boloss.getObjectByName(`SketchUp`)!;
- const pupilL = boloss.getObjectByName(`Pupil-left`)!;
- const pupilR = boloss.getObjectByName(`Pupil-right`)!;
- const pupils = new THREE.Group();
- pupils.add(pupilL, pupilR);
- rootModel.add(pupils);
- (async () => {
- while (true) {
- const translatePupil = new TWEEN.Tween(pupils.position)
- .to({ x: "-1", y: "-1" }, 200)
- .easing(TWEEN.Easing.Quadratic.Out)
- ;
- const translatePupilRev = new TWEEN.Tween(pupils.position)
- .to({ x: "+1", y: "+1" }, 200)
- .easing(TWEEN.Easing.Quadratic.Out)
- ;
- await TUtils.perform(translatePupil);
- await Utils.wait(4, 1);
- await TUtils.perform(translatePupilRev);
- await Utils.wait(8, 3);
- }
- })();
- const eyebrowL = boloss.getObjectByName(`Eyebrow-left`)!;
- const eyebrowR = boloss.getObjectByName(`Eyebrow-right`)!;
- const eyebrows = new THREE.Group();
- eyebrows.add(eyebrowL, eyebrowR);
- rootModel.add(eyebrows);
- (async () => {
- while (true) {
- const scaleEyebrow = new TWEEN.Tween(eyebrows.scale)
- .to({ x: 1.08, y: 1.08, z: 1.08 }, 100)
- .easing(TWEEN.Easing.Quadratic.InOut)
- ;
- const scaleEyebrowRev = new TWEEN.Tween(eyebrows.scale)
- .to({ x: 1, y: 1, z: 1 }, 100)
- .easing(TWEEN.Easing.Quadratic.InOut)
- ;
- await Utils.wait(6, 6);
- await TUtils.perform(scaleEyebrow);
- await TUtils.perform(scaleEyebrowRev);
- await Utils.wait(0.14);
- await TUtils.perform(scaleEyebrow);
- await TUtils.perform(scaleEyebrowRev);
- }
- })();
- (async () => {
- while (true) {
- const angle = Utils.randomFloat(-0.2, 0.3);
- const dummyL = new THREE.Object3D();
- dummyL.rotateOnAxis(new THREE.Vector3(0, 1, 0.8), angle);
- const dummyR = new THREE.Object3D();
- dummyR.rotateOnAxis(new THREE.Vector3(0, -1, -0.8), angle);
- /// ^^ exact same result as keeping the same vector and negating the angle.
- const rotateBrowL = new TWEEN.Tween(eyebrowL.rotation)
- .to({
- x: dummyL.rotation.x,
- y: dummyL.rotation.y,
- z: dummyL.rotation.z,
- }, 300)
- ;
- const rotateBrowR = new TWEEN.Tween(eyebrowR.rotation)
- .to({
- x: dummyR.rotation.x,
- y: dummyR.rotation.y,
- z: dummyR.rotation.z,
- }, 300)
- ;
- await Promise.all([
- TUtils.perform(rotateBrowL),
- TUtils.perform(rotateBrowR),
- ]);
- await Utils.wait(1, 1);
- await Promise.all([
- TUtils.perform(
- new TWEEN.Tween(eyebrowL.rotation).to({ x: 0, y: 0, z: 0 }, 300)
- ),
- TUtils.perform(
- new TWEEN.Tween(eyebrowR.rotation).to({ x: 0, y: 0, z: 0 }, 300)
- ),
- ]);
- await Utils.wait(1, 1);
- /// ^^ not the exact same behavior as in AudioBoloss (all waits are actually randoms there.)
- }
- })();
- }
-
-})();
-
-/**
- * MAIN()
- */
-window.addEventListener('resize', onWindowResize, false);
-function onWindowResize() {
- camera.aspect = window.innerWidth / window.innerHeight;
- camera.updateProjectionMatrix();
- renderer.setSize(window.innerWidth, window.innerHeight);
-}
-function render() {
- const delta = clock.getDelta();
- for (const mixer of mixers) {
- mixer.update(delta);
- }
- renderer.render(scene, camera);
-}
-function animate() {
- requestAnimationFrame(animate);
- TWEEN.update();
- render();
- stats.update();
-}
-animate();
-
diff --git a/spaces/bhkkhjgkk/Voice/app.py b/spaces/bhkkhjgkk/Voice/app.py
deleted file mode 100644
index 462e7a42271c8702e25fc26a963623542d19f581..0000000000000000000000000000000000000000
--- a/spaces/bhkkhjgkk/Voice/app.py
+++ /dev/null
@@ -1,166 +0,0 @@
-from turtle import title
-import gradio as gr
-
-import git
-import os
-# os.system('pip install git')
-os.system('git clone https://github.com/Edresson/Coqui-TTS -b multilingual-torchaudio-SE TTS')
-os.system('pip install -q -e TTS/')
-os.system('pip install -q torchaudio==0.9.0')
-
-import sys
-TTS_PATH = "TTS/"
-
-# add libraries into environment
-sys.path.append(TTS_PATH) # set this if TTS is not installed globally
-
-import os
-import string
-import time
-import argparse
-import json
-
-import numpy as np
-import IPython
-from IPython.display import Audio
-
-
-import torch
-
-from TTS.tts.utils.synthesis import synthesis
-from TTS.tts.utils.text.symbols import make_symbols, phonemes, symbols
-try:
- from TTS.utils.audio import AudioProcessor
-except:
- from TTS.utils.audio import AudioProcessor
-
-
-from TTS.tts.models import setup_model
-from TTS.config import load_config
-from TTS.tts.models.vits import *
-
-OUT_PATH = 'out/'
-
-# create output path
-os.makedirs(OUT_PATH, exist_ok=True)
-
-# model vars
-MODEL_PATH = '/home/user/app/best_model_latest.pth.tar'
-CONFIG_PATH = '/home/user/app/config.json'
-TTS_LANGUAGES = "/home/user/app/language_ids.json"
-TTS_SPEAKERS = "/home/user/app/speakers.json"
-USE_CUDA = torch.cuda.is_available()
-
-# load the config
-C = load_config(CONFIG_PATH)
-
-
-# load the audio processor
-ap = AudioProcessor(**C.audio)
-
-speaker_embedding = None
-
-C.model_args['d_vector_file'] = TTS_SPEAKERS
-C.model_args['use_speaker_encoder_as_loss'] = False
-
-model = setup_model(C)
-model.language_manager.set_language_ids_from_file(TTS_LANGUAGES)
-# print(model.language_manager.num_languages, model.embedded_language_dim)
-# print(model.emb_l)
-cp = torch.load(MODEL_PATH, map_location=torch.device('cpu'))
-# remove speaker encoder
-model_weights = cp['model'].copy()
-for key in list(model_weights.keys()):
- if "speaker_encoder" in key:
- del model_weights[key]
-
-model.load_state_dict(model_weights)
-
-
-model.eval()
-
-if USE_CUDA:
- model = model.cuda()
-
-# synthesize voice
-use_griffin_lim = False
-
-os.system('pip install -q pydub ffmpeg-normalize')
-
-CONFIG_SE_PATH = "config_se.json"
-CHECKPOINT_SE_PATH = "SE_checkpoint.pth.tar"
-
-from TTS.tts.utils.speakers import SpeakerManager
-from pydub import AudioSegment
-import librosa
-
-SE_speaker_manager = SpeakerManager(encoder_model_path=CHECKPOINT_SE_PATH, encoder_config_path=CONFIG_SE_PATH, use_cuda=USE_CUDA)
-
-def compute_spec(ref_file):
- y, sr = librosa.load(ref_file, sr=ap.sample_rate)
- spec = ap.spectrogram(y)
- spec = torch.FloatTensor(spec).unsqueeze(0)
- return spec
-
-
-
-def greet(Text,Voicetoclone,VoiceMicrophone):
- text= "%s" % (Text)
- if Voicetoclone is not None:
- reference_files= "%s" % (Voicetoclone)
- print("path url")
- print(Voicetoclone)
- sample= str(Voicetoclone)
- else:
- reference_files= "%s" % (VoiceMicrophone)
- print("path url")
- print(VoiceMicrophone)
- sample= str(VoiceMicrophone)
- size= len(reference_files)*sys.getsizeof(reference_files)
- size2= size / 1000000
- if (size2 > 0.012) or len(text)>2000:
- message="File is greater than 30mb or Text inserted is longer than 2000 characters. Please re-try with smaller sizes."
- print(message)
- raise SystemExit("File is greater than 30mb. Please re-try or Text inserted is longer than 2000 characters. Please re-try with smaller sizes.")
- else:
-        os.system(f'ffmpeg-normalize "{sample}" -nt rms -t=-27 -o "{sample}" -ar 16000 -f')
- reference_emb = SE_speaker_manager.compute_d_vector_from_clip(reference_files)
- model.length_scale = 1 # scaler for the duration predictor. The larger it is, the slower the speech.
- model.inference_noise_scale = 0.3 # defines the noise variance applied to the random z vector at inference.
- model.inference_noise_scale_dp = 0.3 # defines the noise variance applied to the duration predictor z vector at inference.
- text = text
- model.language_manager.language_id_mapping
- language_id = 0
-
- print(" > text: {}".format(text))
- wav, alignment, _, _ = synthesis(
- model,
- text,
- C,
- "cuda" in str(next(model.parameters()).device),
- ap,
- speaker_id=None,
- d_vector=reference_emb,
- style_wav=None,
- language_id=language_id,
- enable_eos_bos_chars=C.enable_eos_bos_chars,
- use_griffin_lim=True,
- do_trim_silence=False,
- ).values()
- print("Generated Audio")
- IPython.display.display(Audio(wav, rate=ap.sample_rate))
- #file_name = text.replace(" ", "_")
- #file_name = file_name.translate(str.maketrans('', '', string.punctuation.replace('_', ''))) + '.wav'
- file_name="Audio.wav"
- out_path = os.path.join(OUT_PATH, file_name)
- print(" > Saving output to {}".format(out_path))
- ap.save_wav(wav, out_path)
- return out_path
-
-demo = gr.Interface(
- fn=greet,
- inputs=[gr.inputs.Textbox(label='What would you like the voice to say? (max. 2000 characters per request)'),gr.Audio(type="filepath",source="upload",label='Please upload a voice to clone (max. 30mb)'),gr.Audio(source="microphone", type="filepath", streaming=True)],
- outputs="audio",
- title="Cloning Interface"
- )
-demo.launch()
\ No newline at end of file
diff --git a/spaces/bigjoker/stable-diffusion-webui/javascript/ui.js b/spaces/bigjoker/stable-diffusion-webui/javascript/ui.js
deleted file mode 100644
index b7a8268a8fcdf9821cb3af31efea9e0283da1bfe..0000000000000000000000000000000000000000
--- a/spaces/bigjoker/stable-diffusion-webui/javascript/ui.js
+++ /dev/null
@@ -1,338 +0,0 @@
-// various functions for interaction with ui.py not large enough to warrant putting them in separate files
-
-function set_theme(theme){
- gradioURL = window.location.href
- if (!gradioURL.includes('?__theme=')) {
- window.location.replace(gradioURL + '?__theme=' + theme);
- }
-}
-
-function selected_gallery_index(){
- var buttons = gradioApp().querySelectorAll('[style="display: block;"].tabitem div[id$=_gallery] .gallery-item')
- var button = gradioApp().querySelector('[style="display: block;"].tabitem div[id$=_gallery] .gallery-item.\\!ring-2')
-
- var result = -1
- buttons.forEach(function(v, i){ if(v==button) { result = i } })
-
- return result
-}
-
-function extract_image_from_gallery(gallery){
- if(gallery.length == 1){
- return [gallery[0]]
- }
-
- index = selected_gallery_index()
-
- if (index < 0 || index >= gallery.length){
- return [null]
- }
-
- return [gallery[index]];
-}
-
-function args_to_array(args){
-    res = []
-    for(var i=0;i<args.length;i++){
-        res.push(args[i])
-    }
-    return res
-}
-
-var promptTokecountUpdateFuncs = {}
-
-onUiUpdate(function(){
-    function registerTextarea(id, id_counter, id_button){
-    var prompt = gradioApp().getElementById(id)
-    var counter = gradioApp().getElementById(id_counter)
-    var textarea = gradioApp().querySelector("#" + id + " > label > textarea");
-
- if(counter.parentElement == prompt.parentElement){
- return
- }
-
- prompt.parentElement.insertBefore(counter, prompt)
- counter.classList.add("token-counter")
- prompt.parentElement.style.position = "relative"
-
- promptTokecountUpdateFuncs[id] = function(){ update_token_counter(id_button); }
- textarea.addEventListener("input", promptTokecountUpdateFuncs[id]);
- }
-
- registerTextarea('txt2img_prompt', 'txt2img_token_counter', 'txt2img_token_button')
- registerTextarea('txt2img_neg_prompt', 'txt2img_negative_token_counter', 'txt2img_negative_token_button')
- registerTextarea('img2img_prompt', 'img2img_token_counter', 'img2img_token_button')
- registerTextarea('img2img_neg_prompt', 'img2img_negative_token_counter', 'img2img_negative_token_button')
-
- show_all_pages = gradioApp().getElementById('settings_show_all_pages')
- settings_tabs = gradioApp().querySelector('#settings div')
- if(show_all_pages && settings_tabs){
- settings_tabs.appendChild(show_all_pages)
- show_all_pages.onclick = function(){
- gradioApp().querySelectorAll('#settings > div').forEach(function(elem){
- elem.style.display = "block";
- })
- }
- }
-})
-
-onOptionsChanged(function(){
- elem = gradioApp().getElementById('sd_checkpoint_hash')
- sd_checkpoint_hash = opts.sd_checkpoint_hash || ""
- shorthash = sd_checkpoint_hash.substr(0,10)
-
- if(elem && elem.textContent != shorthash){
- elem.textContent = shorthash
- elem.title = sd_checkpoint_hash
- elem.href = "https://google.com/search?q=" + sd_checkpoint_hash
- }
-})
-
-let txt2img_textarea, img2img_textarea = undefined;
-let wait_time = 800
-let token_timeouts = {};
-
-function update_txt2img_tokens(...args) {
- update_token_counter("txt2img_token_button")
- if (args.length == 2)
- return args[0]
- return args;
-}
-
-function update_img2img_tokens(...args) {
- update_token_counter("img2img_token_button")
- if (args.length == 2)
- return args[0]
- return args;
-}
-
-function update_token_counter(button_id) {
- if (token_timeouts[button_id])
- clearTimeout(token_timeouts[button_id]);
- token_timeouts[button_id] = setTimeout(() => gradioApp().getElementById(button_id)?.click(), wait_time);
-}
-
-function restart_reload(){
- document.body.innerHTML='Reloading... ';
- setTimeout(function(){location.reload()},2000)
-
- return []
-}
-
-// Simulate an `input` DOM event for Gradio Textbox component. Needed after you edit its contents in javascript, otherwise your edits
-// will only visible on web page and not sent to python.
-function updateInput(target){
- let e = new Event("input", { bubbles: true })
- Object.defineProperty(e, "target", {value: target})
- target.dispatchEvent(e);
-}
-
-
-var desiredCheckpointName = null;
-function selectCheckpoint(name){
- desiredCheckpointName = name;
- gradioApp().getElementById('change_checkpoint').click()
-}
diff --git a/spaces/binarycache/voice_to_image/Makefile b/spaces/binarycache/voice_to_image/Makefile
deleted file mode 100644
index 9e75a07104b1452e482d9b151e00fb30965b56e9..0000000000000000000000000000000000000000
--- a/spaces/binarycache/voice_to_image/Makefile
+++ /dev/null
@@ -1,27 +0,0 @@
-install:
- python.exe -m pip install --upgrade pip &&\
- pip install -r requirements.txt
-
-test:
- python -m pytest -vvv --cov=hello --cov=greeting \
- --cov=smath --cov=web tests
- python -m pytest --nbval notebook.ipynb #tests our jupyter notebook
- #python -m pytest -v tests/test_web.py #if you just want to test web
-
-debug:
- python -m pytest -vv --pdb #Debugger is invoked
-
-one-test:
- python -m pytest -vv tests/test_greeting.py::test_my_name4
-
-debugthree:
- #not working the way I expect
- python -m pytest -vv --pdb --maxfail=4 # drop to PDB for first three failures
-
-format:
- black *.py
-
-lint:
- pylint --disable=R,C *.py
-
-all: install lint test format
\ No newline at end of file
diff --git a/spaces/bingbing520/ChatGPT/modules/presets.py b/spaces/bingbing520/ChatGPT/modules/presets.py
deleted file mode 100644
index c941ff940016efa3c4f30b540aefe7ab03fbda7a..0000000000000000000000000000000000000000
--- a/spaces/bingbing520/ChatGPT/modules/presets.py
+++ /dev/null
@@ -1,222 +0,0 @@
-# -*- coding:utf-8 -*-
-import os
-from pathlib import Path
-import gradio as gr
-from .webui_locale import I18nAuto
-
-i18n = I18nAuto() # internationalization
-
-CHATGLM_MODEL = None
-CHATGLM_TOKENIZER = None
-LLAMA_MODEL = None
-LLAMA_INFERENCER = None
-
-# ChatGPT settings
-INITIAL_SYSTEM_PROMPT = "You are a helpful assistant."
-API_HOST = "api.openai.com"
-COMPLETION_URL = "https://api.openai.com/v1/chat/completions"
-BALANCE_API_URL="https://api.openai.com/dashboard/billing/credit_grants"
-USAGE_API_URL="https://api.openai.com/dashboard/billing/usage"
-HISTORY_DIR = Path("history")
-HISTORY_DIR = "history"
-TEMPLATES_DIR = "templates"
-
-# Error messages
-STANDARD_ERROR_MSG = i18n("☹️发生了错误:") # standard prefix for error messages
-GENERAL_ERROR_MSG = i18n("获取对话时发生错误,请查看后台日志")
-ERROR_RETRIEVE_MSG = i18n("请检查网络连接,或者API-Key是否有效。")
-CONNECTION_TIMEOUT_MSG = i18n("连接超时,无法获取对话。") # connection timed out
-READ_TIMEOUT_MSG = i18n("读取超时,无法获取对话。") # read timed out
-PROXY_ERROR_MSG = i18n("代理错误,无法获取对话。") # proxy error
-SSL_ERROR_PROMPT = i18n("SSL错误,无法获取对话。") # SSL error
-NO_APIKEY_MSG = i18n("API key为空,请检查是否输入正确。") # API key missing or shorter than 51 characters
-NO_INPUT_MSG = i18n("请输入对话内容。") # no conversation content entered
-BILLING_NOT_APPLICABLE_MSG = i18n("账单信息不适用") # billing info returned by locally run models
-
-TIMEOUT_STREAMING = 60 # timeout for streaming conversations
-TIMEOUT_ALL = 200 # timeout for non-streaming conversations
-ENABLE_STREAMING_OPTION = True # whether to show the checkbox for displaying answers in real time
-HIDE_MY_KEY = False # set to True if you want to hide your API key in the UI
-CONCURRENT_COUNT = 100 # number of users allowed to use the app at the same time
-
-SIM_K = 5
-INDEX_QUERY_TEMPRATURE = 1.0
-
-CHUANHU_TITLE = i18n("BingChat 🚀")
-
-CHUANHU_DESCRIPTION = i18n("")
-
-FOOTER = """{versions}
"""
-
-APPEARANCE_SWITCHER = """
-<div style="display: flex; justify-content: space-between;">
-<span style="margin-top: 4px !important;">"""+ i18n("切换亮暗色主题") + """</span>
-<span><label class="apSwitch" for="checkbox">
-    <input type="checkbox" id="checkbox">
-    <div class="apSlider"></div>
-</label></span>
-</div>
-"""
-
-SUMMARIZE_PROMPT = "你是谁?我们刚才聊了什么?" # prompt used when asking the model to summarize the conversation
-
-ONLINE_MODELS = [
- "gpt-3.5-turbo",
- "gpt-3.5-turbo-0301",
- "gpt-4",
- "gpt-4-0314",
- "gpt-4-32k",
- "gpt-4-32k-0314",
- "xmchat",
-]
-
-LOCAL_MODELS = [
- "chatglm-6b",
- "chatglm-6b-int4",
- "chatglm-6b-int4-qe",
- "llama-7b-hf",
- "llama-13b-hf",
- "llama-30b-hf",
- "llama-65b-hf"
-]
-
-if os.environ.get('HIDE_LOCAL_MODELS', 'false') == 'true':
- MODELS = ONLINE_MODELS
-else:
- MODELS = ONLINE_MODELS + LOCAL_MODELS
-
-DEFAULT_MODEL = 0
-
-os.makedirs("models", exist_ok=True)
-os.makedirs("lora", exist_ok=True)
-os.makedirs("history", exist_ok=True)
-for dir_name in os.listdir("models"):
- if os.path.isdir(os.path.join("models", dir_name)):
- if dir_name not in MODELS:
- MODELS.append(dir_name)
-
-MODEL_TOKEN_LIMIT = {
- "gpt-3.5-turbo": 4096,
- "gpt-3.5-turbo-0301": 4096,
- "gpt-4": 8192,
- "gpt-4-0314": 8192,
- "gpt-4-32k": 32768,
- "gpt-4-32k-0314": 32768
-}
-
-TOKEN_OFFSET = 1000 # subtracted from the model's token limit to get the soft limit; once the soft limit is reached, token usage is reduced automatically
-DEFAULT_TOKEN_LIMIT = 3000 # default token limit
-REDUCE_TOKEN_FACTOR = 0.5 # multiplied by the model's token limit to get the target token count; when reducing, usage is cut to below that target
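A worked example of how these three constants combine (illustration only, using the MODEL_TOKEN_LIMIT table above):

    limit = MODEL_TOKEN_LIMIT["gpt-3.5-turbo"]   # 4096
    soft_limit = limit - TOKEN_OFFSET            # 4096 - 1000 = 3096
    target = limit * REDUCE_TOKEN_FACTOR         # 4096 * 0.5 = 2048.0
    # once a conversation passes 3096 tokens, usage is trimmed back below 2048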
-
-REPLY_LANGUAGES = [
- "简体中文",
- "繁體中文",
- "English",
- "日本語",
- "Español",
- "Français",
- "Deutsch",
- "跟随问题语言(不稳定)"
-]
-
-
-WEBSEARCH_PTOMPT_TEMPLATE = """\
-Web search results:
-
-{web_results}
-Current date: {current_date}
-
-Instructions: Using the provided web search results, write a comprehensive reply to the given query. Make sure to cite results using [[number](URL)] notation after the reference. If the provided search results refer to multiple subjects with the same name, write separate answers for each subject.
-Query: {query}
-Reply in {reply_language}
-"""
-
-PROMPT_TEMPLATE = """\
-Context information is below.
----------------------
-{context_str}
----------------------
-Current date: {current_date}.
-Using the provided context information, write a comprehensive reply to the given query.
-Make sure to cite results using [number] notation after the reference.
-If the provided context information refer to multiple subjects with the same name, write separate answers for each subject.
-Use prior knowledge only if the given context didn't provide enough information.
-Answer the question: {query_str}
-Reply in {reply_language}
-"""
-
-REFINE_TEMPLATE = """\
-The original question is as follows: {query_str}
-We have provided an existing answer: {existing_answer}
-We have the opportunity to refine the existing answer
-(only if needed) with some more context below.
-------------
-{context_msg}
-------------
-Given the new context, refine the original answer to better
-Reply in {reply_language}
-If the context isn't useful, return the original answer.
-"""
-
-ALREADY_CONVERTED_MARK = "<!-- ALREADY CONVERTED BY PARSER. -->"
-
-small_and_beautiful_theme = gr.themes.Soft(
- primary_hue=gr.themes.Color(
- c50="#02C160",
- c100="rgba(2, 193, 96, 0.2)",
- c200="#02C160",
- c300="rgba(2, 193, 96, 0.32)",
- c400="rgba(2, 193, 96, 0.32)",
- c500="rgba(2, 193, 96, 1.0)",
- c600="rgba(2, 193, 96, 1.0)",
- c700="rgba(2, 193, 96, 0.32)",
- c800="rgba(2, 193, 96, 0.32)",
- c900="#02C160",
- c950="#02C160",
- ),
- secondary_hue=gr.themes.Color(
- c50="#576b95",
- c100="#576b95",
- c200="#576b95",
- c300="#576b95",
- c400="#576b95",
- c500="#576b95",
- c600="#576b95",
- c700="#576b95",
- c800="#576b95",
- c900="#576b95",
- c950="#576b95",
- ),
- neutral_hue=gr.themes.Color(
- name="gray",
- c50="#f9fafb",
- c100="#f3f4f6",
- c200="#e5e7eb",
- c300="#d1d5db",
- c400="#B2B2B2",
- c500="#808080",
- c600="#636363",
- c700="#515151",
- c800="#393939",
- c900="#272727",
- c950="#171717",
- ),
- radius_size=gr.themes.sizes.radius_sm,
- ).set(
- button_primary_background_fill="#06AE56",
- button_primary_background_fill_dark="#06AE56",
- button_primary_background_fill_hover="#07C863",
- button_primary_border_color="#06AE56",
- button_primary_border_color_dark="#06AE56",
- button_primary_text_color="#FFFFFF",
- button_primary_text_color_dark="#FFFFFF",
- button_secondary_background_fill="#F2F2F2",
- button_secondary_background_fill_dark="#2B2B2B",
- button_secondary_text_color="#393939",
- button_secondary_text_color_dark="#FFFFFF",
- # background_fill_primary="#F7F7F7",
- # background_fill_primary_dark="#1F1F1F",
- block_title_text_color="*primary_500",
- block_title_background_fill="*primary_100",
- input_background_fill="#F6F6F6",
- )
diff --git a/spaces/bioriAsaeru/text-to-voice/Free Download Mchillipepper Gta San Andreas Samp C carnet baiser medici What is it and Why You Should Try it.md b/spaces/bioriAsaeru/text-to-voice/Free Download Mchillipepper Gta San Andreas Samp C carnet baiser medici What is it and Why You Should Try it.md
deleted file mode 100644
index ac048bfa92fe8f00fb5c80f5ff7c65b865f683f3..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/Free Download Mchillipepper Gta San Andreas Samp C carnet baiser medici What is it and Why You Should Try it.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Free Download Mchillipepper Gta San Andreas Samp C carnet baiser medici Download File ⚹ https://urloso.com/2uyPNx
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/bioriAsaeru/text-to-voice/How to Use Autodesk ImageModeler 2009 SP1 build for Photorealistic Reconstruction.md b/spaces/bioriAsaeru/text-to-voice/How to Use Autodesk ImageModeler 2009 SP1 build for Photorealistic Reconstruction.md
deleted file mode 100644
index 25092b6bd2c642b4ff25430c91a5bf742a6dd149..0000000000000000000000000000000000000000
--- a/spaces/bioriAsaeru/text-to-voice/How to Use Autodesk ImageModeler 2009 SP1 build for Photorealistic Reconstruction.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-Autocad 2017 running on all supported operating systems and languages.service pack to the.autodesk revit 2017 service pack 1. Obtained from improvements included in.service pack to the following autodesk products running on all.the program automates most of the stages of the.users can now use interactive 3d cad vivotek camera models directly in their building plans and. Autodesk imagemodeler.note.autocad 2017 sp1 32 bit exekb.autodesk stitcher unlimited 2009 build 61.autodesk imagemodeler 2009 sp1 x86 build autodesk imagemodeler 2009 software generates 3d models from 2d digital images, giving architects, designers, and.consult the enhancements documentation for areas improved by this update. This.autodesk 3ds max design 2015 service pack 1 and security fix. Please verify.build 61.autodesk builds software that helps people imagine, design, and create a better world.this service pack.
-Autodesk ImageModeler 2009 SP1 build Download File > https://urloso.com/2uyPhN
-Can be applied to autocad 2017 installed as a standalone.sign up, it unlocks. Autodesk.imagemodeler.2009.sp1.x86.build. nope.rar.look at most relevant autodesk imagemodeler 2009 sp1 build x86 websites out of 15 at keyoptimize. Autodesk imagemodeler 2009 sp1 build. 2017.you can apply this update release to autodesk revit 2017 running on all.free shipping on qualified orders.download realviz imagemodeler build or any other.autodesk inventor 2017 build 142. Windows 7 sp1 64 bit: windows update is in the control panel, which is accessible from the start menu.autodesk imagemodeler 2009 sp1 build torrent download locations. Jekyll.and.hyde 1s, bairavaa tamil movies s, the water horse tamil 0s,.we would like to show you a description here but the site wont allow us.autodesk stitcher unlimited 2009.
-.on demand webinar: the presentation lifecyclehow to create,.autodesk imagemodeler 2009 image based modeling software is not available as a stand alone product.autocad 2017 service pack 1. Autodesk autocad with advance steel.general. The sp1build 196 installs on autodesk inventor.autocad 2017 service pack 1. Download. You can apply this update to.this update release addresses issues reported to autodesk against autodesk revit 2017.imagemodeler is available with some autodesk products with.this readme contains the latest information regarding the installation and use of this service pack.autodesk imagemodeler 2009 sp1 32bit.update required revit state to apply update release date update build versio.software per progettazione 3d, ingegneria e intrattenimento.look at most relevant autodesk imagemodeler 2009 sp1 x86 build websites out of 34.1 thousand at keyoptimize. Autodesk imagemodeler 2009 sp1 x86 build .
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/backbone/build.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/backbone/build.py
deleted file mode 100644
index af02141172bebe9a2a27a88c81673c2710b4d73f..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/modeling/backbone/build.py
+++ /dev/null
@@ -1,33 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-from detectron2.layers import ShapeSpec
-from detectron2.utils.registry import Registry
-
-from .backbone import Backbone
-
-BACKBONE_REGISTRY = Registry("BACKBONE")
-BACKBONE_REGISTRY.__doc__ = """
-Registry for backbones, which extract feature maps from images
-
-The registered object must be a callable that accepts two arguments:
-
-1. A :class:`detectron2.config.CfgNode`
-2. A :class:`detectron2.layers.ShapeSpec`, which contains the input shape specification.
-
-Registered object must return instance of :class:`Backbone`.
-"""
-
-
-def build_backbone(cfg, input_shape=None):
- """
- Build a backbone from `cfg.MODEL.BACKBONE.NAME`.
-
- Returns:
- an instance of :class:`Backbone`
- """
- if input_shape is None:
- input_shape = ShapeSpec(channels=len(cfg.MODEL.PIXEL_MEAN))
-
- backbone_name = cfg.MODEL.BACKBONE.NAME
- backbone = BACKBONE_REGISTRY.get(backbone_name)(cfg, input_shape)
- assert isinstance(backbone, Backbone)
- return backbone
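A hypothetical registration sketch illustrating the contract stated in the registry docstring above (the ToyBackbone name and its internals are made up for illustration; only BACKBONE_REGISTRY, Backbone, ShapeSpec and the (cfg, input_shape) signature come from the file itself):

    import torch.nn as nn
    from detectron2.layers import ShapeSpec
    from detectron2.modeling import BACKBONE_REGISTRY, Backbone

    class ToyBackbone(Backbone):
        def __init__(self, cfg, input_shape):
            super().__init__()
            # one strided conv producing a single named feature map
            self.conv = nn.Conv2d(input_shape.channels, 64, kernel_size=3, stride=2, padding=1)

        def forward(self, image):
            return {"toy": self.conv(image)}

        def output_shape(self):
            return {"toy": ShapeSpec(channels=64, stride=2)}

    @BACKBONE_REGISTRY.register()
    def build_toy_backbone(cfg, input_shape):
        return ToyBackbone(cfg, input_shape)

    # selected at runtime by setting cfg.MODEL.BACKBONE.NAME = "build_toy_backbone"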
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/structures/chart_confidence.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/structures/chart_confidence.py
deleted file mode 100644
index 57c63257a7c176af1522e2f143ed594c26906c76..0000000000000000000000000000000000000000
--- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/structures/chart_confidence.py
+++ /dev/null
@@ -1,98 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-from dataclasses import make_dataclass
-from functools import lru_cache
-from typing import Any, Optional
-import torch
-
-
-@lru_cache(maxsize=None)
-def decorate_predictor_output_class_with_confidences(BasePredictorOutput: type) -> type:
- """
- Create a new output class from an existing one by adding new attributes
- related to confidence estimation:
- - sigma_1 (tensor)
- - sigma_2 (tensor)
- - kappa_u (tensor)
- - kappa_v (tensor)
- - fine_segm_confidence (tensor)
- - coarse_segm_confidence (tensor)
-
- Details on confidence estimation parameters can be found in:
- N. Neverova, D. Novotny, A. Vedaldi "Correlated Uncertainty for Learning
- Dense Correspondences from Noisy Labels", p. 918--926, in Proc. NIPS 2019
- A. Sanakoyeu et al., Transferring Dense Pose to Proximal Animal Classes, CVPR 2020
-
- The new class inherits the provided `BasePredictorOutput` class,
- it's name is composed of the name of the provided class and
- "WithConfidences" suffix.
-
- Args:
- BasePredictorOutput (type): output type to which confidence data
- is to be added, assumed to be a dataclass
- Return:
- New dataclass derived from the provided one that has attributes
- for confidence estimation
- """
-
- PredictorOutput = make_dataclass(
- BasePredictorOutput.__name__ + "WithConfidences",
- fields=[
- ("sigma_1", Optional[torch.Tensor], None),
- ("sigma_2", Optional[torch.Tensor], None),
- ("kappa_u", Optional[torch.Tensor], None),
- ("kappa_v", Optional[torch.Tensor], None),
- ("fine_segm_confidence", Optional[torch.Tensor], None),
- ("coarse_segm_confidence", Optional[torch.Tensor], None),
- ],
- bases=(BasePredictorOutput,),
- )
-
- # add possibility to index PredictorOutput
-
- def slice_if_not_none(data, item):
- if data is None:
- return None
- if isinstance(item, int):
- return data[item].unsqueeze(0)
- return data[item]
-
- def PredictorOutput_getitem(self, item):
- PredictorOutput = type(self)
- base_predictor_output_sliced = super(PredictorOutput, self).__getitem__(item)
- return PredictorOutput(
- **base_predictor_output_sliced.__dict__,
- coarse_segm_confidence=slice_if_not_none(self.coarse_segm_confidence, item),
- fine_segm_confidence=slice_if_not_none(self.fine_segm_confidence, item),
- sigma_1=slice_if_not_none(self.sigma_1, item),
- sigma_2=slice_if_not_none(self.sigma_2, item),
- kappa_u=slice_if_not_none(self.kappa_u, item),
- kappa_v=slice_if_not_none(self.kappa_v, item),
- )
-
- PredictorOutput.__getitem__ = PredictorOutput_getitem
-
- def PredictorOutput_to(self, device: torch.device):
- """
- Transfers all tensors to the given device
- """
- PredictorOutput = type(self)
- base_predictor_output_to = super(PredictorOutput, self).to(device) # pyre-ignore[16]
-
- def to_device_if_tensor(var: Any):
- if isinstance(var, torch.Tensor):
- return var.to(device)
- return var
-
- return PredictorOutput(
- **base_predictor_output_to.__dict__,
- sigma_1=to_device_if_tensor(self.sigma_1),
- sigma_2=to_device_if_tensor(self.sigma_2),
- kappa_u=to_device_if_tensor(self.kappa_u),
- kappa_v=to_device_if_tensor(self.kappa_v),
- fine_segm_confidence=to_device_if_tensor(self.fine_segm_confidence),
- coarse_segm_confidence=to_device_if_tensor(self.coarse_segm_confidence),
- )
-
- PredictorOutput.to = PredictorOutput_to
- return PredictorOutput
diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImtImagePlugin.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImtImagePlugin.py
deleted file mode 100644
index ac267457b0682a975a1a33da475c96531c398bd7..0000000000000000000000000000000000000000
--- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImtImagePlugin.py
+++ /dev/null
@@ -1,101 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# IM Tools support for PIL
-#
-# history:
-# 1996-05-27 fl Created (read 8-bit images only)
-# 2001-02-17 fl Use 're' instead of 'regex' (Python 2.1) (0.2)
-#
-# Copyright (c) Secret Labs AB 1997-2001.
-# Copyright (c) Fredrik Lundh 1996-2001.
-#
-# See the README file for information on usage and redistribution.
-#
-
-
-import re
-
-from . import Image, ImageFile
-
-#
-# --------------------------------------------------------------------
-
-field = re.compile(rb"([a-z]*) ([^ \r\n]*)")
-
-
-##
-# Image plugin for IM Tools images.
-
-
-class ImtImageFile(ImageFile.ImageFile):
- format = "IMT"
- format_description = "IM Tools"
-
- def _open(self):
- # Quick rejection: if there's not a LF among the first
- # 100 bytes, this is (probably) not a text header.
-
- buffer = self.fp.read(100)
- if b"\n" not in buffer:
- msg = "not an IM file"
- raise SyntaxError(msg)
-
- xsize = ysize = 0
-
- while True:
- if buffer:
- s = buffer[:1]
- buffer = buffer[1:]
- else:
- s = self.fp.read(1)
- if not s:
- break
-
- if s == b"\x0C":
- # image data begins
- self.tile = [
- (
- "raw",
- (0, 0) + self.size,
- self.fp.tell() - len(buffer),
- (self.mode, 0, 1),
- )
- ]
-
- break
-
- else:
- # read key/value pair
- if b"\n" not in buffer:
- buffer += self.fp.read(100)
- lines = buffer.split(b"\n")
- s += lines.pop(0)
- buffer = b"\n".join(lines)
- if len(s) == 1 or len(s) > 100:
- break
- if s[0] == ord(b"*"):
- continue # comment
-
- m = field.match(s)
- if not m:
- break
- k, v = m.group(1, 2)
- if k == b"width":
- xsize = int(v)
- self._size = xsize, ysize
- elif k == b"height":
- ysize = int(v)
- self._size = xsize, ysize
- elif k == b"pixel" and v == b"n8":
- self.mode = "L"
-
-
-#
-# --------------------------------------------------------------------
-
-Image.register_open(ImtImageFile.format, ImtImageFile)
-
-#
-# no extension registered (".im" is simply too common)
diff --git a/spaces/caojiachen1/ChatGPT/crazy_functions/test_project/latex/attention/parameter_attention.tex b/spaces/caojiachen1/ChatGPT/crazy_functions/test_project/latex/attention/parameter_attention.tex
deleted file mode 100644
index 7bc4fe452dbdbfe44ff72f0cdbd37acd5c786ce6..0000000000000000000000000000000000000000
--- a/spaces/caojiachen1/ChatGPT/crazy_functions/test_project/latex/attention/parameter_attention.tex
+++ /dev/null
@@ -1,45 +0,0 @@
-\pagebreak
-\section*{Two Feed-Forward Layers = Attention over Parameters}\label{sec:parameter_attention}
-
-In addition to attention layers, our model contains position-wise feed-forward networks (Section \ref{sec:ffn}), which consist of two linear transformations with a ReLU activation in between. In fact, these networks too can be seen as a form of attention. Compare the formula for such a network with the formula for a simple dot-product attention layer (biases and scaling factors omitted):
-
-\begin{align*}
- FFN(x, W_1, W_2) = ReLU(xW_1)W_2 \\
- A(q, K, V) = Softmax(qK^T)V
-\end{align*}
-
-Based on the similarity of these formulae, the two-layer feed-forward network can be seen as a kind of attention, where the keys and values are the rows of the trainable parameter matrices $W_1$ and $W_2$, and where we use ReLU instead of Softmax in the compatibility function.
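To make the correspondence concrete, the following is a minimal PyTorch sketch (not taken from our codebase; dimensions and initialization are purely illustrative) showing that the position-wise feed-forward network is a ReLU-weighted sum over $d_{ff}$ key-value pairs stored in $W_1$ and $W_2$:

```python
import torch
import torch.nn.functional as F

d_model, d_ff, n_tokens = 512, 2048, 10
x = torch.randn(n_tokens, d_model)
W1 = torch.randn(d_model, d_ff) / d_model ** 0.5
W2 = torch.randn(d_ff, d_model) / d_ff ** 0.5

# Position-wise feed-forward network: FFN(x) = ReLU(x W1) W2
ffn_out = F.relu(x @ W1) @ W2

# The same computation read as attention over parameters:
# each of the d_ff hidden units contributes one key (from W1) and one value (a row of W2),
# with ReLU playing the role of the compatibility function instead of a softmax.
keys = W1.t()                    # (d_ff, d_model)
values = W2                      # (d_ff, d_model)
weights = F.relu(x @ keys.t())   # (n_tokens, d_ff): unnormalized "attention weights"
attn_out = weights @ values      # (n_tokens, d_model)

assert torch.allclose(ffn_out, attn_out, atol=1e-5)
```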
-
-%the compatablity function is $compat(q, k_i) = ReLU(q \cdot k_i)$ instead of $Softmax(qK_T)_i$.
-
-Given this similarity, we experimented with replacing the position-wise feed-forward networks with attention layers similar to the ones we use everywhere else in our model. The multi-head-attention-over-parameters sublayer is identical to the multi-head attention described in \ref{sec:multihead}, except that the "keys" and "values" inputs to each attention head are trainable model parameters, as opposed to being linear projections of a previous layer. These parameters are scaled up by a factor of $\sqrt{d_{model}}$ in order to be more similar to activations.
-
-In our first experiment, we replaced each position-wise feed-forward network with a multi-head-attention-over-parameters sublayer with $h_p=8$ heads, key-dimensionality $d_{pk}=64$, and value-dimensionality $d_{pv}=64$, using $n_p=1536$ key-value pairs for each attention head. The sublayer has a total of $2097152$ parameters, including the parameters in the query projection and the output projection. This matches the number of parameters in the position-wise feed-forward network that we replaced. While the theoretical amount of computation is also the same, in practice, the attention version caused the step times to be about 30\% longer.
-
-In our second experiment, we used $h_p=8$ heads, and $n_p=512$ key-value pairs for each attention head, again matching the total number of parameters in the base model.
-
-Results for the first experiment were slightly worse than for the base model, and results for the second experiment were slightly better, see Table~\ref{tab:parameter_attention}.
-
-\begin{table}[h]
-\caption{Replacing the position-wise feed-forward networks with multihead-attention-over-parameters produces similar results to the base model. All metrics are on the English-to-German translation development set, newstest2013.}
-\label{tab:parameter_attention}
-\begin{center}
-\vspace{-2mm}
-%\scalebox{1.0}{
-\begin{tabular}{c|cccccc|cccc}
-\hline\rule{0pt}{2.0ex}
- & \multirow{2}{*}{$\dmodel$} & \multirow{2}{*}{$\dff$} &
-\multirow{2}{*}{$h_p$} & \multirow{2}{*}{$d_{pk}$} & \multirow{2}{*}{$d_{pv}$} &
- \multirow{2}{*}{$n_p$} &
- PPL & BLEU & params & training\\
- & & & & & & & (dev) & (dev) & $\times10^6$ & time \\
-\hline\rule{0pt}{2.0ex}
-base & 512 & 2048 & & & & & 4.92 & 25.8 & 65 & 12 hours\\
-\hline\rule{0pt}{2.0ex}
-AOP$_1$ & 512 & & 8 & 64 & 64 & 1536 & 4.92& 25.5 & 65 & 16 hours\\
-AOP$_2$ & 512 & & 16 & 64 & 64 & 512 & \textbf{4.86} & \textbf{25.9} & 65 & 16 hours \\
-\hline
-\end{tabular}
-%}
-\end{center}
-\end{table}
diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/data/test_sampler.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/data/test_sampler.py
deleted file mode 100644
index 0d2784390801314862524e1b85703535d199e41d..0000000000000000000000000000000000000000
--- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/data/test_sampler.py
+++ /dev/null
@@ -1,111 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import itertools
-import math
-import operator
-import unittest
-import torch
-from torch.utils import data
-from torch.utils.data.sampler import SequentialSampler
-
-from detectron2.data.build import worker_init_reset_seed
-from detectron2.data.common import DatasetFromList, ToIterableDataset
-from detectron2.data.samplers import (
- GroupedBatchSampler,
- InferenceSampler,
- RepeatFactorTrainingSampler,
- TrainingSampler,
-)
-from detectron2.utils.env import seed_all_rng
-
-
-class TestGroupedBatchSampler(unittest.TestCase):
- def test_missing_group_id(self):
- sampler = SequentialSampler(list(range(100)))
- group_ids = [1] * 100
- samples = GroupedBatchSampler(sampler, group_ids, 2)
-
- for mini_batch in samples:
- self.assertEqual(len(mini_batch), 2)
-
- def test_groups(self):
- sampler = SequentialSampler(list(range(100)))
- group_ids = [1, 0] * 50
- samples = GroupedBatchSampler(sampler, group_ids, 2)
-
- for mini_batch in samples:
- self.assertEqual((mini_batch[0] + mini_batch[1]) % 2, 0)
-
-
-class TestSamplerDeterministic(unittest.TestCase):
- def test_to_iterable(self):
- sampler = TrainingSampler(100, seed=10)
- gt_output = list(itertools.islice(sampler, 100))
- self.assertEqual(set(gt_output), set(range(100)))
-
- dataset = DatasetFromList(list(range(100)))
- dataset = ToIterableDataset(dataset, sampler)
- data_loader = data.DataLoader(dataset, num_workers=0, collate_fn=operator.itemgetter(0))
-
- output = list(itertools.islice(data_loader, 100))
- self.assertEqual(output, gt_output)
-
- data_loader = data.DataLoader(
- dataset,
- num_workers=2,
- collate_fn=operator.itemgetter(0),
- worker_init_fn=worker_init_reset_seed,
- # reset seed should not affect behavior of TrainingSampler
- )
- output = list(itertools.islice(data_loader, 100))
- # multiple workers should not lead to duplicate or different data
- self.assertEqual(output, gt_output)
-
- def test_training_sampler_seed(self):
- seed_all_rng(42)
- sampler = TrainingSampler(30)
- data = list(itertools.islice(sampler, 65))
-
- seed_all_rng(42)
- sampler = TrainingSampler(30)
- seed_all_rng(999) # should be ineffective
- data2 = list(itertools.islice(sampler, 65))
- self.assertEqual(data, data2)
-
-
-class TestRepeatFactorTrainingSampler(unittest.TestCase):
- def test_repeat_factors_from_category_frequency(self):
- repeat_thresh = 0.5
-
- dataset_dicts = [
- {"annotations": [{"category_id": 0}, {"category_id": 1}]},
- {"annotations": [{"category_id": 0}]},
- {"annotations": []},
- ]
-
- rep_factors = RepeatFactorTrainingSampler.repeat_factors_from_category_frequency(
- dataset_dicts, repeat_thresh
- )
-
- expected_rep_factors = torch.tensor([math.sqrt(3 / 2), 1.0, 1.0])
- self.assertTrue(torch.allclose(rep_factors, expected_rep_factors))
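For reference, the expected value follows from the repeat-factor rule $r(c) = \max(1, \sqrt{t / f(c)})$ with threshold $t = 0.5$ and the per-image maximum over the categories it contains; a quick check of the arithmetic (not part of the test itself):

```python
import math

repeat_thresh = 0.5
# category frequencies over the 3 images: cat 0 appears in 2/3, cat 1 in 1/3
cat_rep = {c: max(1.0, math.sqrt(repeat_thresh / f)) for c, f in {0: 2 / 3, 1: 1 / 3}.items()}
# per-image factor: max over its categories (1.0 for the empty image)
image_rep = [max(cat_rep[0], cat_rep[1]), cat_rep[0], 1.0]
print(image_rep)  # [1.2247..., 1.0, 1.0]  == [sqrt(3/2), 1, 1]
```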
-
-
-class TestInferenceSampler(unittest.TestCase):
- def test_local_indices(self):
- sizes = [0, 16, 2, 42]
- world_sizes = [5, 2, 3, 4]
-
- expected_results = [
- [range(0) for _ in range(5)],
- [range(8), range(8, 16)],
- [range(1), range(1, 2), range(0)],
- [range(11), range(11, 22), range(22, 32), range(32, 42)],
- ]
-
- for size, world_size, expected_result in zip(sizes, world_sizes, expected_results):
- with self.subTest(f"size={size}, world_size={world_size}"):
- local_indices = [
- InferenceSampler._get_local_indices(size, world_size, r)
- for r in range(world_size)
- ]
- self.assertEqual(local_indices, expected_result)
diff --git a/spaces/ccarr0807/HuggingGPT/awesome_chat.py b/spaces/ccarr0807/HuggingGPT/awesome_chat.py
deleted file mode 100644
index f9c880ba2491649ba185624a3ace0c6e4462a276..0000000000000000000000000000000000000000
--- a/spaces/ccarr0807/HuggingGPT/awesome_chat.py
+++ /dev/null
@@ -1,920 +0,0 @@
-import base64
-import copy
-from io import BytesIO
-import io
-import os
-import random
-import time
-import traceback
-import uuid
-import requests
-import re
-import json
-import logging
-import argparse
-import yaml
-from PIL import Image, ImageDraw
-from diffusers.utils import load_image
-from pydub import AudioSegment
-import threading
-from queue import Queue
-from get_token_ids import get_token_ids_for_task_parsing, get_token_ids_for_choose_model, count_tokens, get_max_context_length
-from huggingface_hub.inference_api import InferenceApi
-from huggingface_hub.inference_api import ALL_TASKS
-from models_server import models, status
-from functools import partial
-
-parser = argparse.ArgumentParser()
-parser.add_argument("--config", type=str, default="config.yaml.dev")
-parser.add_argument("--mode", type=str, default="cli")
-args = parser.parse_args()
-
-if __name__ != "__main__":
- args.config = "config.gradio.yaml"
-
-config = yaml.load(open(args.config, "r"), Loader=yaml.FullLoader)
-
-if not os.path.exists("logs"):
- os.mkdir("logs")
-
-logger = logging.getLogger(__name__)
-logger.setLevel(logging.DEBUG)
-
-handler = logging.StreamHandler()
-formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
-handler.setFormatter(formatter)
-if not config["debug"]:
- handler.setLevel(logging.INFO)
-logger.addHandler(handler)
-
-log_file = config["log_file"]
-if log_file:
- filehandler = logging.FileHandler(log_file)
- filehandler.setLevel(logging.DEBUG)
- filehandler.setFormatter(formatter)
- logger.addHandler(filehandler)
-
-LLM = config["model"]
-use_completion = config["use_completion"]
-
-# consistent: wrong msra model name
-LLM_encoding = LLM
-if LLM == "gpt-3.5-turbo":
- LLM_encoding = "text-davinci-003"
-task_parsing_highlight_ids = get_token_ids_for_task_parsing(LLM_encoding)
-choose_model_highlight_ids = get_token_ids_for_choose_model(LLM_encoding)
-
-# ENDPOINT MODEL NAME
-# /v1/chat/completions gpt-4, gpt-4-0314, gpt-4-32k, gpt-4-32k-0314, gpt-3.5-turbo, gpt-3.5-turbo-0301
-# /v1/completions text-davinci-003, text-davinci-002, text-curie-001, text-babbage-001, text-ada-001, davinci, curie, babbage, ada
-
-if use_completion:
- api_name = "completions"
-else:
- api_name = "chat/completions"
-
-if not config["dev"]:
- if not config["openai"]["key"].startswith("sk-") and not config["openai"]["key"]=="gradio":
- raise ValueError("Incrorrect OpenAI key. Please check your config.yaml file.")
- OPENAI_KEY = config["openai"]["key"]
- endpoint = f"https://api.openai.com/v1/{api_name}"
- if OPENAI_KEY.startswith("sk-"):
- HEADER = {
- "Authorization": f"Bearer {OPENAI_KEY}"
- }
- else:
- HEADER = None
-else:
- endpoint = f"{config['local']['endpoint']}/v1/{api_name}"
- HEADER = None
-
-PROXY = None
-if config["proxy"]:
- PROXY = {
- "https": config["proxy"],
- }
-
-inference_mode = config["inference_mode"]
-
-parse_task_demos_or_presteps = open(config["demos_or_presteps"]["parse_task"], "r").read()
-choose_model_demos_or_presteps = open(config["demos_or_presteps"]["choose_model"], "r").read()
-response_results_demos_or_presteps = open(config["demos_or_presteps"]["response_results"], "r").read()
-
-parse_task_prompt = config["prompt"]["parse_task"]
-choose_model_prompt = config["prompt"]["choose_model"]
-response_results_prompt = config["prompt"]["response_results"]
-
-parse_task_tprompt = config["tprompt"]["parse_task"]
-choose_model_tprompt = config["tprompt"]["choose_model"]
-response_results_tprompt = config["tprompt"]["response_results"]
-
-MODELS = [json.loads(line) for line in open("data/p0_models.jsonl", "r").readlines()]
-MODELS_MAP = {}
-for model in MODELS:
- tag = model["task"]
- if tag not in MODELS_MAP:
- MODELS_MAP[tag] = []
- MODELS_MAP[tag].append(model)
-METADATAS = {}
-for model in MODELS:
- METADATAS[model["id"]] = model
-
-def convert_chat_to_completion(data):
- messages = data.pop('messages', [])
- tprompt = ""
- if messages[0]['role'] == "system":
- tprompt = messages[0]['content']
- messages = messages[1:]
- final_prompt = ""
- for message in messages:
- if message['role'] == "user":
- final_prompt += (""+ "user" + "\n" + message['content'] + "\n")
- elif message['role'] == "assistant":
- final_prompt += (""+ "assistant" + "\n" + message['content'] + "\n")
- else:
- final_prompt += (""+ "system" + "\n" + message['content'] + "\n")
- final_prompt = tprompt + final_prompt
- final_prompt = final_prompt + "assistant"
- data["prompt"] = final_prompt
- data['stop'] = data.get('stop', [""])
- data['max_tokens'] = data.get('max_tokens', max(get_max_context_length(LLM) - count_tokens(LLM_encoding, final_prompt), 1))
- return data
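For illustration, here is roughly what the conversion produces for a toy chat payload (the messages are invented, and the call relies on the module-level config, so this is a sketch rather than a standalone script):

```python
data = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You are a task planner."},
        {"role": "user", "content": "Describe this image."},
    ],
}
converted = convert_chat_to_completion(dict(data))
print(converted["prompt"])
# You are a task planner.<im_start>user
# Describe this image.<im_end>
# <im_start>assistant
print(converted["stop"])  # ['<im_end>']
```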
-
-def send_request(data):
- global HEADER
- openaikey = data.pop("openaikey")
- if use_completion:
- data = convert_chat_to_completion(data)
- if openaikey and openaikey.startswith("sk-"):
- HEADER = {
- "Authorization": f"Bearer {openaikey}"
- }
-
- response = requests.post(endpoint, json=data, headers=HEADER, proxies=PROXY)
- logger.debug(response.text.strip())
- if "choices" not in response.json():
- return response.json()
- if use_completion:
- return response.json()["choices"][0]["text"].strip()
- else:
- return response.json()["choices"][0]["message"]["content"].strip()
-
-def replace_slot(text, entries):
- for key, value in entries.items():
- if not isinstance(value, str):
- value = str(value)
- text = text.replace("{{" + key +"}}", value.replace('"', "'").replace('\n', ""))
- return text
-
-def find_json(s):
- s = s.replace("\'", "\"")
- start = s.find("{")
- end = s.rfind("}")
- res = s[start:end+1]
- res = res.replace("\n", "")
- return res
-
-def field_extract(s, field):
- try:
- field_rep = re.compile(f'{field}.*?:.*?"(.*?)"', re.IGNORECASE)
- extracted = field_rep.search(s).group(1).replace("\"", "\'")
- except:
- field_rep = re.compile(f'{field}:\ *"(.*?)"', re.IGNORECASE)
- extracted = field_rep.search(s).group(1).replace("\"", "\'")
- return extracted
-
-def get_id_reason(choose_str):
- reason = field_extract(choose_str, "reason")
- id = field_extract(choose_str, "id")
- choose = {"id": id, "reason": reason}
- return id.strip(), reason.strip(), choose
-
-def record_case(success, **args):
- if success:
- f = open("logs/log_success.jsonl", "a")
- else:
- f = open("logs/log_fail.jsonl", "a")
- log = args
- f.write(json.dumps(log) + "\n")
- f.close()
-
-def image_to_bytes(img_url):
- img_byte = io.BytesIO()
- type = img_url.split(".")[-1]
- load_image(img_url).save(img_byte, format="png")
- img_data = img_byte.getvalue()
- return img_data
-
-def resource_has_dep(command):
- args = command["args"]
- for _, v in args.items():
- if "" in v:
- return True
- return False
-
-def fix_dep(tasks):
- for task in tasks:
- args = task["args"]
- task["dep"] = []
- for k, v in args.items():
- if "" in v:
- dep_task_id = int(v.split("-")[1])
- if dep_task_id not in task["dep"]:
- task["dep"].append(dep_task_id)
- if len(task["dep"]) == 0:
- task["dep"] = [-1]
- return tasks
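As a concrete (invented) example of how the `<GENERATED>` placeholders drive the dependency graph, `fix_dep` turns resource references back into `dep` lists:

```python
tasks = [
    {"task": "text-to-image", "id": 0, "dep": [], "args": {"text": "a red bicycle"}},
    {"task": "image-to-text", "id": 1, "dep": [], "args": {"image": "<GENERATED>-0"}},
]
tasks = fix_dep(tasks)
# task 0 has no upstream resource      -> dep == [-1]
# task 1 consumes the image of task 0  -> dep == [0]
```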
-
-def unfold(tasks):
- flag_unfold_task = False
- try:
- for task in tasks:
- for key, value in task["args"].items():
- if "" in value:
- generated_items = value.split(",")
- if len(generated_items) > 1:
- flag_unfold_task = True
- for item in generated_items:
- new_task = copy.deepcopy(task)
- dep_task_id = int(item.split("-")[1])
- new_task["dep"] = [dep_task_id]
- new_task["args"][key] = item
- tasks.append(new_task)
- tasks.remove(task)
- except Exception as e:
- print(e)
- traceback.print_exc()
- logger.debug("unfold task failed.")
-
- if flag_unfold_task:
- logger.debug(f"unfold tasks: {tasks}")
-
- return tasks
-
-def chitchat(messages, openaikey=None):
- data = {
- "model": LLM,
- "messages": messages,
- "openaikey": openaikey
- }
- return send_request(data)
-
-def parse_task(context, input, openaikey=None):
- demos_or_presteps = parse_task_demos_or_presteps
- messages = json.loads(demos_or_presteps)
- messages.insert(0, {"role": "system", "content": parse_task_tprompt})
-
- # cut chat logs
- start = 0
- while start <= len(context):
- history = context[start:]
- prompt = replace_slot(parse_task_prompt, {
- "input": input,
- "context": history
- })
- messages.append({"role": "user", "content": prompt})
- history_text = "\nuser".join([m["content"] for m in messages])
- num = count_tokens(LLM_encoding, history_text)
- if get_max_context_length(LLM) - num > 800:
- break
- messages.pop()
- start += 2
-
- logger.debug(messages)
- data = {
- "model": LLM,
- "messages": messages,
- "temperature": 0,
- "logit_bias": {item: config["logit_bias"]["parse_task"] for item in task_parsing_highlight_ids},
- "openaikey": openaikey
- }
- return send_request(data)
-
-def choose_model(input, task, metas, openaikey = None):
- prompt = replace_slot(choose_model_prompt, {
- "input": input,
- "task": task,
- "metas": metas,
- })
- demos_or_presteps = replace_slot(choose_model_demos_or_presteps, {
- "input": input,
- "task": task,
- "metas": metas
- })
- messages = json.loads(demos_or_presteps)
- messages.insert(0, {"role": "system", "content": choose_model_tprompt})
- messages.append({"role": "user", "content": prompt})
- logger.debug(messages)
- data = {
- "model": LLM,
- "messages": messages,
- "temperature": 0,
- "logit_bias": {item: config["logit_bias"]["choose_model"] for item in choose_model_highlight_ids}, # 5
- "openaikey": openaikey
- }
- return send_request(data)
-
-
-def response_results(input, results, openaikey=None):
- results = [v for k, v in sorted(results.items(), key=lambda item: item[0])]
- prompt = replace_slot(response_results_prompt, {
- "input": input,
- })
- demos_or_presteps = replace_slot(response_results_demos_or_presteps, {
- "input": input,
- "processes": results
- })
- messages = json.loads(demos_or_presteps)
- messages.insert(0, {"role": "system", "content": response_results_tprompt})
- messages.append({"role": "user", "content": prompt})
- logger.debug(messages)
- data = {
- "model": LLM,
- "messages": messages,
- "temperature": 0,
- "openaikey": openaikey
- }
- return send_request(data)
-
-def huggingface_model_inference(model_id, data, task, huggingfacetoken=None):
- if huggingfacetoken is None:
- HUGGINGFACE_HEADERS = {}
- else:
- HUGGINGFACE_HEADERS = {
- "Authorization": f"Bearer {huggingfacetoken}",
- }
- task_url = f"https://api-inference.huggingface.co/models/{model_id}" # InferenceApi does not yet support some tasks
- inference = InferenceApi(repo_id=model_id, token=huggingfacetoken)
-
- # NLP tasks
- if task == "question-answering":
- inputs = {"question": data["text"], "context": (data["context"] if "context" in data else "" )}
- result = inference(inputs)
- if task == "sentence-similarity":
- inputs = {"source_sentence": data["text1"], "target_sentence": data["text2"]}
- result = inference(inputs)
- if task in ["text-classification", "token-classification", "text2text-generation", "summarization", "translation", "conversational", "text-generation"]:
- inputs = data["text"]
- result = inference(inputs)
-
- # CV tasks
- if task == "visual-question-answering" or task == "document-question-answering":
- img_url = data["image"]
- text = data["text"]
- img_data = image_to_bytes(img_url)
- img_base64 = base64.b64encode(img_data).decode("utf-8")
- json_data = {}
- json_data["inputs"] = {}
- json_data["inputs"]["question"] = text
- json_data["inputs"]["image"] = img_base64
- result = requests.post(task_url, headers=HUGGINGFACE_HEADERS, json=json_data).json()
- # result = inference(inputs) # not support
-
- if task == "image-to-image":
- img_url = data["image"]
- img_data = image_to_bytes(img_url)
- # result = inference(data=img_data) # not support
- HUGGINGFACE_HEADERS["Content-Length"] = str(len(img_data))
- r = requests.post(task_url, headers=HUGGINGFACE_HEADERS, data=img_data)
- result = r.json()
- if "path" in result:
- result["generated image"] = result.pop("path")
-
- if task == "text-to-image":
- inputs = data["text"]
- img = inference(inputs)
- name = str(uuid.uuid4())[:4]
- img.save(f"public/images/{name}.png")
- result = {}
- result["generated image"] = f"/images/{name}.png"
-
- if task == "image-segmentation":
- img_url = data["image"]
- img_data = image_to_bytes(img_url)
- image = Image.open(BytesIO(img_data))
- predicted = inference(data=img_data)
- colors = []
- for i in range(len(predicted)):
- colors.append((random.randint(100, 255), random.randint(100, 255), random.randint(100, 255), 155))
- for i, pred in enumerate(predicted):
- label = pred["label"]
- mask = pred.pop("mask").encode("utf-8")
- mask = base64.b64decode(mask)
- mask = Image.open(BytesIO(mask), mode='r')
- mask = mask.convert('L')
-
- layer = Image.new('RGBA', mask.size, colors[i])
- image.paste(layer, (0, 0), mask)
- name = str(uuid.uuid4())[:4]
- image.save(f"public/images/{name}.jpg")
- result = {}
- result["generated image with segmentation mask"] = f"/images/{name}.jpg"
- result["predicted"] = predicted
-
- if task == "object-detection":
- img_url = data["image"]
- img_data = image_to_bytes(img_url)
- predicted = inference(data=img_data)
- image = Image.open(BytesIO(img_data))
- draw = ImageDraw.Draw(image)
- labels = list(item['label'] for item in predicted)
- color_map = {}
- for label in labels:
- if label not in color_map:
- color_map[label] = (random.randint(0, 255), random.randint(0, 100), random.randint(0, 255))
- for label in predicted:
- box = label["box"]
- draw.rectangle(((box["xmin"], box["ymin"]), (box["xmax"], box["ymax"])), outline=color_map[label["label"]], width=2)
- draw.text((box["xmin"]+5, box["ymin"]-15), label["label"], fill=color_map[label["label"]])
- name = str(uuid.uuid4())[:4]
- image.save(f"public/images/{name}.jpg")
- result = {}
- result["generated image with predicted box"] = f"/images/{name}.jpg"
- result["predicted"] = predicted
-
- if task in ["image-classification"]:
- img_url = data["image"]
- img_data = image_to_bytes(img_url)
- result = inference(data=img_data)
-
- if task == "image-to-text":
- img_url = data["image"]
- img_data = image_to_bytes(img_url)
- HUGGINGFACE_HEADERS["Content-Length"] = str(len(img_data))
- r = requests.post(task_url, headers=HUGGINGFACE_HEADERS, data=img_data)
- result = {}
- if "generated_text" in r.json()[0]:
- result["generated text"] = r.json()[0].pop("generated_text")
-
- # AUDIO tasks
- if task == "text-to-speech":
- inputs = data["text"]
- response = inference(inputs, raw_response=True)
- # response = requests.post(task_url, headers=HUGGINGFACE_HEADERS, json={"inputs": text})
- name = str(uuid.uuid4())[:4]
- with open(f"public/audios/{name}.flac", "wb") as f:
- f.write(response.content)
- result = {"generated audio": f"/audios/{name}.flac"}
- if task in ["automatic-speech-recognition", "audio-to-audio", "audio-classification"]:
- audio_url = data["audio"]
- audio_data = requests.get(audio_url, timeout=10).content
- response = inference(data=audio_data, raw_response=True)
- result = response.json()
- if task == "audio-to-audio":
- content = None
- type = None
- for k, v in result[0].items():
- if k == "blob":
- content = base64.b64decode(v.encode("utf-8"))
- if k == "content-type":
- type = "audio/flac".split("/")[-1]
- audio = AudioSegment.from_file(BytesIO(content))
- name = str(uuid.uuid4())[:4]
- audio.export(f"public/audios/{name}.{type}", format=type)
- result = {"generated audio": f"/audios/{name}.{type}"}
- return result
-
-def local_model_inference(model_id, data, task):
- inference = partial(models, model_id)
- # contronlet
- if model_id.startswith("lllyasviel/sd-controlnet-"):
- img_url = data["image"]
- text = data["text"]
- results = inference({"img_url": img_url, "text": text})
- if "path" in results:
- results["generated image"] = results.pop("path")
- return results
- if model_id.endswith("-control"):
- img_url = data["image"]
- results = inference({"img_url": img_url})
- if "path" in results:
- results["generated image"] = results.pop("path")
- return results
-
- if task == "text-to-video":
- results = inference(data)
- if "path" in results:
- results["generated video"] = results.pop("path")
- return results
-
- # NLP tasks
- if task == "question-answering" or task == "sentence-similarity":
- results = inference(json=data)
- return results
- if task in ["text-classification", "token-classification", "text2text-generation", "summarization", "translation", "conversational", "text-generation"]:
- results = inference(json=data)
- return results
-
- # CV tasks
- if task == "depth-estimation":
- img_url = data["image"]
- results = inference({"img_url": img_url})
- if "path" in results:
- results["generated depth image"] = results.pop("path")
- return results
- if task == "image-segmentation":
- img_url = data["image"]
- results = inference({"img_url": img_url})
- results["generated image with segmentation mask"] = results.pop("path")
- return results
- if task == "image-to-image":
- img_url = data["image"]
- results = inference({"img_url": img_url})
- if "path" in results:
- results["generated image"] = results.pop("path")
- return results
- if task == "text-to-image":
- results = inference(data)
- if "path" in results:
- results["generated image"] = results.pop("path")
- return results
- if task == "object-detection":
- img_url = data["image"]
- predicted = inference({"img_url": img_url})
- if "error" in predicted:
- return predicted
- image = load_image(img_url)
- draw = ImageDraw.Draw(image)
- labels = list(item['label'] for item in predicted)
- color_map = {}
- for label in labels:
- if label not in color_map:
- color_map[label] = (random.randint(0, 255), random.randint(0, 100), random.randint(0, 255))
- for label in predicted:
- box = label["box"]
- draw.rectangle(((box["xmin"], box["ymin"]), (box["xmax"], box["ymax"])), outline=color_map[label["label"]], width=2)
- draw.text((box["xmin"]+5, box["ymin"]-15), label["label"], fill=color_map[label["label"]])
- name = str(uuid.uuid4())[:4]
- image.save(f"public/images/{name}.jpg")
- results = {}
- results["generated image with predicted box"] = f"/images/{name}.jpg"
- results["predicted"] = predicted
- return results
- if task in ["image-classification", "image-to-text", "document-question-answering", "visual-question-answering"]:
- img_url = data["image"]
- text = None
- if "text" in data:
- text = data["text"]
- results = inference({"img_url": img_url, "text": text})
- return results
- # AUDIO tasks
- if task == "text-to-speech":
- results = inference(data)
- if "path" in results:
- results["generated audio"] = results.pop("path")
- return results
- if task in ["automatic-speech-recognition", "audio-to-audio", "audio-classification"]:
- audio_url = data["audio"]
- results = inference({"audio_url": audio_url})
- return results
-
-
-def model_inference(model_id, data, hosted_on, task, huggingfacetoken=None):
- if huggingfacetoken:
- HUGGINGFACE_HEADERS = {
- "Authorization": f"Bearer {huggingfacetoken}",
- }
- else:
- HUGGINGFACE_HEADERS = None
- if hosted_on == "unknown":
- r = status(model_id)
- logger.debug("Local Server Status: " + str(r))
- if "loaded" in r and r["loaded"]:
- hosted_on = "local"
- else:
- huggingfaceStatusUrl = f"https://api-inference.huggingface.co/status/{model_id}"
- r = requests.get(huggingfaceStatusUrl, headers=HUGGINGFACE_HEADERS, proxies=PROXY).json()
- logger.debug("Huggingface Status: " + str(r))
- if "loaded" in r and r["loaded"]:
- hosted_on = "huggingface"
- try:
- if hosted_on == "local":
- inference_result = local_model_inference(model_id, data, task)
- elif hosted_on == "huggingface":
- inference_result = huggingface_model_inference(model_id, data, task, huggingfacetoken)
- except Exception as e:
- print(e)
- traceback.print_exc()
- inference_result = {"error":{"message": str(e)}}
- return inference_result
-
-
-def get_model_status(model_id, url, headers, queue = None):
- endpoint_type = "huggingface" if "huggingface" in url else "local"
- if "huggingface" in url:
- r = requests.get(url, headers=headers, proxies=PROXY).json()
- else:
- r = status(model_id)
- if "loaded" in r and r["loaded"]:
- if queue:
- queue.put((model_id, True, endpoint_type))
- return True
- else:
- if queue:
- queue.put((model_id, False, None))
- return False
-
-def get_avaliable_models(candidates, topk=10, huggingfacetoken = None):
- all_available_models = {"local": [], "huggingface": []}
- threads = []
- result_queue = Queue()
- HUGGINGFACE_HEADERS = {
- "Authorization": f"Bearer {huggingfacetoken}",
- }
- for candidate in candidates:
- model_id = candidate["id"]
-
- if inference_mode != "local":
- huggingfaceStatusUrl = f"https://api-inference.huggingface.co/status/{model_id}"
- thread = threading.Thread(target=get_model_status, args=(model_id, huggingfaceStatusUrl, HUGGINGFACE_HEADERS, result_queue))
- threads.append(thread)
- thread.start()
-
- if inference_mode != "huggingface" and config["local_deployment"] != "minimal":
- thread = threading.Thread(target=get_model_status, args=(model_id, "", {}, result_queue))
- threads.append(thread)
- thread.start()
-
- result_count = len(threads)
- while result_count:
- model_id, status, endpoint_type = result_queue.get()
- if status and model_id not in all_available_models["local"] + all_available_models["huggingface"]:
- all_available_models[endpoint_type].append(model_id)
- if len(all_available_models["local"] + all_available_models["huggingface"]) >= topk:
- break
- result_count -= 1
-
- for thread in threads:
- thread.join()
-
- return all_available_models
-
-def collect_result(command, choose, inference_result):
- result = {"task": command}
- result["inference result"] = inference_result
- result["choose model result"] = choose
- logger.debug(f"inference result: {inference_result}")
- return result
-
-
-def run_task(input, command, results, openaikey = None, huggingfacetoken = None):
- id = command["id"]
- args = command["args"]
- task = command["task"]
- deps = command["dep"]
- if deps[0] != -1:
- dep_tasks = [results[dep] for dep in deps]
- else:
- dep_tasks = []
-
- logger.debug(f"Run task: {id} - {task}")
- logger.debug("Deps: " + json.dumps(dep_tasks))
-
- if deps[0] != -1:
- if "image" in args and "-" in args["image"]:
- resource_id = int(args["image"].split("-")[1])
- if "generated image" in results[resource_id]["inference result"]:
- args["image"] = results[resource_id]["inference result"]["generated image"]
- if "audio" in args and "-" in args["audio"]:
- resource_id = int(args["audio"].split("-")[1])
- if "generated audio" in results[resource_id]["inference result"]:
- args["audio"] = results[resource_id]["inference result"]["generated audio"]
- if "text" in args and "-" in args["text"]:
- resource_id = int(args["text"].split("-")[1])
- if "generated text" in results[resource_id]["inference result"]:
- args["text"] = results[resource_id]["inference result"]["generated text"]
-
- text = image = audio = None
- for dep_task in dep_tasks:
- if "generated text" in dep_task["inference result"]:
- text = dep_task["inference result"]["generated text"]
- logger.debug("Detect the generated text of dependency task (from results):" + text)
- elif "text" in dep_task["task"]["args"]:
- text = dep_task["task"]["args"]["text"]
- logger.debug("Detect the text of dependency task (from args): " + text)
- if "generated image" in dep_task["inference result"]:
- image = dep_task["inference result"]["generated image"]
- logger.debug("Detect the generated image of dependency task (from results): " + image)
- elif "image" in dep_task["task"]["args"]:
- image = dep_task["task"]["args"]["image"]
- logger.debug("Detect the image of dependency task (from args): " + image)
- if "generated audio" in dep_task["inference result"]:
- audio = dep_task["inference result"]["generated audio"]
- logger.debug("Detect the generated audio of dependency task (from results): " + audio)
- elif "audio" in dep_task["task"]["args"]:
- audio = dep_task["task"]["args"]["audio"]
- logger.debug("Detect the audio of dependency task (from args): " + audio)
-
- if "image" in args and "" in args["image"]:
- if image:
- args["image"] = image
- if "audio" in args and "" in args["audio"]:
- if audio:
- args["audio"] = audio
- if "text" in args and "" in args["text"]:
- if text:
- args["text"] = text
-
- for resource in ["image", "audio"]:
- if resource in args and not args[resource].startswith("public/") and len(args[resource]) > 0 and not args[resource].startswith("http"):
- args[resource] = f"public/{args[resource]}"
-
- if "-text-to-image" in command['task'] and "text" not in args:
- logger.debug("control-text-to-image task, but text is empty, so we use control-generation instead.")
- control = task.split("-")[0]
-
- if control == "seg":
- task = "image-segmentation"
- command['task'] = task
- elif control == "depth":
- task = "depth-estimation"
- command['task'] = task
- else:
- task = f"{control}-control"
-
- command["args"] = args
- logger.debug(f"parsed task: {command}")
-
- if task.endswith("-text-to-image") or task.endswith("-control"):
- if inference_mode != "huggingface":
- if task.endswith("-text-to-image"):
- control = task.split("-")[0]
- best_model_id = f"lllyasviel/sd-controlnet-{control}"
- else:
- best_model_id = task
- hosted_on = "local"
- reason = "ControlNet is the best model for this task."
- choose = {"id": best_model_id, "reason": reason}
- logger.debug(f"chosen model: {choose}")
- else:
- logger.warning(f"Task {command['task']} is not available. ControlNet need to be deployed locally.")
- record_case(success=False, **{"input": input, "task": command, "reason": f"Task {command['task']} is not available. ControlNet need to be deployed locally.", "op":"message"})
- inference_result = {"error": f"service related to ControlNet is not available."}
- results[id] = collect_result(command, "", inference_result)
- return False
- elif task in ["summarization", "translation", "conversational", "text-generation", "text2text-generation"]: # ChatGPT Can do
- best_model_id = "ChatGPT"
- reason = "ChatGPT performs well on some NLP tasks as well."
- choose = {"id": best_model_id, "reason": reason}
- messages = [{
- "role": "user",
- "content": f"[ {input} ] contains a task in JSON format {command}, 'task' indicates the task type and 'args' indicates the arguments required for the task. Don't explain the task to me, just help me do it and give me the result. The result must be in text form without any urls."
- }]
- response = chitchat(messages, openaikey)
- results[id] = collect_result(command, choose, {"response": response})
- return True
- else:
- if task not in MODELS_MAP:
- logger.warning(f"no available models on {task} task.")
- record_case(success=False, **{"input": input, "task": command, "reason": f"task not support: {command['task']}", "op":"message"})
- inference_result = {"error": f"{command['task']} not found in available tasks."}
- results[id] = collect_result(command, "", inference_result)
- return False
-
- candidates = MODELS_MAP[task][:20]
- all_avaliable_models = get_avaliable_models(candidates, config["num_candidate_models"], huggingfacetoken)
- all_avaliable_model_ids = all_avaliable_models["local"] + all_avaliable_models["huggingface"]
- logger.debug(f"avaliable models on {command['task']}: {all_avaliable_models}")
-
- if len(all_avaliable_model_ids) == 0:
- logger.warning(f"no available models on {command['task']}")
- record_case(success=False, **{"input": input, "task": command, "reason": f"no available models: {command['task']}", "op":"message"})
- inference_result = {"error": f"no available models on {command['task']} task."}
- results[id] = collect_result(command, "", inference_result)
- return False
-
- if len(all_avaliable_model_ids) == 1:
- best_model_id = all_avaliable_model_ids[0]
- hosted_on = "local" if best_model_id in all_avaliable_models["local"] else "huggingface"
- reason = "Only one model available."
- choose = {"id": best_model_id, "reason": reason}
- logger.debug(f"chosen model: {choose}")
- else:
- cand_models_info = [
- {
- "id": model["id"],
- "inference endpoint": all_avaliable_models.get(
- "local" if model["id"] in all_avaliable_models["local"] else "huggingface"
- ),
- "likes": model.get("likes"),
- "description": model.get("description", "")[:config["max_description_length"]],
- "language": model.get("language"),
- "tags": model.get("tags"),
- }
- for model in candidates
- if model["id"] in all_avaliable_model_ids
- ]
-
- choose_str = choose_model(input, command, cand_models_info, openaikey)
- logger.debug(f"chosen model: {choose_str}")
- try:
- choose = json.loads(choose_str)
- reason = choose["reason"]
- best_model_id = choose["id"]
- hosted_on = "local" if best_model_id in all_avaliable_models["local"] else "huggingface"
- except Exception as e:
- logger.warning(f"the response [ {choose_str} ] is not a valid JSON, try to find the model id and reason in the response.")
- choose_str = find_json(choose_str)
- best_model_id, reason, choose = get_id_reason(choose_str)
- hosted_on = "local" if best_model_id in all_avaliable_models["local"] else "huggingface"
- inference_result = model_inference(best_model_id, args, hosted_on, command['task'], huggingfacetoken)
-
- if "error" in inference_result:
- logger.warning(f"Inference error: {inference_result['error']}")
- record_case(success=False, **{"input": input, "task": command, "reason": f"inference error: {inference_result['error']}", "op":"message"})
- results[id] = collect_result(command, choose, inference_result)
- return False
-
- results[id] = collect_result(command, choose, inference_result)
- return True
-
-def chat_huggingface(messages, openaikey = None, huggingfacetoken = None, return_planning = False, return_results = False):
- start = time.time()
- context = messages[:-1]
- input = messages[-1]["content"]
- logger.info("*"*80)
- logger.info(f"input: {input}")
-
- task_str = parse_task(context, input, openaikey)
- logger.info(task_str)
-
- if "error" in task_str:
- return str(task_str), {}
- else:
- task_str = task_str.strip()
-
- try:
- tasks = json.loads(task_str)
- except Exception as e:
- logger.debug(e)
- response = chitchat(messages, openaikey)
- record_case(success=False, **{"input": input, "task": task_str, "reason": "task parsing fail", "op":"chitchat"})
- return response, {}
-
- if task_str == "[]": # using LLM response for empty task
- record_case(success=False, **{"input": input, "task": [], "reason": "task parsing fail: empty", "op": "chitchat"})
- response = chitchat(messages, openaikey)
- return response, {}
-
- if len(tasks)==1 and tasks[0]["task"] in ["summarization", "translation", "conversational", "text-generation", "text2text-generation"]:
- record_case(success=True, **{"input": input, "task": tasks, "reason": "task parsing fail: empty", "op": "chitchat"})
- response = chitchat(messages, openaikey)
- best_model_id = "ChatGPT"
- reason = "ChatGPT performs well on some NLP tasks as well."
- choose = {"id": best_model_id, "reason": reason}
- return response, collect_result(tasks[0], choose, {"response": response})
-
-
- tasks = unfold(tasks)
- tasks = fix_dep(tasks)
- logger.debug(tasks)
-
- if return_planning:
- return tasks
-
- results = {}
- threads = []
- tasks = tasks[:]
- d = dict()
- retry = 0
- while True:
- num_threads = len(threads)
- for task in tasks:
- dep = task["dep"]
- # logger.debug(f"d.keys(): {d.keys()}, dep: {dep}")
- for dep_id in dep:
- if dep_id >= task["id"]:
- task["dep"] = [-1]
- dep = [-1]
- break
- if len(list(set(dep).intersection(d.keys()))) == len(dep) or dep[0] == -1:
- tasks.remove(task)
- thread = threading.Thread(target=run_task, args=(input, task, d, openaikey, huggingfacetoken))
- thread.start()
- threads.append(thread)
- if num_threads == len(threads):
- time.sleep(0.5)
- retry += 1
- if retry > 160:
- logger.debug("User has waited too long, Loop break.")
- break
- if len(tasks) == 0:
- break
- for thread in threads:
- thread.join()
-
- results = d.copy()
-
- logger.debug(results)
- if return_results:
- return results
-
- response = response_results(input, results, openaikey).strip()
-
- end = time.time()
- during = end - start
-
- answer = {"message": response}
- record_case(success=True, **{"input": input, "task": task_str, "results": results, "response": response, "during": during, "op":"response"})
- logger.info(f"response: {response}")
- return response, results
\ No newline at end of file
diff --git a/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/README_ZH.md b/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/README_ZH.md
deleted file mode 100644
index 269546ccb91643bef62d872b39a90bf19a8393aa..0000000000000000000000000000000000000000
--- a/spaces/cccccch/VITS-fast-fine-tuning-DingZhen/README_ZH.md
+++ /dev/null
@@ -1,60 +0,0 @@
-English Documentation Please Click [here](https://github.com/Plachtaa/VITS-fast-fine-tuning/blob/main/README.md)
-# VITS Fast Fine-tuning
-This repository walks you through adding custom characters (even your own voice) to a pretrained VITS model. After less than an hour of fine-tuning, the model can:
-1. Perform voice conversion between any two speakers contained in the model.
-2. Synthesize Chinese, Japanese and English speech (TTS) in the voice of the character you added.
-
-The base models used in this project cover common anime-style male/female voices (from the Genshin Impact dataset) as well as common real-world male/female voices (from the VCTK dataset). They support Chinese, Japanese and English, so fine-tuning adapts quickly to new voices.
-
-Feel free to try out the base models used for fine-tuning!
-
-Chinese/Japanese/English: [Plachta/VITS-Umamusume-voice-synthesizer](https://huggingface.co/spaces/Plachta/VITS-Umamusume-voice-synthesizer) Author: me
-
-Chinese/Japanese: [sayashi/vits-uma-genshin-honkai](https://huggingface.co/spaces/sayashi/vits-uma-genshin-honkai) Author: [SayaSS](https://github.com/SayaSS)
-
-### Currently supported tasks:
-- [x] Clone a character's voice from 10 or more short audio clips
-- [x] Clone a character's voice from a long audio recording of 3 minutes or more (each file must contain a single speaker)
-- [x] Clone a character's voice from a video of 3 minutes or more (each video must contain a single speaker)
-- [x] Clone a character's voice from a bilibili video link (each video must contain a single speaker)
-
-### Characters currently supported for voice conversion and Chinese/Japanese/English TTS
-- [x] Any character (as long as you have voice samples for it)
-(Note: voice conversion only works between two speakers that both exist in the model)
-
-
-
-
-## Fine-tuning
-We recommend using [Google Colab](https://colab.research.google.com/drive/1pn1xnFfdLK63gVXDwV4zCXfVeo8c-I-0?usp=sharing)
-for the fine-tuning task, because some of VITS's multilingual dependencies are quite hard to set up locally.
-### How long does it take in Google Colab?
-1. Install dependencies (3 min)
-2. Choose a pretrained base model; see the [Colab notebook](https://colab.research.google.com/drive/1pn1xnFfdLK63gVXDwV4zCXfVeo8c-I-0?usp=sharing) for the differences between them.
-3. Upload the voices of the additional characters you want to add; see [DATA.MD](https://github.com/Plachtaa/VITS-fast-fine-tuning/blob/main/DATA.MD) for detailed upload instructions.
-4. Run the fine-tuning. Depending on the chosen method and the number of samples, this takes anywhere from 20 minutes to 2 hours.
-
-Once fine-tuning is finished, you can download the fine-tuned model and run it locally later on (no GPU required).
-
-## Running inference locally
-0. Remember to download the fine-tuned model and its config file!
-1. Download the latest release package (on the right-hand side of the GitHub page).
-2. Put the downloaded model and config file into the `inference` folder, named `G_latest.pth` and `finetune_speaker.json` respectively.
-3. Once everything is in place, the file structure should look like this:
-```
-inference
-├───inference.exe
-├───...
-├───finetune_speaker.json
-└───G_latest.pth
-```
-4. Run `inference.exe`; a browser window will pop up automatically. Note that the path to the executable must not contain Chinese characters or spaces.
-
-## Using the model with MoeGoe
-0. MoeGoe and similar VITS inference UIs use a slightly different config format; the files to download are the model `G_latest.pth` and the config file `moegoe_config.json`.
-1. Configure the paths as described on the [MoeGoe](https://github.com/CjangCjengh/MoeGoe) page and it is ready to use.
-2. When entering a sentence in MoeGoe, wrap it with the corresponding language tag so synthesis works correctly ([JA] for Japanese, [ZH] for Chinese, [EN] for English), for example:
-[JA]こんにちわ。[JA]
-[ZH]你好![ZH]
-[EN]Hello![EN]
-
diff --git a/spaces/chaowei100/ChatGPT_Taiyi-Stable-Diffusion/app.py b/spaces/chaowei100/ChatGPT_Taiyi-Stable-Diffusion/app.py
deleted file mode 100644
index 8ddfd37f6c6d264befdc815a2d9bcc6894fddb44..0000000000000000000000000000000000000000
--- a/spaces/chaowei100/ChatGPT_Taiyi-Stable-Diffusion/app.py
+++ /dev/null
@@ -1,140 +0,0 @@
-import os; os.environ['no_proxy'] = '*' # avoid unexpected pollution from proxy settings
-import gradio as gr
-from predict import predict
-from funtional_picture import infer_text2img
-from toolbox import format_io, find_free_port, get_conf
-import numpy as np
-
-# It is recommended to keep your secrets (API key, proxy URL) in a separate config_private.py so they are not accidentally pushed to GitHub
-proxies, WEB_PORT, LLM_MODEL, CONCURRENT_COUNT, AUTHENTICATION, CHATBOT_HEIGHT = \
- get_conf('proxies', 'WEB_PORT', 'LLM_MODEL', 'CONCURRENT_COUNT', 'AUTHENTICATION', 'CHATBOT_HEIGHT')
-
-# If WEB_PORT is -1, pick a random web port
-PORT = find_free_port() if WEB_PORT <= 0 else WEB_PORT
-if not AUTHENTICATION: AUTHENTICATION = None
-
-initial_prompt = "Serve me as a writing and programming assistant."
-title_html = "展示你的机器学习模型 "
-description = """"""
-
-# Query logging; Python 3.9+ recommended (the newer the better)
-import logging
-os.makedirs("work_log", exist_ok=True)
-try:logging.basicConfig(filename="work_log/chat_secrets.log", level=logging.INFO, encoding="utf-8")
-except:logging.basicConfig(filename="gpt_log/chat_secrets.log", level=logging.INFO)
-print("所有问询记录将自动保存在本地目录./gpt_log/chat_secrets.log, 请注意自我隐私保护哦!")
-
-# Some basic functional modules
-from functional import get_functionals
-functional = get_functionals()
-
-
-
-# Handle conversion of markdown-formatted text
-gr.Chatbot.postprocess = format_io
-
-# Make some appearance and color adjustments
-from theme import adjust_theme, advanced_css
-set_theme = adjust_theme()
-
-cancel_handles = []
-with gr.Blocks(theme=set_theme, analytics_enabled=False, css=advanced_css) as demo:
- gr.HTML(title_html)
- with gr.Tab("ChatGPT"):
- with gr.Row().style(equal_height=True):
- with gr.Column(scale=2):
- chatbot = gr.Chatbot()
- chatbot.style(height=CHATBOT_HEIGHT/2)
- history = gr.State([])
- with gr.Row():
- txt = gr.Textbox(show_label=False, placeholder="Input question here.").style(container=False)
- with gr.Row():
- submitBtn = gr.Button("提交", variant="primary")
- with gr.Row():
- resetBtn = gr.Button("重置", variant="secondary");
- resetBtn.style(size="sm")
- stopBtn = gr.Button("停止", variant="secondary");
- stopBtn.style(size="sm")
-
- with gr.Column(scale=1):
- with gr.Row():
- from check_proxy import check_proxy
- status = gr.Markdown(f"Tip: 按Enter提交, 按Shift+Enter换行。当前模型: {LLM_MODEL} \n {check_proxy(proxies)}")
- with gr.Accordion("基础功能区", open=True) as area_basic_fn:
- with gr.Row():
- for k in functional:
- variant = functional[k]["Color"] if "Color" in functional[k] else "secondary"
- functional[k]["Button"] = gr.Button(k, variant=variant)
- with gr.Accordion("展开SysPrompt & 交互界面布局 & Github地址", open=True):
- system_prompt = gr.Textbox(show_label=True, placeholder=f"System Prompt", label="System prompt", value=initial_prompt)
- top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.01,interactive=True, label="Top-p (nucleus sampling)",)
- temperature = gr.Slider(minimum=-0, maximum=2.0, value=1.0, step=0.01, interactive=True, label="Temperature",)
- checkboxes = gr.CheckboxGroup(["基础功能区", "函数插件区"], value=["基础功能区", "函数插件区"], label="显示/隐藏功能区")
- gr.Markdown(description)
- with gr.Tab("AI绘画"):
- examples = [
- ["铁马冰河入梦来, 梦幻, 插画"],
- ["东临碣石, 以观沧海, 波涛汹涌, 插画"],
- ["孤帆远影碧空尽,惟见长江天际流,油画"],
- ["动漫化,帅气,插画"],
- ["女孩背影, 日落, 唯美插画"],
- ]
- with gr.Row():
- with gr.Column(scale=1, ):
- image_out = gr.Image(label='输出(output)')
- with gr.Column(scale=1, ):
- image_in = gr.Image(source='upload', elem_id="image_upload", type="pil", label="参考图(非必须)(ref)")
- prompt = gr.Textbox(label='提示词(prompt)')
- submit_btn = gr.Button("生成图像(Generate)")
- with gr.Row(scale=0.5):
- guide = gr.Slider(2, 15, value=7, step=0.1, label='文本引导强度(guidance scale)')
- steps = gr.Slider(10, 30, value=20, step=1, label='迭代次数(inference steps)')
- width = gr.Slider(384, 640, value=512, step=64, label='宽度(width)')
- height = gr.Slider(384, 640, value=512, step=64, label='高度(height)')
- strength = gr.Slider(0, 1.0, value=0.8, step=0.02, label='参考图改变程度(strength)')
- ex = gr.Examples(examples, fn=infer_text2img, inputs=[prompt, guide, steps, width, height],
- outputs=image_out)
-
- submit_btn.click(fn=infer_text2img, inputs=[prompt, guide, steps, width, height, image_in, strength],
- outputs=image_out)
-
- # demo.queue(concurrency_count=1, max_size=8).launch()
-
-
- # Interaction between the show/hide checkboxes and the function areas
- def fn_area_visibility(a):
- ret = {}
- ret.update({area_basic_fn: gr.update(visible=("基础功能区" in a))})
- return ret
-
- checkboxes.select(fn_area_visibility, [checkboxes], [area_basic_fn])
- # Collect the widget handle combinations that are used repeatedly
- input_combo = [txt, top_p, temperature, chatbot, history, system_prompt]
- output_combo = [chatbot, history, status]
- predict_args = dict(fn=predict, inputs=input_combo, outputs=output_combo)
- empty_txt_args = dict(fn=lambda: "", inputs=[], outputs=[txt]) # used to clear the input box after submitting
- # Submit button, reset button
- cancel_handles.append(txt.submit(**predict_args)) #; txt.submit(**empty_txt_args) clear the input box after submitting
- cancel_handles.append(submitBtn.click(**predict_args)) #; submitBtn.click(**empty_txt_args) clear the input box after submitting
- resetBtn.click(lambda: ([], [], "已重置"), None, output_combo)
- # Register callbacks for the basic function area
- for k in functional:
- click_handle = functional[k]["Button"].click(predict, [*input_combo, gr.State(True), gr.State(k)], output_combo)
- cancel_handles.append(click_handle)
- cancel_handles.append(click_handle)
- # Register the callback for the stop button
- stopBtn.click(fn=None, inputs=None, outputs=None, cancels=cancel_handles)
-
-
-
-# gradio's inbrowser option is unreliable, so fall back to the original browser-opening helper
-def auto_opentab_delay():
- import threading, webbrowser, time
- print(f"如果浏览器没有自动打开,请复制并转到以下URL: http://localhost:{PORT}")
- def open():
- time.sleep(2)
- webbrowser.open_new_tab(f"http://localhost:{PORT}")
- threading.Thread(target=open, name="open-browser", daemon=True).start()
-auto_opentab_delay()
-demo.title = "展示你的机器学习模型"
-demo.queue(concurrency_count=CONCURRENT_COUNT).launch()
diff --git a/spaces/chasetank/manual_assistant/InnovationHub/llm/vector_store.py b/spaces/chasetank/manual_assistant/InnovationHub/llm/vector_store.py
deleted file mode 100644
index 35513a8c51f7f3d3b8351d143c71e88312d83df8..0000000000000000000000000000000000000000
--- a/spaces/chasetank/manual_assistant/InnovationHub/llm/vector_store.py
+++ /dev/null
@@ -1,179 +0,0 @@
-import plotly.graph_objs as go
-from sklearn.cluster import KMeans
-from sklearn.decomposition import PCA
-import plotly.express as px
-import numpy as np
-import os
-import pprint
-import codecs
-import chardet
-import gradio as gr
-from langchain.llms import HuggingFacePipeline
-from langchain.text_splitter import RecursiveCharacterTextSplitter
-from langchain.embeddings import HuggingFaceEmbeddings
-from langchain.vectorstores import FAISS
-from langchain import OpenAI, ConversationChain, LLMChain, PromptTemplate
-from langchain.memory import ConversationBufferWindowMemory
-
-
-def get_content(input_file):
- # Read the input file in binary mode
- with open(input_file, 'rb') as f:
- raw_data = f.read()
-
- # Detect the encoding of the file
- result = chardet.detect(raw_data)
- encoding = result['encoding']
-
- # Decode the contents using the detected encoding
- with codecs.open(input_file, 'r', encoding=encoding) as f:
- raw_text = f.read()
-
- # Return the content of the input file
- return raw_text
-
-
-def split_text(input_file, chunk_size=1000, chunk_overlap=0):
- text_splitter = RecursiveCharacterTextSplitter(
- chunk_size=chunk_size,
- chunk_overlap=chunk_overlap,
- length_function=len,
- )
-
- basename = os.path.basename(input_file)
- basename = os.path.splitext(basename)[0]
- raw_text = get_content(input_file=input_file)
-
- texts = text_splitter.split_text(text=raw_text)
- metadatas = [{"source": f"{basename}[{i}]"} for i in range(len(texts))]
- docs = text_splitter.create_documents(texts=texts, metadatas=metadatas)
-
- return texts, metadatas, docs
-
-
-def create_docs(input_file):
- # Create a text splitter object with a separator character
- text_splitter = RecursiveCharacterTextSplitter(
- chunk_size=1000,
- chunk_overlap=0,
- length_function=len,
- )
-
- basename = os.path.basename(input_file)
- basename = os.path.splitext(basename)[0]
- texts = get_content(input_file=input_file)
- metadatas = {'source': basename}
- docs = text_splitter.create_documents(texts=[texts], metadatas=[metadatas])
- return docs
-
-
-def get_similar_docs(query, index, k=5):
- similar_docs = index.similarity_search(query=query, k=k)
- result = [(d.summary, d.metadata) for d in similar_docs]
- return result
-
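The helpers above are intended to be combined with a FAISS index; a minimal sketch of the intended flow, assuming a local text file (the file name and query are placeholders):

```python
# Build an index from a document and query it
texts, metadatas, docs = split_text("manual.txt", chunk_size=1000, chunk_overlap=0)
embeddings = HuggingFaceEmbeddings()
index = FAISS.from_documents(docs, embeddings)

hits = get_similar_docs("How do I adjust the seats?", index, k=3)
print(convert_to_html(hits))
```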
-
-def convert_to_html(similar_docs):
- result = []
- for summary, metadata in similar_docs:
- record = '<tr><td>' + summary + '</td><td>' + \
- metadata['source'] + '</td></tr>'
- result.append(record)
- html = '<table><tr><th>Page Content</th><th>Source</th></tr>' + \
- '\n'.join(result) + '</table>'
- return html
-
-
-def create_similarity_plot(embeddings, labels, query, n_clusters=3):
- # Only include embeddings that have corresponding labels
- embeddings_with_labels = [
- embedding for i, embedding in enumerate(embeddings) if i < len(labels)]
-
- # Reduce the dimensionality of the embeddings using PCA
- pca = PCA(n_components=3)
- pca_embeddings = pca.fit_transform(embeddings_with_labels)
-
- # Cluster the embeddings using k-means
- kmeans = KMeans(n_clusters=n_clusters)
- kmeans.fit(embeddings_with_labels)
-
- # Create a trace for the query point
- query_trace = go.Scatter3d(
- x=[pca_embeddings[-1, 0]],
- y=[pca_embeddings[-1, 1]],
- z=[pca_embeddings[-1, 2]],
- mode='markers',
- marker=dict(
- color='black',
- symbol='diamond',
- size=10
- ),
- name=f"Query: '{query}'"
- )
-
- # Create a trace for the other points
- points_trace = go.Scatter3d(
- x=pca_embeddings[:, 0],
- y=pca_embeddings[:, 1],
- z=pca_embeddings[:, 2],
- mode='markers',
- marker=dict(
- color=kmeans.labels_,
- colorscale=px.colors.qualitative.Alphabet,
- size=5
- ),
- text=labels,
- name='Points'
- )
-
- # Create the figure
- fig = go.Figure(data=[query_trace, points_trace])
-
- # Add a title and legend
- fig.update_layout(
- title="3D Similarity Plot",
- legend_title_text="Cluster"
- )
-
- # Show the plot
- fig.show()
-
-
-def plot_similarities(query, index, embeddings=HuggingFaceEmbeddings(), k=5):
- query_embeddings = embeddings.embed_query(text=query)
-
- similar_docs = get_similar_docs(query=query, index=index, k=k)
- texts = []
- for d in similar_docs:
- texts.append(d[0])
-
- embeddings_array = embeddings.embed_documents(texts=texts)
-
- # Append the query embedding (and a label for it) so the query is plotted as the last point
- embeddings_array.append(query_embeddings)
- labels = texts + [f"Query: {query}"]
-
- create_similarity_plot(
- embeddings=embeddings_array,
- labels=labels,
- query=query,
- n_clusters=3
- )
-
-
-def start_ui(index):
- def query_index(query):
- similar_docs = get_similar_docs(query=query, index=index)
- formatted_output = convert_to_html(similar_docs=similar_docs)
- return formatted_output
-
- # Define input and output types
- input = gr.inputs.Textbox(lines=2)
- output = gr.outputs.HTML()
-
- # Create interface object
- iface = gr.Interface(fn=query_index,
- inputs=input,
- outputs=output)
-
- # Launch interface
- iface.launch()
\ No newline at end of file
diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/bert-loses-patience/pabee/__init__.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/bert-loses-patience/pabee/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/rag/callbacks_rag.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/rag/callbacks_rag.py
deleted file mode 100644
index d75f97995bd16f75396f1c32392d6b65137b8169..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/research_projects/rag/callbacks_rag.py
+++ /dev/null
@@ -1,116 +0,0 @@
-import logging
-from pathlib import Path
-
-import numpy as np
-import pytorch_lightning as pl
-import torch
-from pytorch_lightning.callbacks import EarlyStopping, ModelCheckpoint
-from pytorch_lightning.utilities import rank_zero_only
-from utils_rag import save_json
-
-
-def count_trainable_parameters(model):
- model_parameters = filter(lambda p: p.requires_grad, model.parameters())
- params = sum([np.prod(p.size()) for p in model_parameters])
- return params
-
-
-logger = logging.getLogger(__name__)
-
-
-def get_checkpoint_callback(output_dir, metric):
- """Saves the best model by validation EM score."""
- if metric == "rouge2":
- exp = "{val_avg_rouge2:.4f}-{step_count}"
- elif metric == "bleu":
- exp = "{val_avg_bleu:.4f}-{step_count}"
- elif metric == "em":
- exp = "{val_avg_em:.4f}-{step_count}"
- else:
- raise NotImplementedError(
- f"seq2seq callbacks only support rouge2 and bleu, got {metric}, You can make your own by adding to this"
- " function."
- )
-
- checkpoint_callback = ModelCheckpoint(
- dirpath=output_dir,
- filename=exp,
- monitor=f"val_{metric}",
- mode="max",
- save_top_k=3,
- every_n_epochs=1, # maybe save a checkpoint every time val is run, not just end of epoch.
- )
- return checkpoint_callback
-
-
-def get_early_stopping_callback(metric, patience):
- return EarlyStopping(
- monitor=f"val_{metric}", # does this need avg?
- mode="min" if "loss" in metric else "max",
- patience=patience,
- verbose=True,
- )
-
-
-class Seq2SeqLoggingCallback(pl.Callback):
- def on_batch_end(self, trainer, pl_module):
- lrs = {f"lr_group_{i}": param["lr"] for i, param in enumerate(pl_module.trainer.optimizers[0].param_groups)}
- pl_module.logger.log_metrics(lrs)
-
- @rank_zero_only
- def _write_logs(
- self, trainer: pl.Trainer, pl_module: pl.LightningModule, type_path: str, save_generations=True
- ) -> None:
- logger.info(f"***** {type_path} results at step {trainer.global_step:05d} *****")
- metrics = trainer.callback_metrics
- trainer.logger.log_metrics({k: v for k, v in metrics.items() if k not in ["log", "progress_bar", "preds"]})
- # Log results
- od = Path(pl_module.hparams.output_dir)
- if type_path == "test":
- results_file = od / "test_results.txt"
- generations_file = od / "test_generations.txt"
- else:
- # this never gets hit. I prefer not to save intermediate generations, and results are in metrics.json
- # If people want this it will be easy enough to add back.
- results_file = od / f"{type_path}_results/{trainer.global_step:05d}.txt"
- generations_file = od / f"{type_path}_generations/{trainer.global_step:05d}.txt"
- results_file.parent.mkdir(exist_ok=True)
- generations_file.parent.mkdir(exist_ok=True)
- with open(results_file, "a+") as writer:
- for key in sorted(metrics):
- if key in ["log", "progress_bar", "preds"]:
- continue
- val = metrics[key]
- if isinstance(val, torch.Tensor):
- val = val.item()
- msg = f"{key}: {val:.6f}\n"
- writer.write(msg)
-
- if not save_generations:
- return
-
- if "preds" in metrics:
- content = "\n".join(metrics["preds"])
- generations_file.open("w+").write(content)
-
- @rank_zero_only
- def on_train_start(self, trainer, pl_module):
- try:
- npars = pl_module.model.model.num_parameters()
- except AttributeError:
- npars = pl_module.model.num_parameters()
-
- n_trainable_pars = count_trainable_parameters(pl_module)
- # mp stands for million parameters
- trainer.logger.log_metrics({"n_params": npars, "mp": npars / 1e6, "grad_mp": n_trainable_pars / 1e6})
-
- @rank_zero_only
- def on_test_end(self, trainer: pl.Trainer, pl_module: pl.LightningModule):
- save_json(pl_module.metrics, pl_module.metrics_save_path)
- return self._write_logs(trainer, pl_module, "test")
-
- @rank_zero_only
- def on_validation_end(self, trainer: pl.Trainer, pl_module):
- save_json(pl_module.metrics, pl_module.metrics_save_path)
- # Uncommenting this will save val generations
- # return self._write_logs(trainer, pl_module, "valid")
diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/rag/utils_rag.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/rag/utils_rag.py
deleted file mode 100644
index ec98c1d782e0ea2a00d80420c88702acdd8da98d..0000000000000000000000000000000000000000
--- a/spaces/chendl/compositional_test/transformers/examples/research_projects/rag/utils_rag.py
+++ /dev/null
@@ -1,244 +0,0 @@
-import itertools
-import json
-import linecache
-import os
-import pickle
-import re
-import socket
-import string
-from collections import Counter
-from logging import getLogger
-from pathlib import Path
-from typing import Callable, Dict, Iterable, List
-
-import git
-import torch
-from torch.utils.data import Dataset
-
-from transformers import BartTokenizer, RagTokenizer, T5Tokenizer
-
-
-def encode_line(tokenizer, line, max_length, padding_side, pad_to_max_length=True, return_tensors="pt"):
- extra_kw = {"add_prefix_space": True} if isinstance(tokenizer, BartTokenizer) and not line.startswith(" ") else {}
- tokenizer.padding_side = padding_side
- return tokenizer(
- [line],
- max_length=max_length,
- padding="max_length" if pad_to_max_length else None,
- truncation=True,
- return_tensors=return_tensors,
- add_special_tokens=True,
- **extra_kw,
- )
-
-
-def trim_batch(
- input_ids,
- pad_token_id,
- attention_mask=None,
-):
- """Remove columns that are populated exclusively by pad_token_id"""
- keep_column_mask = input_ids.ne(pad_token_id).any(dim=0)
- if attention_mask is None:
- return input_ids[:, keep_column_mask]
- else:
- return (input_ids[:, keep_column_mask], attention_mask[:, keep_column_mask])
-
-
-class Seq2SeqDataset(Dataset):
- def __init__(
- self,
- tokenizer,
- data_dir,
- max_source_length,
- max_target_length,
- type_path="train",
- n_obs=None,
- src_lang=None,
- tgt_lang=None,
- prefix="",
- ):
- super().__init__()
- self.src_file = Path(data_dir).joinpath(type_path + ".source")
- self.tgt_file = Path(data_dir).joinpath(type_path + ".target")
- self.src_lens = self.get_char_lens(self.src_file)
- self.max_source_length = max_source_length
- self.max_target_length = max_target_length
- assert min(self.src_lens) > 0, f"found empty line in {self.src_file}"
- self.tokenizer = tokenizer
- self.prefix = prefix
- if n_obs is not None:
- self.src_lens = self.src_lens[:n_obs]
- self.src_lang = src_lang
- self.tgt_lang = tgt_lang
-
- def __len__(self):
- return len(self.src_lens)
-
- def __getitem__(self, index) -> Dict[str, torch.Tensor]:
- index = index + 1 # linecache starts at 1
- source_line = self.prefix + linecache.getline(str(self.src_file), index).rstrip("\n")
- tgt_line = linecache.getline(str(self.tgt_file), index).rstrip("\n")
- assert source_line, f"empty source line for index {index}"
- assert tgt_line, f"empty tgt line for index {index}"
-
- # Need to add eos token manually for T5
- if isinstance(self.tokenizer, T5Tokenizer):
- source_line += self.tokenizer.eos_token
- tgt_line += self.tokenizer.eos_token
-
- # Pad source and target to the right
- source_tokenizer = (
- self.tokenizer.question_encoder if isinstance(self.tokenizer, RagTokenizer) else self.tokenizer
- )
- target_tokenizer = self.tokenizer.generator if isinstance(self.tokenizer, RagTokenizer) else self.tokenizer
-
- source_inputs = encode_line(source_tokenizer, source_line, self.max_source_length, "right")
- target_inputs = encode_line(target_tokenizer, tgt_line, self.max_target_length, "right")
-
- source_ids = source_inputs["input_ids"].squeeze()
- target_ids = target_inputs["input_ids"].squeeze()
- src_mask = source_inputs["attention_mask"].squeeze()
- return {
- "input_ids": source_ids,
- "attention_mask": src_mask,
- "decoder_input_ids": target_ids,
- }
-
- @staticmethod
- def get_char_lens(data_file):
- return [len(x) for x in Path(data_file).open().readlines()]
-
- def collate_fn(self, batch) -> Dict[str, torch.Tensor]:
- input_ids = torch.stack([x["input_ids"] for x in batch])
- masks = torch.stack([x["attention_mask"] for x in batch])
- target_ids = torch.stack([x["decoder_input_ids"] for x in batch])
- tgt_pad_token_id = (
- self.tokenizer.generator.pad_token_id
- if isinstance(self.tokenizer, RagTokenizer)
- else self.tokenizer.pad_token_id
- )
- src_pad_token_id = (
- self.tokenizer.question_encoder.pad_token_id
- if isinstance(self.tokenizer, RagTokenizer)
- else self.tokenizer.pad_token_id
- )
- y = trim_batch(target_ids, tgt_pad_token_id)
- source_ids, source_mask = trim_batch(input_ids, src_pad_token_id, attention_mask=masks)
- batch = {
- "input_ids": source_ids,
- "attention_mask": source_mask,
- "decoder_input_ids": y,
- }
- return batch
-
-
-logger = getLogger(__name__)
-
-
-def flatten_list(summary_ids: List[List]):
- return list(itertools.chain.from_iterable(summary_ids))
-
-
-def save_git_info(folder_path: str) -> None:
- """Save git information to output_dir/git_log.json"""
- repo_infos = get_git_info()
- save_json(repo_infos, os.path.join(folder_path, "git_log.json"))
-
-
-def save_json(content, path, indent=4, **json_dump_kwargs):
- with open(path, "w") as f:
- json.dump(content, f, indent=indent, **json_dump_kwargs)
-
-
-def load_json(path):
- with open(path) as f:
- return json.load(f)
-
-
-def get_git_info():
- repo = git.Repo(search_parent_directories=True)
- repo_infos = {
- "repo_id": str(repo),
- "repo_sha": str(repo.head.object.hexsha),
- "repo_branch": str(repo.active_branch),
- "hostname": str(socket.gethostname()),
- }
- return repo_infos
-
-
-def lmap(f: Callable, x: Iterable) -> List:
- """list(map(f, x))"""
- return list(map(f, x))
-
-
-def pickle_save(obj, path):
- """pickle.dump(obj, path)"""
- with open(path, "wb") as f:
- return pickle.dump(obj, f)
-
-
-def normalize_answer(s):
- """Lower text and remove punctuation, articles and extra whitespace."""
-
- def remove_articles(text):
- return re.sub(r"\b(a|an|the)\b", " ", text)
-
- def white_space_fix(text):
- return " ".join(text.split())
-
- def remove_punc(text):
- exclude = set(string.punctuation)
- return "".join(ch for ch in text if ch not in exclude)
-
- def lower(text):
- return text.lower()
-
- return white_space_fix(remove_articles(remove_punc(lower(s))))
-
-
-def f1_score(prediction, ground_truth):
- prediction_tokens = normalize_answer(prediction).split()
- ground_truth_tokens = normalize_answer(ground_truth).split()
- common = Counter(prediction_tokens) & Counter(ground_truth_tokens)
- num_same = sum(common.values())
- if num_same == 0:
- return 0
- precision = 1.0 * num_same / len(prediction_tokens)
- recall = 1.0 * num_same / len(ground_truth_tokens)
- f1 = (2 * precision * recall) / (precision + recall)
- return f1
-
-
-def exact_match_score(prediction, ground_truth):
- return normalize_answer(prediction) == normalize_answer(ground_truth)
-
-
-def calculate_exact_match(output_lns: List[str], reference_lns: List[str]) -> Dict:
- assert len(output_lns) == len(reference_lns)
- em = 0
- for hypo, pred in zip(output_lns, reference_lns):
- em += exact_match_score(hypo, pred)
- if len(output_lns) > 0:
- em /= len(output_lns)
- return {"em": em}
-
-
-def is_rag_model(model_prefix):
- return model_prefix.startswith("rag")
-
-
-def set_extra_model_params(extra_params, hparams, config):
- equivalent_param = {p: p for p in extra_params}
- # T5 models don't have `dropout` param, they have `dropout_rate` instead
- equivalent_param["dropout"] = "dropout_rate"
- for p in extra_params:
- if getattr(hparams, p, None):
- if not hasattr(config, p) and not hasattr(config, equivalent_param[p]):
- logger.info("config doesn't have a `{}` attribute".format(p))
- delattr(hparams, p)
- continue
- set_p = p if hasattr(config, p) else equivalent_param[p]
- setattr(config, set_p, getattr(hparams, p))
- delattr(hparams, p)
- return hparams, config
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/XpmImagePlugin.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/XpmImagePlugin.py
deleted file mode 100644
index 5d5bdc3edfa7be8d235fd6ef4176cc6cebee541c..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/XpmImagePlugin.py
+++ /dev/null
@@ -1,128 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# XPM File handling
-#
-# History:
-# 1996-12-29 fl Created
-# 2001-02-17 fl Use 're' instead of 'regex' (Python 2.1) (0.7)
-#
-# Copyright (c) Secret Labs AB 1997-2001.
-# Copyright (c) Fredrik Lundh 1996-2001.
-#
-# See the README file for information on usage and redistribution.
-#
-
-
-import re
-
-from . import Image, ImageFile, ImagePalette
-from ._binary import o8
-
-# XPM header
-xpm_head = re.compile(b'"([0-9]*) ([0-9]*) ([0-9]*) ([0-9]*)')
-
-
-def _accept(prefix):
- return prefix[:9] == b"/* XPM */"
-
-
-##
-# Image plugin for X11 pixel maps.
-
-
-class XpmImageFile(ImageFile.ImageFile):
- format = "XPM"
- format_description = "X11 Pixel Map"
-
- def _open(self):
- if not _accept(self.fp.read(9)):
- msg = "not an XPM file"
- raise SyntaxError(msg)
-
- # skip forward to next string
- while True:
- s = self.fp.readline()
- if not s:
- msg = "broken XPM file"
- raise SyntaxError(msg)
- m = xpm_head.match(s)
- if m:
- break
-
- self._size = int(m.group(1)), int(m.group(2))
-
- pal = int(m.group(3))
- bpp = int(m.group(4))
-
- if pal > 256 or bpp != 1:
- msg = "cannot read this XPM file"
- raise ValueError(msg)
-
- #
- # load palette description
-
- palette = [b"\0\0\0"] * 256
-
- for _ in range(pal):
- s = self.fp.readline()
- if s[-2:] == b"\r\n":
- s = s[:-2]
- elif s[-1:] in b"\r\n":
- s = s[:-1]
-
- c = s[1]
- s = s[2:-2].split()
-
- for i in range(0, len(s), 2):
- if s[i] == b"c":
- # process colour key
- rgb = s[i + 1]
- if rgb == b"None":
- self.info["transparency"] = c
- elif rgb[:1] == b"#":
- # FIXME: handle colour names (see ImagePalette.py)
- rgb = int(rgb[1:], 16)
- palette[c] = (
- o8((rgb >> 16) & 255) + o8((rgb >> 8) & 255) + o8(rgb & 255)
- )
- else:
- # unknown colour
- msg = "cannot read this XPM file"
- raise ValueError(msg)
- break
-
- else:
- # missing colour key
- msg = "cannot read this XPM file"
- raise ValueError(msg)
-
- self.mode = "P"
- self.palette = ImagePalette.raw("RGB", b"".join(palette))
-
- self.tile = [("raw", (0, 0) + self.size, self.fp.tell(), ("P", 0, 1))]
-
- def load_read(self, bytes):
- #
- # load all image data in one chunk
-
- xsize, ysize = self.size
-
- s = [None] * ysize
-
- for i in range(ysize):
- s[i] = self.fp.readline()[1 : xsize + 1].ljust(xsize)
-
- return b"".join(s)
-
-
-#
-# Registry
-
-
-Image.register_open(XpmImageFile.format, XpmImageFile, _accept)
-
-Image.register_extension(XpmImageFile.format, ".xpm")
-
-Image.register_mime(XpmImageFile.format, "image/xpm")
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/insert.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/insert.py
deleted file mode 100644
index da606402f49748735fb1b8c7814e160b91b15ba0..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/clickhouse_connect/driver/insert.py
+++ /dev/null
@@ -1,200 +0,0 @@
-import logging
-from math import log
-from typing import Iterable, Sequence, Optional, Any, Dict, NamedTuple, Generator, Union, TYPE_CHECKING
-
-from clickhouse_connect.driver.query import quote_identifier
-
-from clickhouse_connect.driver.ctypes import data_conv
-from clickhouse_connect.driver.context import BaseQueryContext
-from clickhouse_connect.driver.options import np, pd
-from clickhouse_connect.driver.exceptions import ProgrammingError
-
-if TYPE_CHECKING:
- from clickhouse_connect.datatypes.base import ClickHouseType
-
-logger = logging.getLogger(__name__)
-DEFAULT_BLOCK_BYTES = 1 << 24 # Try to generate blocks between 16 and 32MB in raw size
-
-
-class InsertBlock(NamedTuple):
- prefix: bytes
- column_count: int
- row_count: int
- column_names: Iterable[str]
- column_types: Iterable['ClickHouseType']
- column_data: Iterable[Sequence[Any]]
-
-
-# pylint: disable=too-many-instance-attributes
-class InsertContext(BaseQueryContext):
- """
- Reusable Argument/parameter object for inserts.
- """
-
- # pylint: disable=too-many-arguments
- def __init__(self,
- table: str,
- column_names: Sequence[str],
- column_types: Sequence['ClickHouseType'],
- data: Any = None,
- column_oriented: Optional[bool] = None,
- settings: Optional[Dict[str, Any]] = None,
- compression: Optional[Union[str, bool]] = None,
- query_formats: Optional[Dict[str, str]] = None,
- column_formats: Optional[Dict[str, Union[str, Dict[str, str]]]] = None,
- block_size: Optional[int] = None):
- super().__init__(settings, query_formats, column_formats)
- self.table = table
- self.column_names = column_names
- self.column_types = column_types
- self.column_oriented = False if column_oriented is None else column_oriented
- self.compression = compression
- self.req_block_size = block_size
- self.block_size = DEFAULT_BLOCK_BYTES
- self.data = data
- self.insert_exception = None
-
- @property
- def empty(self) -> bool:
- return self._data is None
-
- @property
- def data(self):
- return self._raw_data
-
- @data.setter
- def data(self, data: Any):
- self._raw_data = data
- self.current_block = 0
- self.current_row = 0
- self.row_count = 0
- self.column_count = 0
- self._data = None
- if data is None or len(data) == 0:
- return
- if pd and isinstance(data, pd.DataFrame):
- data = self._convert_pandas(data)
- self.column_oriented = True
- if np and isinstance(data, np.ndarray):
- data = self._convert_numpy(data)
- if self.column_oriented:
- self._next_block_data = self._column_block_data
- self._block_columns = data # [SliceView(column) for column in data]
- self._block_rows = None
- self.column_count = len(data)
- self.row_count = len(data[0])
- else:
- self._next_block_data = self._row_block_data
- self._block_rows = data
- self._block_columns = None
- self.row_count = len(data)
- self.column_count = len(data[0])
- if self.row_count and self.column_count:
- if self.column_count != len(self.column_names):
- raise ProgrammingError('Insert data column count does not match column names')
- self._data = data
- self.block_size = self._calc_block_size()
-
- def _calc_block_size(self) -> int:
- if self.req_block_size:
- return self.req_block_size
- row_size = 0
- sample_size = min((log(self.row_count) + 1) * 2, 64)
- sample_freq = max(1, int(self.row_count / sample_size))
- for i, d_type in enumerate(self.column_types):
- if d_type.byte_size:
- row_size += d_type.byte_size
- continue
- if self.column_oriented:
- col_data = self._data[i]
- if sample_freq == 1:
- d_size = d_type.data_size(col_data)
- else:
- sample = [col_data[j] for j in range(0, self.row_count, sample_freq)]
- d_size = d_type.data_size(sample)
- else:
- data = self._data
- sample = [data[j][i] for j in range(0, self.row_count, sample_freq)]
- d_size = d_type.data_size(sample)
- row_size += d_size
- return 1 << (24 - int(log(row_size, 2)))
-
- def next_block(self) -> Generator[InsertBlock, None, None]:
- while True:
- block_end = min(self.current_row + self.block_size, self.row_count)
- row_count = block_end - self.current_row
- if row_count <= 0:
- return
- if self.current_block == 0:
- cols = f" ({', '.join([quote_identifier(x) for x in self.column_names])})"
- prefix = f'INSERT INTO {self.table}{cols} FORMAT Native\n'.encode()
- else:
- prefix = bytes()
- self.current_block += 1
- data = self._next_block_data(self.current_row, block_end)
- yield InsertBlock(prefix, self.column_count, row_count, self.column_names, self.column_types, data)
- self.current_row = block_end
-
- def _column_block_data(self, block_start, block_end):
- if block_start == 0 and self.row_count <= block_end:
- return self._block_columns # Optimization if we don't need to break up the block
- return [col[block_start: block_end] for col in self._block_columns]
-
- def _row_block_data(self, block_start, block_end):
- return data_conv.pivot(self._block_rows, block_start, block_end)
-
- def _convert_pandas(self, df):
- data = []
- for df_col_name, col_name, ch_type in zip(df.columns, self.column_names, self.column_types):
- df_col = df[df_col_name]
- d_type = str(df_col.dtype)
- if ch_type.python_type == int:
- if 'float' in d_type:
- df_col = df_col.round().astype(ch_type.base_type, copy=False)
- else:
- df_col = df_col.astype(ch_type.base_type, copy=False)
- elif 'datetime' in ch_type.np_type and (pd.core.dtypes.common.is_datetime_or_timedelta_dtype(df_col)
- or 'datetime64[ns' in d_type):
- div = ch_type.nano_divisor
- data.append([None if pd.isnull(x) else x.value // div for x in df_col])
- self.column_formats[col_name] = 'int'
- continue
- if ch_type.nullable:
- if d_type == 'object':
- # This is ugly, but the multiple replaces seem required as a result of this bug:
- # https://github.com/pandas-dev/pandas/issues/29024
- df_col = df_col.replace({pd.NaT: None}).replace({np.nan: None})
- elif 'Float' in ch_type.base_type:
- # This seems to be the only way to convert any null looking things to nan
- df_col = df_col.astype(ch_type.np_type)
- else:
- df_col = df_col.replace({np.nan: None})
- data.append(df_col.to_numpy(copy=False))
- return data
-
- def _convert_numpy(self, np_array):
- if np_array.dtype.names is None:
- if 'date' in str(np_array.dtype):
- for col_name, col_type in zip(self.column_names, self.column_types):
- if 'date' in col_type.np_type:
- self.column_formats[col_name] = 'int'
- return np_array.astype('int').tolist()
- for col_type in self.column_types:
- if col_type.byte_size == 0 or col_type.byte_size > np_array.dtype.itemsize:
- return np_array.tolist()
- return np_array
-
- if set(self.column_names).issubset(set(np_array.dtype.names)):
- data = [np_array[col_name] for col_name in self.column_names]
- else:
- # Column names don't match, so we have to assume they are in order
- data = [np_array[col_name] for col_name in np_array.dtype.names]
- for ix, (col_name, col_type) in enumerate(zip(self.column_names, self.column_types)):
- d_type = data[ix].dtype
- if 'date' in str(d_type) and 'date' in col_type.np_type:
- self.column_formats[col_name] = 'int'
- data[ix] = data[ix].astype(int).tolist()
- elif col_type.byte_size == 0 or col_type.byte_size > d_type.itemsize:
- data[ix] = data[ix].tolist()
- self.column_oriented = True
- return data
diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dotenv/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dotenv/__init__.py
deleted file mode 100644
index 7f4c631ba11786bceebd22591f91bd378d8b232c..0000000000000000000000000000000000000000
--- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dotenv/__init__.py
+++ /dev/null
@@ -1,49 +0,0 @@
-from typing import Any, Optional
-
-from .main import (dotenv_values, find_dotenv, get_key, load_dotenv, set_key,
- unset_key)
-
-
-def load_ipython_extension(ipython: Any) -> None:
- from .ipython import load_ipython_extension
- load_ipython_extension(ipython)
-
-
-def get_cli_string(
- path: Optional[str] = None,
- action: Optional[str] = None,
- key: Optional[str] = None,
- value: Optional[str] = None,
- quote: Optional[str] = None,
-):
- """Returns a string suitable for running as a shell script.
-
- Useful for converting arguments passed to a fabric task
- to be passed to a `local` or `run` command.
- """
- command = ['dotenv']
- if quote:
- command.append(f'-q {quote}')
- if path:
- command.append(f'-f {path}')
- if action:
- command.append(action)
- if key:
- command.append(key)
- if value:
- if ' ' in value:
- command.append(f'"{value}"')
- else:
- command.append(value)
-
- return ' '.join(command).strip()
-
-
-__all__ = ['get_cli_string',
- 'load_dotenv',
- 'dotenv_values',
- 'get_key',
- 'set_key',
- 'unset_key',
- 'find_dotenv',
- 'load_ipython_extension']
diff --git a/spaces/cihyFjudo/fairness-paper-search/6 Best Malayalam gangster movies of all times - OTTplay[2].md b/spaces/cihyFjudo/fairness-paper-search/6 Best Malayalam gangster movies of all times - OTTplay[2].md
deleted file mode 100644
index 93469cc3d2fa51e1f759789bf105636947feb778..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/6 Best Malayalam gangster movies of all times - OTTplay[2].md
+++ /dev/null
@@ -1,6 +0,0 @@
-Malayalam Ek Tha Gangster 720p premiere lista animo Download ✦✦✦ https://tinurli.com/2uwi8l
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/A Pdf Rename 4.0.0 Crack Why You Need This Software to Manage Your PDF Files.md b/spaces/cihyFjudo/fairness-paper-search/A Pdf Rename 4.0.0 Crack Why You Need This Software to Manage Your PDF Files.md
deleted file mode 100644
index c7d1ed384c3f5ea9339eb1a7b8254ca5e4ed87ab..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/A Pdf Rename 4.0.0 Crack Why You Need This Software to Manage Your PDF Files.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-Nested Symbols is something we tried, a long time ago, with Mockups 3. It turned out that the feature was just too much for our little Flex app, so we shelved it, knowing we would have to return when our rebuilt apps were ready. When we were planning features for 2021, Nested Symbols was the very obvious elephant in the room. When Peldi suggested we take another crack at it, it was kind of a shock for us. It had been the feature for so long, it became something of a white whale for us.
-A Pdf Rename 4.0.0 Crack Download Zip ››› https://tinurli.com/2uwhZO
-Do not transmit plain (unencrypted) data over the Internet. Such data isaccessible to everyone who has the time and ability to intercept it and useit for their own purposes. Instead, use an encrypted protocol such as SSL orSSH. MySQL supports internal SSL connections as of Version 4.0.0.SSH port-forwarding can be used to create an encrypted (and compressed)tunnel for the communication.
-When you connect to a MySQL server, you normally should use apassword. The password is not transmitted in clear text over theconnection, however the encryption algorithm is not very strong, andwith some effort a clever attacker can crack the password if he is ableto sniff the traffic between the client and the server. If theconnection between the client and the server goes through an untrustednetwork, you should use an SSH tunnel to encrypt thecommunication.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cihyFjudo/fairness-paper-search/Acoustica Mixcraft Pro Studio 8.0 Build 375 Incl Keygen-SadeemPC How to Create Professional Music Tracks on Your PC.md b/spaces/cihyFjudo/fairness-paper-search/Acoustica Mixcraft Pro Studio 8.0 Build 375 Incl Keygen-SadeemPC How to Create Professional Music Tracks on Your PC.md
deleted file mode 100644
index ee6055549f1260d43b01932045cde9b573040e70..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Acoustica Mixcraft Pro Studio 8.0 Build 375 Incl Keygen-SadeemPC How to Create Professional Music Tracks on Your PC.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Acoustica Mixcraft Pro Studio 8.0 Build 375 Incl Keygen-SadeemPC Download Pc Download » https://tinurli.com/2uwjM5
-
- aaccfb2cb3
-
-
-
diff --git a/spaces/cihyFjudo/fairness-paper-search/Install Oracle 10g Release 2 Odac 64 Bit.md b/spaces/cihyFjudo/fairness-paper-search/Install Oracle 10g Release 2 Odac 64 Bit.md
deleted file mode 100644
index ce653199a4d5c436f3e47d604159afb61757647b..0000000000000000000000000000000000000000
--- a/spaces/cihyFjudo/fairness-paper-search/Install Oracle 10g Release 2 Odac 64 Bit.md
+++ /dev/null
@@ -1,16 +0,0 @@
-
-Step 1:Oracle install client (never succeeded with that, not recommended) or Oracle client (succeeded on Win7 ultimate 64bit, file win64_11gR1_client.zip, installed with "Runtime" option selected). After client install make sure you can connect. From command line try "tnsping yourtnanamesentry" to check if tnsnames is ok, and after that "sqlplus username/pwd@yourtnsnamesentry" to check if you know valid user and password and really can connect. Memorize or write down oracle home name and path you choosed during install.
-Install Oracle 10g Release 2 Odac 64 Bit Download ★★★★★ https://tinurli.com/2uwiuY
-I'm not sure if Step 1 is really needed or not, because maybe step3 really uses just oracle instant client.I know, it is real pain, but this works. It took me 2 days to connect to oracle, and I had to install almost 1GB of downloaded oracle software. They could and should make that much, much, much, much easier. Like one-click install that just works. This is shame how complicated client install is.
-Wonderful article. This has fixed my issue after i install and uninstall oracle 32bit and oracle 64bit clients. I am glad that we don't need the 32bit Oracle client after all (as in other blog indicated). I have apply this to SQL2012 x64 with Oracle 11gr2 x64.
-We are currently facing the problem with building 64 Bit SSIS pacakages(located on DB Server..which is 2003 Server Edition, 64 Bit) with connecting 32 bit oracle(loaded on the App Server..Windows 2003, 32 bit) in Business Intelligent Development Studio(Integration Services) with Sql Server 2005 64 Bit. We have installed 64 bit Oracle client and network tools and drivers on Database Server(DB Server), but we are still unable to connect from to Oracle from the BIDS for making the 64 bit SSIS packages.
-
-You can execute Oracle Migration Workbench from a 32-bit Windows environment to migrate third-party databases, as supported by release 9.2.0.2.1 or later, to an Oracle Database 10g Release 2 (10.2) database installed on a 64-bit Windows computer.
-They provide support for .NET Framework 2.0 and higher, and Microsoft Visual Studio 2005. These components were released after the original Oracle Database 10g Release 2 (10.2.0.1), that is the reason they require ODAC installation.
-The steps in the following two sections assume you've installed the ODAC 18.x files to the c:\oracle64 folder for 64-bit versions of Power BI Desktop, or to the c:\oracle32 folder for the 32-bit versions of Power BI Desktop. Follow these steps to register Unmanaged ODP.NET :
-Working on getting my oracle connections to work again. We were running SQL Server 2012 and using VS2010 and I had all my SSIS projects working just fine. Upgrading the SQL server to 2014 is require us to update VS to 2013 so we can keep working with SSIS. The problem is now SSIS no longer connects to Oracle. I know for the SQL 2012/VS2010 combination all i had to install with the ODTwithODBC for Oracle and optionally the AttunitySSISAdaptor and it was working wonderfully. Any help will be appreciated.
-I have installed ODAC 11.2.0.3 64 bit on local machine and i have oracle client 11.2.0.3 installed on my local machine moreover OraOLEDB.oracle installed on SQL server.After the installation SQL server also rebooted but i am not able to see oracle OLEDB provider under the SSDT_BI tools (SSIS) for OLEDB source. Thanks in advance
-I am facing same issue in my laptop 64 bit, I installed Visual studio 2015 and relevant SSDT set up but I cannot connect to oracle data base can some help me is resolving this by providing step by step by procedure
-Hi, I am trying to load data from oracle 11g to sql database. I am using VS 2010 to develop my SSIS package. I am not able to connect to oracle database. it seems that I am missing OLE DB provider for oracle. Can some one guide me what is the steps to download , install and configure the OLE DB provider for my SSIS package. There are so many sets of instruction you see on internet and difficult to decide which one is most accurate. Thanks in advance.
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/altair/expr/consts.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/altair/expr/consts.py
deleted file mode 100644
index 974fb06a3c756a7e27106f4d1bb9c17b78a094fd..0000000000000000000000000000000000000000
--- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/altair/expr/consts.py
+++ /dev/null
@@ -1,29 +0,0 @@
-from typing import Dict
-
-from .core import ConstExpression
-
-
-CONST_LISTING = {
- "NaN": "not a number (same as JavaScript literal NaN)",
- "LN10": "the natural log of 10 (alias to Math.LN10)",
- "E": "the transcendental number e (alias to Math.E)",
- "LOG10E": "the base 10 logarithm e (alias to Math.LOG10E)",
- "LOG2E": "the base 2 logarithm of e (alias to Math.LOG2E)",
- "SQRT1_2": "the square root of 0.5 (alias to Math.SQRT1_2)",
- "LN2": "the natural log of 2 (alias to Math.LN2)",
- "SQRT2": "the square root of 2 (alias to Math.SQRT1_2)",
- "PI": "the transcendental number pi (alias to Math.PI)",
-}
-
-NAME_MAP: Dict[str, str] = {}
-
-
-def _populate_namespace():
- globals_ = globals()
- for name, doc in CONST_LISTING.items():
- py_name = NAME_MAP.get(name, name)
- globals_[py_name] = ConstExpression(name, doc)
- yield py_name
-
-
-__all__ = list(_populate_namespace())
diff --git a/spaces/colakin/video-generater/public/ffmpeg/compat/msvcrt/snprintf.c b/spaces/colakin/video-generater/public/ffmpeg/compat/msvcrt/snprintf.c
deleted file mode 100644
index 43f5c3bb390c4f322e39fb97a2cad12ed932c1d9..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/compat/msvcrt/snprintf.c
+++ /dev/null
@@ -1,71 +0,0 @@
-/*
- * C99-compatible snprintf() and vsnprintf() implementations
- * Copyright (c) 2012 Ronald S. Bultje
- *
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#include <stdio.h>
-#include <stdarg.h>
-#include <limits.h>
-#include <string.h>
-
-#include "compat/va_copy.h"
-#include "libavutil/error.h"
-
-#if defined(__MINGW32__)
-#define EOVERFLOW EFBIG
-#endif
-
-int avpriv_snprintf(char *s, size_t n, const char *fmt, ...)
-{
- va_list ap;
- int ret;
-
- va_start(ap, fmt);
- ret = avpriv_vsnprintf(s, n, fmt, ap);
- va_end(ap);
-
- return ret;
-}
-
-int avpriv_vsnprintf(char *s, size_t n, const char *fmt,
- va_list ap)
-{
- int ret;
- va_list ap_copy;
-
- if (n == 0)
- return _vscprintf(fmt, ap);
- else if (n > INT_MAX)
- return AVERROR(EOVERFLOW);
-
- /* we use n - 1 here because if the buffer is not big enough, the MS
- * runtime libraries don't add a terminating zero at the end. MSDN
- * recommends to provide _snprintf/_vsnprintf() a buffer size that
- * is one less than the actual buffer, and zero it before calling
- * _snprintf/_vsnprintf() to workaround this problem.
- * See https://web.archive.org/web/20151214111935/http://msdn.microsoft.com/en-us/library/1kt27hek(v=vs.80).aspx */
- memset(s, 0, n);
- va_copy(ap_copy, ap);
- ret = _vsnprintf(s, n - 1, fmt, ap_copy);
- va_end(ap_copy);
- if (ret == -1)
- ret = _vscprintf(fmt, ap);
-
- return ret;
-}
diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/lossless_videoencdsp.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/lossless_videoencdsp.h
deleted file mode 100644
index f2c287848569c60886ddcd980fad1ce8101899ee..0000000000000000000000000000000000000000
--- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/lossless_videoencdsp.h
+++ /dev/null
@@ -1,45 +0,0 @@
-/*
- * This file is part of FFmpeg.
- *
- * FFmpeg is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation; either
- * version 2.1 of the License, or (at your option) any later version.
- *
- * FFmpeg is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with FFmpeg; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
- */
-
-#ifndef AVCODEC_LOSSLESS_VIDEOENCDSP_H
-#define AVCODEC_LOSSLESS_VIDEOENCDSP_H
-
-#include <stddef.h>
-#include <stdint.h>
-
-typedef struct LLVidEncDSPContext {
- void (*diff_bytes)(uint8_t *dst /* align 16 */,
- const uint8_t *src1 /* align 16 */,
- const uint8_t *src2 /* align 1 */,
- intptr_t w);
- /**
- * Subtract HuffYUV's variant of median prediction.
- * Note, this might read from src1[-1], src2[-1].
- */
- void (*sub_median_pred)(uint8_t *dst, const uint8_t *src1,
- const uint8_t *src2, intptr_t w,
- int *left, int *left_top);
-
- void (*sub_left_predict)(uint8_t *dst, const uint8_t *src,
- ptrdiff_t stride, ptrdiff_t width, int height);
-} LLVidEncDSPContext;
-
-void ff_llvidencdsp_init(LLVidEncDSPContext *c);
-void ff_llvidencdsp_init_x86(LLVidEncDSPContext *c);
-
-#endif /* AVCODEC_LOSSLESS_VIDEOENCDSP_H */
diff --git a/spaces/congsaPfin/Manga-OCR/logs/BitLife God Mode APK The Ultimate Guide to Living Any Life You Want.md b/spaces/congsaPfin/Manga-OCR/logs/BitLife God Mode APK The Ultimate Guide to Living Any Life You Want.md
deleted file mode 100644
index 783abb22b57614e132a7920502fdaa6c58a6524d..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/BitLife God Mode APK The Ultimate Guide to Living Any Life You Want.md
+++ /dev/null
@@ -1,91 +0,0 @@
-
-BitLife God Mode APK: How to Download and Play with Unlimited Possibilities
-Have you ever wondered what your life would be like if you could control everything and everyone around you? If you could change your appearance, personality, skills, relationships, and even your fate? Well, now you can with BitLife God Mode APK, a modded version of the popular life simulator game BitLife. In this article, we will tell you what BitLife is, what BitLife God Mode APK is, how to download and install it, and how to play it with unlimited possibilities.
-What is BitLife?
-A realistic life simulator game
-BitLife is a game that lets you live a virtual life from birth to death. You can choose your name, gender, country, and appearance. You can also make decisions that affect your happiness, health, intelligence, looks, and money. You can go to school, get a job, date, get married, have kids, buy a house, commit crimes, and more. You can also interact with other characters in the game, such as your parents, siblings, friends, lovers, enemies, and celebrities. You can see how your choices shape your life and what kind of legacy you leave behind.
-bitlife god mode apk DOWNLOAD ✫ https://urlca.com/2uO5nl
-A game of choices and consequences
-BitLife is not just a game of simulation, but also a game of choices and consequences. Every decision you make has an impact on your life and the lives of others. Some choices are easy, some are hard, some are funny, some are serious, some are moral, some are immoral. Some choices can lead you to success, happiness, fame, or fortune. Some choices can lead you to failure, misery, infamy, or prison. Some choices can even lead you to death. You never know what will happen next in BitLife.
-What is BitLife God Mode APK?
-A modded version of BitLife with extra features
-BitLife God Mode APK is a modded version of BitLife that gives you access to extra features that are not available in the original game. One of these features is God Mode, which allows you to edit your character and other people in the game as you wish. You can change their name, age, gender, appearance, stats, traits, relationships, careers, assets, health conditions, and even their cause of death. You can also give yourself unlimited money and ribbons.
-A way to customize your character and other people in the game
-BitLife God Mode APK is a way to customize your character and other people in the game according to your preferences. You can create your ideal self or someone else you admire or despise. You can make yourself more attractive, smarter, richer, or happier. You can also make other people more loyal, friendly, supportive, or respectful towards you. You can also make them more ugly, stupid, poor, or miserable. You can also make them love you or hate you. You can also kill them or save them from death.
-How to download and install BitLife God Mode APK?
-The steps to download the APK file from a trusted source
-To download BitLife God Mode APK, you need to find a trusted source that provides the latest version of the modded file. One such source is APKdone.com , which is a reliable website that offers free and safe APK downloads for Android devices. You can visit the website and search for BitLife God Mode APK or click on this link: https://apkdone.com/bitlife-mod-apk/ . You will see a download button on the page. Click on it and wait for the download to complete.
-The steps to install the APK file on your device
-To install BitLife God Mode APK, you need to enable the installation of apps from unknown sources on your device. To do this, go to your device settings and look for the security or privacy option. Then, find the option that says "allow installation of apps from unknown sources" or something similar. Turn it on and confirm your choice. Then, go to your file manager and locate the downloaded APK file. Tap on it and follow the instructions to install it. Once the installation is done, you can open the app and enjoy BitLife God Mode APK.
-How to play BitLife God Mode APK?
-The benefits of having God Mode in BitLife
-Playing BitLife God Mode APK gives you many benefits that make the game more fun and exciting. Some of these benefits are:
-
-You can create your own character or edit an existing one with God Mode. You can change their name, age, gender, appearance, stats, traits, relationships, careers, assets, health conditions, and even their cause of death.
-You can also edit other people in the game with God Mode. You can change their name, age, gender, appearance, stats, traits, relationships, careers, assets, health conditions, and even their cause of death.
-You can give yourself unlimited money and ribbons with God Mode. You can buy anything you want and achieve any goal you have.
-You can also control the events and scenarios that happen in the game with God Mode. You can choose what kind of situations you want to encounter and how you want to react to them.
-You can also have fun with God Mode by experimenting with different outcomes and possibilities. You can see what happens if you do something crazy or outrageous. You can also see what happens if you do something good or noble.
-
-The tips and tricks to make the most of your life in BitLife
-Playing BitLife God Mode APK is not just about having fun, but also about making the most of your life in the game. Here are some tips and tricks to help you do that:
-
-Set a goal for yourself and work towards it. Whether you want to be rich, famous, happy, or successful, you need to have a clear vision of what you want and how to get it.
-Make smart choices and avoid bad ones. Even with God Mode, you still need to be careful about what you do and how it affects your life and others. Some choices can have negative consequences that can ruin your life or make you unhappy.
-Take care of your health and happiness. Even with God Mode, you still need to maintain your physical and mental well-being. You need to eat well, exercise regularly, visit the doctor when needed, avoid stress, and do things that make you happy.
-Build positive relationships with others. Even with God Mode, you still need to have good social skills and connections. You need to be kind, respectful, loyal, and supportive to your family, friends, lovers, and colleagues.
-Have fun and enjoy your life. Even with God Mode, you still need to have a sense of humor and adventure. You need to try new things, explore new places, learn new skills, and have fun with your hobbies.
-
-Conclusion
-A summary of the main points of the article
-In conclusion, BitLife God Mode APK is a modded version of BitLife that gives you access to extra features that are not available in the original game. One of these features is God Mode, which allows you to edit your character and other people in the game as you wish. You can also give yourself unlimited money and ribbons. To download and install BitLife God Mode APK, you need to find a trusted source that provides the latest version of the modded file. Then, you need to enable the installation of apps from unknown sources on your device. Then, you need to install the APK file on your device. To play BitLife God Mode APK, you need to create or edit your character with God Mode. You can also edit other people in the game with God Mode. You can also control the events and scenarios that happen in the game with God Mode. You can also have fun with God Mode by experimenting with different outcomes and possibilities
. You can also make the most of your life in the game by setting a goal for yourself, making smart choices, taking care of your health and happiness, building positive relationships with others, and having fun and enjoying your life.
-bitlife life simulator mod apk with god mode
-download bitlife god mode apk for android
-bitlife hack apk god mode unlocked
-how to get bitlife god mode for free apk
-bitlife premium apk with bitizenship and god mode
-bitlife simulator god mode apk latest version
-bitlife modded apk god mode 2023
-bitlife apk mod menu with god mode
-bitlife god mode apk no root
-bitlife god mode apk ios download
-bitlife cheats apk god mode enabled
-bitlife online simulator with god mode apk
-bitlife god mode apk reddit
-bitlife simulator apk full version with god mode
-bitlife unlimited money and god mode apk
-bitlife god mode apk for pc
-bitlife cracked apk with god mode feature
-bitlife simulator tips and tricks for god mode apk
-bitlife god mode apk free download 2023
-bitlife update apk with god mode option
-bitlife simulator review of god mode apk
-bitlife best life simulator with god mode apk
-bitlife mod apk unlimited everything and god mode
-how to install bitlife god mode apk on android
-bitlife simulator gameplay with god mode apk
-bitlife pro apk with god mode and no ads
-bitlife simulator guide for god mode apk users
-bitlife modded version with god mode and more apk
-how to activate god mode in bitlife simulator apk
-bitlife simulator challenges with god mode apk
-how to play bitlife simulator with god mode apk on pc
-bitlife simulator fun scenarios with god mode apk
-how to update bitlife simulator to get god mode apk
-how to download and install bitlife modded version with God Mode APK on iOS devices.
-A call to action for the readers
-If you are interested in playing BitLife God Mode APK, you can download it from the link below and start living your dream life. You can also share your experiences and feedback with us in the comments section. We would love to hear from you. Thank you for reading this article and have a great day!
-Download BitLife God Mode APK here
-FAQs
-What is BitLife?
-BitLife is a realistic life simulator game that lets you live a virtual life from birth to death. You can make decisions that affect your happiness, health, intelligence, looks, and money. You can also interact with other characters in the game.
-What is BitLife God Mode APK?
-BitLife God Mode APK is a modded version of BitLife that gives you access to extra features that are not available in the original game. One of these features is God Mode, which allows you to edit your character and other people in the game as you wish. You can also give yourself unlimited money and ribbons.
-How to download and install BitLife God Mode APK?
-To download and install BitLife God Mode APK, you need to find a trusted source that provides the latest version of the modded file. Then, you need to enable the installation of apps from unknown sources on your device. Then, you need to install the APK file on your device.
-How to play BitLife God Mode APK?
-To play BitLife God Mode APK, you need to create or edit your character with God Mode. You can also edit other people in the game with God Mode. You can also control the events and scenarios that happen in the game with God Mode. You can also have fun with God Mode by experimenting with different outcomes and possibilities. You can also make the most of your life in the game by setting a goal for yourself, making smart choices, taking care of your health and happiness, building positive relationships with others, and having fun and enjoying your life.
-Is BitLife God Mode APK safe to use?
-BitLife God Mode APK is safe to use as long as you download it from a trusted source and follow the instructions carefully. However, you should be aware that using modded apps may violate the terms and conditions of the original game and may result in bans or other consequences. You should also be careful about what you do with God Mode and respect the rights and feelings of other people in the game.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Redline v3.0 from Mega and Hack Roblox Like a Pro.md b/spaces/congsaPfin/Manga-OCR/logs/Download Redline v3.0 from Mega and Hack Roblox Like a Pro.md
deleted file mode 100644
index 27fb73e62da4334fd3caf8123c32d0dae898f839..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Download Redline v3.0 from Mega and Hack Roblox Like a Pro.md
+++ /dev/null
@@ -1,119 +0,0 @@
-
-Red Line v3.0: A Roblox Cheat You Need to Know
-Roblox is one of the most popular online gaming platforms in the world, with millions of players creating and exploring various games every day. However, not everyone is satisfied with the default gameplay and limitations of Roblox. Some players want to have more fun, freedom, and power in their gaming experience. That's why they use cheats and hacks to modify the game and gain an edge over other players.
-red line v3 0 download mega DOWNLOAD ✶✶✶ https://urlca.com/2uOcf7
-One of the most widely used cheats for Roblox is Red Line v3.0, a powerful exploit that allows you to do almost anything you want in any Roblox game. In this article, we will tell you everything you need to know about Red Line v3.0, including what it is, how to download it from Mega, how to install it on your PC, how to use it in Roblox games, and why you should or shouldn't use it.
-What is Red Line v3.0?
-Red Line v3.0 is a cheat or exploit for Roblox that enables you to manipulate the game code and execute various commands and scripts that are not normally possible in the game. It is also known as a level 7 executor, which means it can run any script or code that you want in Roblox.
-Features of Red Line v3.0
-Red Line v3.0 has many features that make it one of the best cheats for Roblox. Some of these features are:
-
-Easy to use interface: Red Line v3.0 has a simple and user-friendly interface that allows you to easily access and activate different functions and options.
-Multiple games support: Red Line v3.0 works on almost any Roblox game, whether it is popular or obscure, new or old, single-player or multiplayer.
-Customizable settings: Red Line v3.0 lets you customize various settings and preferences according to your needs and preferences.
-Frequent updates: Red Line v3.0 is constantly updated by its developers to ensure that it works smoothly and efficiently on the latest version of Roblox and its games.
-Lots of scripts: Red Line v3.0 comes with a huge library of scripts that you can use to perform different actions and effects in Roblox games. You can also add your own scripts or download scripts from other sources.
-
-How to download Red Line v3.0 from Mega
-Mega is one of the most popular file-sharing platforms on the internet, where you can upload and download files for free. It is also where you can find the latest version of Red Line v3.0 for download.
-red line v3 0 roblox cheat download
-how to install red line v3 0 for roblox
-red line v3 0 mega link
-red line v3 0 updated commands
-red line v3 0 jailbreak hack
-red line v3 0 phantom forces hack
-red line v3 0 nonsense diamond 1.3
-red line v3 0 roblox profile
-red line v3 0 filmora free mode
-red line v3 0 scree o matic free
-red line v3 0 roblox exploit download
-how to use red line v3 0 in roblox
-red line v3 0 mega download link
-red line v3 0 updated exe
-red line v3 0 jailbreak script
-red line v3 0 phantom forces script
-red line v3 0 nonsense diamond download
-red line v3 0 roblox hack youtube
-red line v3 0 filmora video editor
-red line v3 0 scree o matic recorder
-red line v3 0 roblox exploit mega
-how to fix red line v3 0 errors
-red line v3 0 mega free download
-red line v3 0 updated features
-red line v3 0 jailbreak money hack
-red line v3 0 phantom forces aimbot
-red line v3 0 nonsense diamond tutorial
-red line v3 0 roblox hack reddit
-red line v3 0 filmora crack download
-red line v3 0 scree o matic tutorial
-red line v3 0 roblox exploit reddit
-how to update red line v3 0 version
-red line v3 0 mega no virus
-red line v3 0 updated tutorial
-red line v3 0 jailbreak auto rob
-red line v3 0 phantom forces esp
-red line v3 0 nonsense diamond review
-red line v3 0 roblox hack discord
-red line v3 0 filmora activation key
-red line v3 0 scree o matic pro
-To download Red Line v3.0 from Mega, follow these steps:
-
-Go to this link: [4](https://mega.nz/#)!qjATTapB!Q2vruqMDHv...
-Click on the "Download" button and wait for the file to be downloaded on your PC.
-The file name should be "Redline_v30.zip" and the file size should be around 10 MB.
-
-How to install Red Line v3.0 on your PC
-After downloading Red Line v3.0 from Mega, you need to install it on your PC before you can use it in Roblox games. To install Red Line v3.0 on your PC, follow these steps:
-
-Extract the "Redline_v30.zip" file using a program like WinRAR or 7-Zip.
-Open the extracted folder and double-click on the "Redline.exe" file to run the cheat.
-You may see a warning message from your antivirus or firewall, asking you to allow or block the cheat. Choose to allow or unblock the cheat, as it is not harmful to your PC.
-You should see a window with the Red Line v3.0 logo and a menu with different options and tabs.
-Congratulations, you have successfully installed Red Line v3.0 on your PC!
-
-How to use Red Line v3.0 in Roblox games
-Now that you have installed Red Line v3.0 on your PC, you can use it to cheat and hack in any Roblox game you want. To use Red Line v3.0 in Roblox games, follow these steps:
-
-Open Roblox on your PC and log in to your account.
-Choose a game that you want to play and join a server.
-Switch back to the Red Line v3.0 window and select the "Scripts" tab.
-Browse through the list of scripts and choose one that suits your needs and preferences. You can also search for scripts by name or category.
-Click on the "Execute" button to run the script in the game.
-You should see a confirmation message on the top left corner of the game screen, indicating that the script has been executed successfully.
-Enjoy the game with your new cheat and hack!
-
-Why use Red Line v3.0?
-Red Line v3.0 is one of the most popular and powerful cheats for Roblox, but why should you use it? What are the benefits and risks of using it? Let's find out.
-Benefits of using Red Line v3.0
-Using Red Line v3.0 can give you many advantages and benefits in Roblox games, such as:
-
-Get unlimited Robux and other resources: Robux is the currency of Roblox, which you can use to buy items, skins, game passes, and more. However, getting Robux can be hard and expensive, especially if you want to get a lot of them. With Red Line v3.0, you can get unlimited Robux and other resources for free, without spending any real money or time.
-Unlock all items and skins: There are thousands of items and skins in Roblox, which you can use to customize your avatar and make it look cool and unique. However, some of these items and skins are locked behind paywalls or require certain achievements or levels to unlock them. With Red Line v3.0, you can unlock all items and skins in Roblox, without any restrictions or limitations.
-Bypass anti-cheat systems and avoid bans: Roblox has a strict policy against cheating and hacking, which means that if you are caught using cheats or hacks in Roblox games, you may face consequences such as account suspension or termination. However, with Red Line v3.0, you can bypass anti-cheat systems and avoid bans, as it is undetectable by most anti-cheat software and has a built-in anti-ban feature that protects your account from being banned.
-
-Risks of using Red Line v3.0
-However, using Red Line v3.0 also comes with some risks and drawbacks that you should be aware of, such as:
-
-Potential malware and viruses: Although Red Line v3.0 is not harmful to your PC, there is always a possibility that some malicious files or programs may be hidden or attached to it, which may infect your PC with malware or viruses. Therefore, you should always scan the cheat file with a reliable antivirus program before installing it on your PC.
-Account suspension or termination: Even though Red Line v3.0 has an anti-ban feature that protects your account from being banned, there is still a chance that you may get caught by Roblox moderators or other players who may report you for cheating or hacking. If this happens, you may face account suspension or termination, which means that you will lose access to your account and all its data and progress. Ethical and legal issues: Cheating and hacking in Roblox games is not only against the rules and terms of service of Roblox, but also unethical and unfair to other players who play the game legitimately and honestly. Moreover, some cheats and hacks may violate the intellectual property rights of Roblox or its game developers, which may result in legal actions or lawsuits against you. Therefore, you should always respect the rights and interests of Roblox and its community, and use cheats and hacks at your own risk and responsibility.
-
-Conclusion
-Red Line v3.0 is a cheat or exploit for Roblox that allows you to do almost anything you want in any Roblox game. It has many features and functions that can enhance your gaming experience and give you an edge over other players. However, it also has some risks and drawbacks that you should be aware of, such as potential malware and viruses, account suspension or termination, and ethical and legal issues.
-Therefore, before you decide to use Red Line v3.0, you should weigh the pros and cons carefully, and consider the consequences and implications of your actions. You should also follow the instructions on how to download, install, and use Red Line v3.0 properly, to avoid any problems or errors.
-We hope this article has given you a comprehensive overview of Red Line v3.0, and helped you make an informed decision on whether to use it or not. If you have any questions or comments, feel free to leave them below. Thank you for reading!
-FAQs
-Here are some frequently asked questions about Red Line v3.0:
-
-Q: Is Red Line v3.0 safe to use?
-A: Red Line v3.0 is not harmful to your PC, but it may contain some malware or viruses that may infect your PC. Therefore, you should always scan the cheat file with a reliable antivirus program before installing it on your PC.
-Q: Is Red Line v3.0 free to use?
-A: Yes, Red Line v3.0 is free to use, and you can download it from Mega without paying any money.
-Q: Does Red Line v3.0 work on all Roblox games?
-A: Yes, Red Line v3.0 works on almost any Roblox game, whether it is popular or obscure, new or old, single-player or multiplayer.
-Q: How often is Red Line v3.0 updated?
-A: Red Line v3.0 is constantly updated by its developers to ensure that it works smoothly and efficiently on the latest version of Roblox and its games.
-Q: Where can I find more scripts for Red Line v3.0?
-A: You can find more scripts for Red Line v3.0 on various websites and forums that specialize in Roblox cheats and hacks, such as [5](https://v3rmillion.net/) or [6](https://robloxscripts.com/). You can also create your own scripts or download scripts from other sources.
- 401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Far Cry 6 APK The Best Way to Play the Game on Your Mobile Device.md b/spaces/congsaPfin/Manga-OCR/logs/Far Cry 6 APK The Best Way to Play the Game on Your Mobile Device.md
deleted file mode 100644
index d5f78083a2a1bc9a0956db2fc5a7c8361cf0fcad..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Far Cry 6 APK The Best Way to Play the Game on Your Mobile Device.md
+++ /dev/null
@@ -1,119 +0,0 @@
-
-Far Cry 6 APK: How to Download and Play the Latest Installment of the Popular Shooter Series on Your Mobile Device
- Introduction
- Are you a fan of the Far Cry series, the action-adventure shooter games that take you to exotic and dangerous locations around the world? If so, you must be excited about the upcoming release of Far Cry 6, the sixth main entry in the franchise. But did you know that you can also play Far Cry 6 on your mobile device, thanks to the Far Cry 6 APK?
- What is Far Cry 6?
- Far Cry 6 is a video game developed by Ubisoft Toronto and published by Ubisoft. It is set to be released on October 7, 2023, for Windows, PlayStation 4, PlayStation 5, Xbox One, Xbox Series X/S, and Stadia. It is also available for mobile devices through the Far Cry 6 APK, which is a modified version of the game that can run on Android and iOS platforms.
-far cry 6 apk DOWNLOAD ✪✪✪ https://urlca.com/2uObpd
- What are the features of Far Cry 6?
- Far Cry 6 is a first-person shooter game that lets you experience the thrill and chaos of guerrilla warfare in a fictional Caribbean island called Yara. You will play as Dani Rojas, a local Yaran who joins a resistance movement called Libertad, to overthrow the tyrannical regime of Anton Castillo, a dictator who wants to restore Yara to its former glory by any means necessary.
- Some of the features of Far Cry 6 are:
-
-A massive open-world map that you can explore by foot, vehicle, or animal.
-A variety of weapons, vehicles, and gadgets that you can customize and craft to suit your playstyle.
-A dynamic weather system and a day-night cycle that affect the gameplay and environment.
-A co-op mode that allows you to play with a friend online or offline.
-A multiplayer mode that pits you against other players in various modes and maps.
-
- How to download and install Far Cry 6 APK on your Android or iOS device?
- If you want to play Far Cry 6 on your mobile device, you will need to download and install the Far Cry 6 APK file. This is a simple process that only takes a few minutes. Here are the steps you need to follow:
-
-Go to https://apkrey.com/far-cry-6-mobile-download/ , which is a trusted and reliable source for downloading the Far Cry 6 APK file.
-Click on the download button and wait for the file to be downloaded on your device.
-Once the download is complete, locate the file in your device's storage and tap on it to start the installation process.
-Follow the instructions on the screen and allow the necessary permissions for the app to run smoothly.
-After the installation is done, launch the app and enjoy playing Far Cry 6 on your mobile device.
-
- Main Body
- The story and setting of Far Cry 6
- Far Cry 6 takes place in Yara, a fictional island nation in the Caribbean that is inspired by Cuba. Yara is a tropical paradise that has been frozen in time due to a decades-long economic embargo. However, beneath its beauty lies a dark and oppressive history of violence and corruption. You will join the ranks of Libertad, a ragtag group of rebels who fight for freedom and democracy against Castillo's regime.
- The island of Yara and its dictator Anton Castillo
- Yara is a fictional island nation in the Caribbean that is inspired by Cuba. It is a tropical paradise that has been frozen in time due to a decades-long economic embargo. However, beneath its beauty lies a dark and oppressive history of violence and corruption. Yara is ruled by Anton Castillo, a dictator who wants to restore Yara to its former glory by any means necessary. He is a charismatic and ruthless leader who uses propaganda, fear, and force to maintain his power. He is also grooming his son Diego to be his successor, but Diego has doubts about his father's methods and motives.
- The protagonist Dani Rojas and the resistance movement Libertad
- You will play as Dani Rojas, a local Yaran who joins a resistance movement called Libertad, to overthrow Castillo's regime. Dani is an orphan who was sent to a military academy at 16, where they learned how to handle a gun until they were dishonorably discharged and sent out to the streets. Dani is a reluctant hero who initially wants to escape Yara, but decides to stay and fight for their homeland after witnessing the atrocities committed by Castillo's forces. Libertad is a group of rebels who fight for freedom and democracy against Castillo's tyranny. They are led by Clara Garcia, a charismatic and idealistic leader who believes in the power of the people. Libertad consists of various factions and allies, such as the Montero clan, the Maximas Matanzas, the Legends of '67, and La Moral.
- The gameplay and mechanics of Far Cry 6
- Far Cry 6 is a first-person shooter game that lets you experience the thrill and chaos of guerrilla warfare in Yara. You will have access to a variety of weapons, vehicles, gadgets, and animal companions (called Amigos) that you can use to explore, fight, and survive in the open-world environment. You will also be able to customize and craft your weapons and gear to suit your playstyle and preferences.
-far cry 6 mobile apk download
-far cry 6 android apk obb
-far cry 6 apk mod unlimited money
-far cry 6 apk data offline
-far cry 6 apk free download for android
-far cry 6 apk + sd data
-far cry 6 apk full version
-far cry 6 apk revdl
-far cry 6 apk rexdl
-far cry 6 apk pure
-far cry 6 apk mirror
-far cry 6 apk highly compressed
-far cry 6 apk latest version
-far cry 6 apk hack
-far cry 6 apk cracked
-far cry 6 apk no verification
-far cry 6 apk gameplay
-far cry 6 apk requirements
-far cry 6 apk size
-far cry 6 apk andropalace
-far cry 6 apk uptodown
-far cry 6 apk apkpure
-far cry 6 apk android republic
-far cry 6 apk android 1
-far cry 6 apk android oyun club
-far cry 6 apk best settings
-far cry 6 apk beta version
-far cry 6 apk cheats
-far cry 6 apk download link
-far cry 6 apk english version
-far cry 6 apk for pc
-far cry 6 apk for ios
-far cry 6 apk game download
-far cry 6 apk google drive
-far cry 6 apk how to install
-far cry 6 apk ios download
-far cry 6 apk low mb
-far cry 6 apk mega link
-far cry 6 apk new update
-far cry 6 apk offline mode
-far cry 6 apk original file
-far cry 6 apk play store
-far cry 6 apk release date
-far cry 6 apk system requirements
-far cry 6 apk unlimited ammo
-far cry 6 apk video review
-far cry 6 apk without human verification
-far cry 6 apk xda developers
-far cry 6 apk youtube trailer
- The open-world exploration and combat
- Far Cry 6 features a massive open-world map that you can explore by foot, vehicle, or animal. You can travel across different regions of Yara, such as jungles, beaches, cities, farms, and mountains. You can also discover various points of interest, such as outposts, checkpoints, FND caches, Yaran contraband, treasure hunts, criptograma chests, relics, side missions, and more. You can also interact with various characters and factions that will offer you quests, information, rewards, or challenges.
- The combat in Far Cry 6 is fast-paced and explosive. You can choose to approach your enemies stealthily or loudly, depending on your strategy and situation. You can use various weapons, such as rifles, shotguns, pistols, bows, launchers, resolver weapons (which are improvised weapons made from scrap materials), and supremo weapons (which are powerful backpack devices that can unleash devastating effects). You can also use various gadgets, such as grenades, mines, throwing knives, molotovs, EMPs, medkits, and more. You can also use your animal companions (such as Chorizo the wiener dog or Guapo the crocodile) to distract or attack your enemies.
- The customization and crafting options
- Far Cry 6 allows you to customize and craft your weapons and gear to fit your playstyle and preferences. You can modify your weapons with various attachments, such as scopes, silencers, magazines, barrels, stocks, and skins. You can also craft your own resolver weapons from various parts and materials that you can find or buy in the world. You can also customize your gear, such as your clothes, shoes, hats, glasses, masks, and backpacks. You can also craft your own supremo weapons from various blueprints and components that you can unlock or earn in the game.
- The co-op and multiplayer modes
- Far Cry 6 also offers a co-op mode that allows you to play with a friend online or offline. You can join forces with another player and explore the entire map of Yara together. You can also share your weapons, vehicles, and gadgets with your co-op partner. You can also tackle any mission, activity, or challenge together, and enjoy the dynamic events and surprises that the game will throw at you.
- The multiplayer mode of Far Cry 6 is still under development, but it is expected to feature various modes and maps that will pit you against other players in competitive matches. You will be able to use your customized weapons and gear in the multiplayer mode, and earn rewards and ranks for your performance.
- Conclusion
- Far Cry 6 is a game that promises to deliver an immersive and exhilarating experience of guerrilla warfare in a tropical paradise. You will be able to explore a vast and diverse open-world map, customize and craft your weapons and gear, and fight against a ruthless dictator and his army. You will also be able to play with a friend in co-op mode, or challenge other players in multiplayer mode. Far Cry 6 is a game that you should not miss if you are a fan of the shooter genre.
- Why you should play Far Cry 6 APK on your mobile device
- If you want to enjoy Far Cry 6 on your mobile device, you should download and install the Far Cry 6 APK file. This is a modified version of the game that can run on Android and iOS platforms. By playing Far Cry 6 APK on your mobile device, you will be able to:
-
-Play the game anytime and anywhere, without the need for a console or a PC.
-Experience the game with high-quality graphics and sound, optimized for your device's specifications.
-Control the game with intuitive touch-screen controls, or use a compatible controller for more accuracy and comfort.
-Save your progress on the cloud, and sync it across different devices.
-Access exclusive content and features that are only available for mobile users.
-
- FAQs
- Here are some frequently asked questions about Far Cry 6 APK:
-
-Q: Is Far Cry 6 APK safe to download and install?
-A: Yes, Far Cry 6 APK is safe to download and install, as long as you use a trusted and reliable source like https://apkrey.com/far-cry-6-mobile-download/ . This site has been verified by many users and reviewers, and it does not contain any viruses or malware.
-Q: Is Far Cry 6 APK free to play?
-A: Yes, Far Cry 6 APK is free to play, but it may contain some in-app purchases or ads that you can choose to support or ignore.
-Q: Do I need an internet connection to play Far Cry 6 APK?
-A: Yes, you will need an internet connection to play Far Cry 6 APK, as it is an online game that requires data transfer and verification. However, you can also play the game offline if you have already downloaded the necessary files on your device.
-Q: How much storage space do I need to download and install Far Cry 6 APK?
-A: You will need at least 4 GB of free storage space on your device to download and install Far Cry 6 APK. However, you may need more space if you want to download additional content or updates for the game.
-Q: Can I play Far Cry 6 APK with my friends?
-A: Yes, you can play Far Cry 6 APK with your friends in co-op mode or multiplayer mode. You can invite your friends to join your game through the app's interface, or join their games through the same method. You can also chat with your friends through voice or text messages while playing the game.
- 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/How to Download 4K Hindi Video Songs from 2020 in Easy Steps.md b/spaces/congsaPfin/Manga-OCR/logs/How to Download 4K Hindi Video Songs from 2020 in Easy Steps.md
deleted file mode 100644
index efb6a2f5097db2121a58a04fc2ab79ba9b0f18b3..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/How to Download 4K Hindi Video Songs from 2020 in Easy Steps.md
+++ /dev/null
@@ -1,88 +0,0 @@
-
-How to Download 4K Hindi Video Songs in 2020
-If you are a fan of Hindi music, you might have heard of 4K video songs. These are songs that have a resolution of 3840 x 2160 pixels, which is four times higher than the standard HD resolution of 1920 x 1080 pixels. 4K video songs offer a stunning visual experience that can make you feel like you are watching a live performance. However, downloading 4K video songs can be tricky, as they have some challenges and limitations. In this article, we will explain what are 4K video songs, why are they popular, what are the benefits and challenges of 4K video songs, and how to download them easily and efficiently.
-4k hindi video songs download 2020 Download ☆ https://urlca.com/2uOasX
- Benefits of 4K Video Songs
-4K video songs have many advantages over lower-resolution videos. Here are some of the benefits of 4K video songs:
-
-Higher resolution: 4K video songs have a resolution of 3840 x 2160 pixels, which is four times higher than the standard HD resolution of 1920 x 1080 pixels. This means that they have more details, clarity, and sharpness than HD videos. You can see every facial expression, every movement, and every color more vividly and realistically.
-Better quality: 4K video songs have a higher bit rate than HD videos, which means that they have more data and information per second. This results in better quality, as there is less compression, noise, and distortion in the video. You can enjoy a smoother and clearer playback without any lag or buffering.
-Immersive experience: 4K video songs can create an immersive experience for the viewers, as they can fill up the entire screen of a large TV or monitor. You can feel like you are part of the scene, as the images are more lifelike and realistic. You can also enjoy a wider viewing angle, as the images are not stretched or distorted.
-
- Challenges of 4K Video Songs
-Despite the benefits of 4K video songs, they also have some drawbacks and limitations. Here are some of the challenges of 4K video songs:
-
-Large file size: 4K video songs have a large file size, as they have more pixels and data than HD videos. For example, a typical HD video song might have a file size of around 100 MB, while a typical 4K video song might have a file size of around 400 MB. This means that downloading or storing 4K video songs can take up a lot of space and time on your device (a rough size estimate is sketched just after this list).
-Limited compatibility: Not all devices and platforms support 4K video songs. You need to have a device that has a screen resolution of at least 3840 x 2160 pixels to watch 4K video songs in their full glory. You also need to have a compatible media player or browser that can play 4K videos without any issues. Some older devices or platforms might not be able to play or display 4K videos at all.
-High bandwidth requirement: To stream or download 4K video songs online, you need to have a high-speed internet connection that can handle the large amount of data. According to Netflix, you need to have at least a speed of 25 Mbps to stream their content in Ultra HD quality. If your internet connection is slow or unstable, you might experience buffering, lagging, or poor quality when watching or downloading 4K video songs.
-
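As a rough check on the numbers quoted in the list above, the arithmetic is simple enough to sketch in a few lines of Python. The bitrates used in the example calls (15 Mbps and 50 Mbps) are illustrative assumptions, not figures from this article; real 4K bitrates vary widely with the codec and compression used.

# Back-of-the-envelope arithmetic for the 4K figures discussed above.
# The example bitrates are assumptions; actual values depend on the codec.

def pixel_ratio(w1, h1, w2, h2):
    """Ratio of total pixel counts between two resolutions."""
    return (w1 * h1) / (w2 * h2)

def file_size_mb(bitrate_mbps, seconds):
    """Approximate file size in megabytes for a given average bitrate."""
    return bitrate_mbps * seconds / 8  # 8 bits per byte

if __name__ == "__main__":
    # 3840 x 2160 has exactly four times the pixels of 1920 x 1080.
    print(pixel_ratio(3840, 2160, 1920, 1080))   # 4.0

    # A 4-minute song at an assumed ~15 Mbps (heavily compressed 4K)
    # is roughly 450 MB, in line with the "around 400 MB" figure above.
    print(file_size_mb(15, 4 * 60))              # 450.0

    # At an assumed ~50 Mbps (lightly compressed 4K), the same song
    # is closer to 1.5 GB, which is why download times vary so much.
    print(file_size_mb(50, 4 * 60))              # 1500.0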
- Solutions for 4K Video Songs
-If you want to download 4K video songs in 2020, you have some options to overcome the challenges and enjoy the benefits. Here are some of the solutions for 4K video songs:
-
-4K video downloader: This is a software or app that can download 4K video songs from various online sources, such as YouTube, Vimeo, Dailymotion, etc. You can choose the quality, format, and destination of the downloaded videos, and also convert them to other formats if needed. Some examples of 4K video downloaders are 4K Video Downloader, WinX HD Video Converter Deluxe, VideoProc, etc.
-Online converter: This is a website or service that can convert 4K video songs to other resolutions or formats online. You can upload the 4K video song from your device or paste the URL of the online source, and then choose the output quality, format, and size. Some examples of online converters are Online Video Converter, Convert2MP3, ClipConverter, etc.
-Streaming platforms: This is a website or app that can stream 4K video songs online without downloading them. You can watch the 4K video songs on your device or cast them to a larger screen. Some examples of streaming platforms that offer 4K video songs are YouTube, Netflix, Amazon Prime Video, etc.
-
- Conclusion
-4K video songs are songs that have a resolution of 3840 x 2160 pixels, which is four times higher than the standard HD resolution of 1920 x 1080 pixels. They offer a stunning visual experience that can make you feel like you are watching a live performance. However, they also have some challenges and limitations, such as large file size, limited compatibility, and high bandwidth requirement. To download 4K video songs in 2020, you can use a 4K video downloader, an online converter, or a streaming platform. Here are some tips for downloading 4K video songs:
-
-Check your device and platform compatibility: Make sure that your device and platform can support 4K video songs before downloading or streaming them. You can check the screen resolution, media player, browser, and system requirements of your device and platform.
-Choose the best quality and format: Depending on your preference and purpose, you can choose the best quality and format for your 4K video songs. You can opt for MP4, MKV, AVI, MOV, etc. formats, and also adjust the bit rate, frame rate, aspect ratio, etc.
-Use a reliable internet connection: To download or stream 4K video songs smoothly and quickly, you need to have a reliable internet connection that can handle the large amount of data. You can use a wired connection or a Wi-Fi connection with a speed of at least 25 Mbps.
-
- FAQs
-Here are some frequently asked questions about 4K video songs:
-Thoda Thoda Pyaar Full Video Song 4k 60fps
-Sidharth Malhotra & Neha Sharma 4K Songs
-Free Amazon Prime for 1 Month (India)
-Teri Nazar Ne Yeh Kya Kar Diya Lyrics
-Ke Thoda Thoda Pyaar Hua Tumse
-Meri Aankhon Ki Dua Hai Yeh Chehra Tera
-Zee Music Originals 4K Videos
-Barsaat Ki Dhun - Full 4K Song
-Jubin Nautiyal, Gurmeet Choudhary 4K Songs
-Rochak Kohli New Song 2022
-Burjkhalifa - Full Video 4K
-Laxmii Akshay Kumar Kiara Advani 4K Songs
-Nikhita Gandhi Shashi-Dj Khushi 4K Songs
-Meri Zindagi Hai Tu Lyrical 4K
-Satyameva Jayate 2 John A Divya K Kumar 4K Songs
-Himesh R Kamaal K Palak M 4K Songs
-Naiyo Lagda - Kisi Ka Bhai Kisi Ki Jaan 4K
-Salman Khan & Pooja Hegde 4K Songs
-Jubin Nautiyal New Song Romantic Hits 4K
-Audio Jukebox Jubin Hit Songs Collection 4K
-YRF MD Asaduzzaman 4K Songs Download
-sad boy 2 days ago 4K Videos
-Rakib Sultan Music 33M views 4K Songs
-King Of 90's 73M views 4K Songs
-SonyMusicIndiaVEVO 102M views 4K Songs
-Hedayat Ullah Rasel 16M views 4K Songs
-Stebin Ben 99M views 4K Songs
-T-Series Zee Music Company Zeenu Zen 4K Songs
-Thoda Thoda Pyaar Female Song 2 of 2 Song in 4K
-Nilesh Ahuja Artist of Zee Music Originals in 4K
-Zeemusiccompany License of Zee Music Company in 4K
-LatinAutor LatinAutorPerf Zee Music Publishing in 4K
-Get YouTube Premium Music in 4K Quality
-New Scientist Korean nuclear fusion reactor in 4K Video
-The Sun Inside ‘holy grail’ fusion experiments in 4K Video
-Yahoo News Nuclear fusion breakthrough in 4K Video
-Wikipedia Sun Solar core Solar atmosphere in 4K Video
-Montana Core of the Sun's energy in 4K Video
-Cornell University How hot is each one of the layers of the sun in 4K Video
-NASA Sun Fact Sheet in 4K Video
-
-What is the difference between 4K and Ultra HD?
-4K and Ultra HD are often used interchangeably to refer to the same resolution of 3840 x 2160 pixels. However, technically speaking, 4K is a cinema standard that has a resolution of 4096 x 2160 pixels, while Ultra HD is a consumer standard that has a resolution of 3840 x 2160 pixels.
-How much space does a 4K video song take?
-The exact space that a 4K video song takes depends on various factors, such as the length, bit rate, format, compression, etc. However, as a general rule of thumb, a typical 4K video song might have a file size of around 400 MB per minute.
-How can I play 4K video songs on my TV?
-To play 4K video songs on your TV, you need to have a TV that has a screen resolution of at least 3840 x 2160 pixels. You also need to have a compatible media player or device that can play or cast 4K videos to your TV. Some examples are Blu-ray players, Some examples are Blu-ray players, streaming devices, gaming consoles, etc. You can also use a HDMI cable or a wireless connection to connect your device to your TV.
-Where can I find 4K video songs online?
-There are many online sources that offer 4K video songs, such as YouTube, Vimeo, Dailymotion, etc. You can search for 4K video songs by using keywords like "4K", "UHD", "2160p", etc. You can also filter the results by quality, duration, genre, etc.
-How can I download 4K video songs from YouTube?
-To download 4K video songs from YouTube, you can use a 4K video downloader software or app, such as 4K Video Downloader, WinX HD Video Converter Deluxe, VideoProc, etc. You can also use an online converter website or service, such as Online Video Converter, Convert2MP3, ClipConverter, etc. You just need to copy and paste the URL of the YouTube video song and choose the output quality and format.
- 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/congsaPfin/Manga-OCR/logs/Temple Run 2 Earth Day Mod Apk A Guide for Beginners.md b/spaces/congsaPfin/Manga-OCR/logs/Temple Run 2 Earth Day Mod Apk A Guide for Beginners.md
deleted file mode 100644
index ca30267db5ca3c93f505ba5828b23b924a79dc80..0000000000000000000000000000000000000000
--- a/spaces/congsaPfin/Manga-OCR/logs/Temple Run 2 Earth Day Mod Apk A Guide for Beginners.md
+++ /dev/null
@@ -1,124 +0,0 @@
-
-Temple Run 2 Earth Day Mod Apk: What You Need to Know
-If you are a fan of Temple Run 2, one of the most popular running games on Android devices, you might be interested in trying out a mod apk that celebrates Earth Day. In this article, we will tell you everything you need to know about Temple Run 2 Earth Day Mod Apk, including what it is, how to download and install it, how to play it, and what are its advantages and disadvantages.
- What is Temple Run 2?
-Temple Run 2 is a sequel to the original Temple Run game that was released in 2011 by Imangi Studios. It is an endless runner game where you have to control a character who is escaping from a group of evil monkeys after stealing a cursed idol from a temple. Along the way, you have to avoid various obstacles such as cliffs, zip lines, mines, waterfalls, and fire jets by swiping left or right to turn, up to jump, or down to slide.
-temple run 2 earth day mod apk Download Zip >>> https://urlca.com/2uO4na
-Temple Run 2 features beautiful graphics, new environments, new obstacles, new power-ups, new achievements, and special powers for each character. You can also choose from different characters to play with, such as Guy Dangerous, Scarlett Fox, Barry Bones, Karma Lee, Montana Smith, Zack Wonder, Francisco Montoya, Maria Selva, Rahi Raaja, Nidhi Nirmal, Usain Bolt, Bruce Lee, Cleopatra, Imhotep, Wolfman, Sir Montague, Santa Claus, Mrs. Claus, Freya Coldheart, Sigur Frostbeard, and Pirate Covey.
-Temple Run 2 has been downloaded over 500 million times on Google Play Store and has received positive reviews from critics and players alike. It is one of the most addictive and fun games on mobile devices.
- What is Earth Day?
-Earth Day is an annual event that celebrates the planet Earth and raises awareness about environmental issues. The first Earth Day was held on April 22, 1970, and was organized by Senator Gaylord Nelson, who wanted to inspire people to take action for environmental protection. Since then, Earth Day has become a global movement that involves millions of people from more than 190 countries.
-Earth Day aims to educate people about the importance of preserving natural resources for future generations. Some of the activities that people do on Earth Day include planting trees, cleaning up litter, recycling, signing petitions, donating to environmental causes, and participating in rallies and marches. Earth Day also encourages people to adopt eco-friendly habits such as using renewable energy, reducing waste, conserving water, and eating organic food.
- What is a mod apk?
-A mod apk is a modified version of an original application that has been altered by a third-party developer to add or remove some features. A mod apk can also be called a hacked apk or a cracked apk. A mod apk can provide users with benefits such as unlimited resources, unlocked items, premium features, ad-free experience, and more. However, a mod apk can also pose some risks such as malware infection, data theft, legal issues, and ethical concerns.
-A mod apk works by bypassing the security measures of the original application and changing some of its code. A mod apk can be downloaded from various websites that host them. However, not all mod apks are safe and reliable. Some of them may contain viruses, spyware, or other malicious software that can harm your device or steal your personal information. Therefore, you should always be careful when downloading and installing a mod apk and only use trusted sources.
- What is Temple Run 2 Earth Day Mod Apk?
-Temple Run 2 Earth Day Mod Apk is a special version of Temple Run 2 that was created by a fan to celebrate Earth Day. It is a mod apk that has some unique features that are not available in the original game. Some of these features are:
-
-Unlimited coins and gems: You can get unlimited coins and gems in the game without spending any real money. You can use them to buy power-ups, upgrade your abilities, unlock new characters, and more.
-Unlocked characters: You can play with any character you want without having to unlock them first. You can choose from the original characters or the new ones that are added for Earth Day. Some of the new characters are Earth Warrior, Eco Ranger, Green Guardian, and Nature Lover.
-New environment: You can run in a new environment that is inspired by Earth Day. It is a green and lush forest with trees, flowers, animals, and waterfalls. You can also see some signs and banners that promote environmental awareness and protection.
-New obstacles: You can face new challenges and dangers in the new environment. You have to avoid falling rocks, swinging vines, angry bees, and hungry bears. You also have to watch out for the evil monkeys who are wearing masks and carrying signs that say "Save the Planet".
-New achievements: You can earn new achievements that are related to Earth Day. Some of them are "Go Green", "Eco-Friendly", "Earth Saver", and "Planet Protector".
-
-Temple Run 2 Earth Day Mod Apk is a fun and exciting way to enjoy Temple Run 2 while also supporting a good cause. It is a tribute to the planet Earth and its beauty and diversity.
- How to download and install Temple Run 2 Earth Day Mod Apk?
-If you want to try out Temple Run 2 Earth Day Mod Apk, you have to follow these steps:
-temple run 2 earth day hack apk
-temple run 2 earth day unlimited coins and gems
-temple run 2 earth day modded version download
-temple run 2 earth day cheat apk
-temple run 2 earth day free characters and maps
-temple run 2 earth day latest mod apk
-temple run 2 earth day apk + obb
-temple run 2 earth day mod menu apk
-temple run 2 earth day cracked apk
-temple run 2 earth day premium apk
-temple run 2 earth day full unlocked apk
-temple run 2 earth day mega mod apk
-temple run 2 earth day no ads apk
-temple run 2 earth day mod apk android 1
-temple run 2 earth day mod apk revdl
-temple run 2 earth day mod apk rexdl
-temple run 2 earth day mod apk happymod
-temple run 2 earth day mod apk apkpure
-temple run 2 earth day mod apk offline
-temple run 2 earth day mod apk online
-temple run 2 earth day mod apk for pc
-temple run 2 earth day mod apk for ios
-temple run 2 earth day mod apk for windows phone
-temple run 2 earth day mod apk for blackberry
-temple run 2 earth day mod apk for fire tablet
-temple run 2 earth day mod apk unlimited everything
-temple run 2 earth day mod apk all features unlocked
-temple run 2 earth day mod apk high score
-temple run 2 earth day mod apk unlimited lives
-temple run 2 earth day mod apk god mode
-temple run 2 earth day mod apk anti ban
-temple run 2 earth day mod apk no root
-temple run 2 earth day mod apk no survey
-temple run 2 earth day mod apk no verification
-temple run 2 earth day mod apk no password
-temple run 2 earth day mod apk direct download link
-temple run 2 earth day mod apk mirror link
-temple run 2 earth day mod apk mediafire link
-temple run 2 earth day mod apk google drive link
-temple run 2 earth day mod apk dropbox link
-how to download and install temple run 2 earth day mod apk
-how to play temple run 2 earth day mod apk
-how to update temple run 2 earth day mod apk
-how to uninstall temple run 2 earth day mod apk
-how to backup and restore temple run 2 earth day mod apk
-how to fix errors in temple run 2 earth day mod apk
-how to get more coins and gems in temple run 2 earth day mod apk
-how to unlock all characters and maps in temple run 2 earth day mod apk
-
-Make sure you have Temple Run 2 installed on your device. If not, you can download it from Google Play Store or from this link: .
-Download Temple Run 2 Earth Day Mod Apk from this link: . This is a trusted source that has been verified by many users.
-Enable unknown sources on your device settings. This will allow you to install applications from sources other than Google Play Store.
-Locate the downloaded file on your device storage and tap on it to install it.
-Launch the game and enjoy the modded features.
-
- How to play Temple Run 2 Earth Day Mod Apk?
-The gameplay of Temple Run 2 Earth Day Mod Apk is similar to the original game. You have to run as far as you can while avoiding obstacles and collecting coins and gems. However, there are some differences that you should know:
-
-You can select any character you want from the menu before starting the game. You can also change your character during the game by tapping on the character icon on the top left corner of the screen.
-You can access the shop from the menu or during the game by tapping on the shop icon on the top right corner of the screen. You can use your unlimited coins and gems to buy power-ups, abilities, outfits, wallpapers, and more.
-You can activate special powers for each character by filling up the power meter on the bottom left corner of the screen. You can fill up the power meter by collecting coins, gems, or power-ups. Each character has a different power that can help you in the game. For example, Earth Warrior can create a shield that protects him from obstacles, Eco Ranger can summon a bird that flies him over obstacles, Green Guardian can create a trail of flowers that boosts his speed, and Nature Lover can attract animals that help him collect coins and gems.
-You can also use the Earth Day button on the bottom right corner of the screen to activate a special mode that changes the environment and the obstacles. In this mode, you can see more greenery, wildlife, and Earth Day signs. You can also collect more coins and gems and earn more points. However, you have to be careful of the new obstacles such as falling rocks, swinging vines, angry bees, and hungry bears.
-
-The goal of Temple Run 2 Earth Day Mod Apk is to run as far as you can and score as high as you can. You can also compete with your friends and other players online by connecting your game to Facebook or Google Play Games. You can also share your screenshots and videos of your gameplay on social media platforms such as Instagram, Twitter, or YouTube.
- What are the advantages and disadvantages of Temple Run 2 Earth Day Mod Apk?
-Temple Run 2 Earth Day Mod Apk has some advantages and disadvantages that you should consider before using it. Here are some of them:
- Advantages
-
-It is free to download and use. You do not have to pay any money to enjoy the modded features of the game.
-It is fun and exciting. You can experience a new and different version of Temple Run 2 that has more features, options, and challenges.
-It is easy to use. You do not need any technical skills or knowledge to download and install the mod apk. You just have to follow the simple steps that we have provided above.
-It is educational and inspirational. You can learn more about Earth Day and its significance while playing the game. You can also get inspired to take action for environmental protection and conservation.
-
- Disadvantages
-
-It is risky and unreliable. You may encounter some problems or issues while using the mod apk such as malware infection, data theft, legal issues, or ethical concerns. You may also lose your progress or account if the mod apk is detected or banned by the game developers or authorities.
-It is unfair and unethical. You may gain an unfair advantage over other players who are playing the original game without any modded features. You may also violate the terms and conditions of the game developers or owners by using an unauthorized modification of their application.
-It is temporary and unstable. You may not be able to use the mod apk for a long time as it may become outdated or incompatible with the updates or changes of the original game. You may also lose some of the modded features or functions if they are fixed or removed by the game developers or owners.
-
- Conclusion
-Temple Run 2 Earth Day Mod Apk is a modded version of Temple Run 2 that celebrates Earth Day. It has some unique features that are not available in the original game such as unlimited coins and gems, unlocked characters, new environment, new obstacles, new achievements, and more. It is a fun and exciting way to enjoy Temple Run 2 while also supporting a good cause.
-However, Temple Run 2 Earth Day Mod Apk also has some risks and drawbacks that you should be aware of before using it such as malware infection, data theft, legal issues, ethical concerns, unfair advantage, violation of terms and conditions, outdatedness, incompatibility, instability, and more. Therefore, you should always be careful when downloading and installing a mod apk and only use trusted sources.
-We hope this article has helped you learn more about Temple Run 2 Earth Day Mod Apk and its features, benefits, and disadvantages. If you have any questions or comments about this topic, feel free to leave them below.
- FAQs
-Here are some frequently asked questions and their answers about Temple Run 2 Earth Day Mod Apk:
-
-Is Temple Run 2 Earth Day Mod Apk safe to use?
-Temple Run 2 Earth Day Mod Apk is not completely safe to use as it may contain some viruses, spyware, or other malicious software that can harm your device or steal your personal information. Therefore, you should always scan the file before
Therefore, you should always scan the file before installing it and use a reliable antivirus or anti-malware program on your device. You should also backup your data and progress before using the mod apk in case something goes wrong.
- Is Temple Run 2 Earth Day Mod Apk legal to use?
-Temple Run 2 Earth Day Mod Apk is not legal to use as it violates the intellectual property rights of the game developers or owners. By using a mod apk, you are essentially stealing their work and modifying it without their permission. This can result in legal actions or penalties from the game developers or owners or from the authorities. Therefore, you should respect the rights and efforts of the game developers or owners and only use the original game or the official updates or changes.
- Is Temple Run 2 Earth Day Mod Apk ethical to use?
-Temple Run 2 Earth Day Mod Apk is not ethical to use as it goes against the spirit and purpose of Earth Day. By using a mod apk, you are not only cheating in the game but also disrespecting the planet and its environment. You are also depriving yourself of the challenge and satisfaction of playing the game as it was intended. Therefore, you should honor the message and mission of Earth Day and play the game in a fair and honest way.
- How can I update Temple Run 2 Earth Day Mod Apk?
-Temple Run 2 Earth Day Mod Apk may not be compatible with the latest version of Temple Run 2 or with the new features or changes that are added by the game developers or owners. Therefore, you may need to update your mod apk to keep up with the original game. However, updating a mod apk is not easy or guaranteed as it depends on the availability and reliability of the mod apk developer or source. You may have to wait for a long time or search for a new mod apk that works with the updated version of Temple Run 2. You may also lose some of the modded features or functions if they are fixed or removed by the game developers or owners.
- Where can I find more information about Temple Run 2 Earth Day Mod Apk?
-If you want to learn more about Temple Run 2 Earth Day Mod Apk, you can visit some of the websites that provide reviews, tutorials, videos, screenshots, or downloads of the mod apk. However, you should be careful when visiting these websites as some of them may contain fake, misleading, or harmful information or content. You should also check the comments, ratings, and feedbacks of other users who have used the mod apk before downloading or installing it.
- 197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Aisi Lagi Lagan by Anup Jalota - The Ultimate Bhajan Song Download.md b/spaces/contluForse/HuggingGPT/assets/Aisi Lagi Lagan by Anup Jalota - The Ultimate Bhajan Song Download.md
deleted file mode 100644
index faa4e9a3cbee991b1170ea05c392da8af0936a71..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Aisi Lagi Lagan by Anup Jalota - The Ultimate Bhajan Song Download.md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-Copyright/DMCA: We DO NOT own any copyrights of this Mp3 Song. This Aisi Lagi lagan Meera Ho Gayi Magan was either uploaded by our users @Pagal Songs or it must be readily available on various places on public domains as FREE download. If you want this Aisi Lagi lagan Meera Ho Gayi Magan to be removed or if it is copyright infringement, do drop us an email at [email protected] and this will be taken down within 24 hours!
-Aisi Lagi Lagan Full Song Download DOWNLOAD 🔗 https://ssurll.com/2uzyCR
-Related Tags: Aisi Lagi Lagan, Aisi Lagi Lagan song, Aisi Lagi Lagan MP3 song, Aisi Lagi Lagan MP3, download Aisi Lagi Lagan song, Aisi Lagi Lagan song, Spiritual Songs For Makar Sankranti Aisi Lagi Lagan song, Aisi Lagi Lagan song by Anup Jalota, Aisi Lagi Lagan song download, download Aisi Lagi Lagan MP3 song
-Anup Jalota - Bhajan Icon is a Hindi devotional album released in 2016. The album has 15 songs sung by Anup Jalota. Listen to all the songs in high quality & download Anup Jalota - Bhajan Icon songs on Raaga.com.
-This website offers unlimited downloading of youtube music and Mp3 juice song free download in HD quality. You can also click "PLAY" to play the audio file before you download it. Mp3juices take only 2-5 seconds to convert and download audio files.
-
-It is easy to download mp3 juice by visiting the website and entering the song name into the search box or pasting the URL. Select one search result and then convert it to audio by clicking the download button. Finally, hit the Download button to get the audio file at high speeds.
- aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/contluForse/HuggingGPT/assets/Analisis De La Obra Literaria Dos Pesos De Agua.md b/spaces/contluForse/HuggingGPT/assets/Analisis De La Obra Literaria Dos Pesos De Agua.md
deleted file mode 100644
index 73460753e8a53fe5bfde3617c6fbd06902b44e42..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Analisis De La Obra Literaria Dos Pesos De Agua.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Analisis De La Obra Literaria Dos Pesos De Agua DOWNLOAD ✔ https://ssurll.com/2uzxHA
-
-literary analysis suggested in the course, and focus their attention on ... How are ... represented in literary works from different periods and diverse cultures ... “Mujer negra”; Allende, “Dos palabras” (Tradition and rupture). ▫ Sor Juana ... “Peso ancestral”. Alfonsina ... and the sea forms, through the flames, water. Cursed be the ... 1fdad05405
-
-
-
diff --git a/spaces/contluForse/HuggingGPT/assets/Extreme Surebet Money Maker 9.6.0 Serial Key Keygen __FULL__.md b/spaces/contluForse/HuggingGPT/assets/Extreme Surebet Money Maker 9.6.0 Serial Key Keygen __FULL__.md
deleted file mode 100644
index e1b2f6c25d3eb4aec5215041e96b848ea52f2d1c..0000000000000000000000000000000000000000
--- a/spaces/contluForse/HuggingGPT/assets/Extreme Surebet Money Maker 9.6.0 Serial Key Keygen __FULL__.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-Roulette7crack.com Final Fantasy XIII-2 1.5.3.3 Crack MAC OS X Incl Serial Key Ultra To Hack ini PC Download.rar download-wimax-camera1-wimax-camera-bridge-exploit-for-firmware-upload-box-crack Download This Is The Right! Full Crack Steam Generalfreeware.exe crack Download Direct Link Benzedrine Barbiturate Crkake Incl.rar NekosoftIschia 4 Crack 7 0 Novembre 2017 V3.0.4.r2 iTunes with Series Pass Download fsweekup-win 7-in-1-mac-crack Platinum-fraggy4-crack FullCrackKiller-2-Mac-Bypass-with-Keygen-ARIJINI-Mac-Crack-FTp-Server-RELEASE.rar Machine Cracker Crack 2 torrent software freehold casino road oanenburg west eindhoven Osado TCP/IP Download wp 7.0 Forum Pascal 2.0 For Windows Full Version.rar gta 5 crack trainer download 3d game engine open xml format ini FreeformXd Download Cracked G Spot game Download World Of Warcraft Wep Test Mode Emulator.exe The WinNTD Make USB 7.9 version 3.4.4.2 Download Incl Crack 9 charachter tool hack spywarestrategies.in wps browser Startpage Download Haiku 18 Crack v03 Incl Cracked Codered Acpcad.dll Download World Of Warcraft Wep Test Mode Emulator.exe Torrent.gf The Root Password Stacks Mac Serial Keygen Crack Fortigate F-Secure 2004 Download 1.2.
-Extreme surebet money maker 9.6.0 Serial Key keygen DOWNLOAD >>> https://ssurll.com/2uzy8K
-download torrent video file importrwarehike 4.6 Mac OSX 101562 Download Home windows 2008 professional serial keygen legal music download sites for windows 8 download full game edgetechs.com The latest ACER (ATA2SE: ATA Bridge Secondary Entry) has been the greatest onslaught against Windows, even if you’ve utilized it, and a way to cease Windows from talking with other drives while being used is by clearing the ATA2SE: ATA Bridge Secondary Entry Handle, and that’s exactly what this information is about.
- 899543212b
-
-
\ No newline at end of file
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/lama/saicinpainting/training/losses/constants.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/lama/saicinpainting/training/losses/constants.py
deleted file mode 100644
index ae3e5e151342232be8e2c2a77fe6fd5798dc2a8c..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/lama/saicinpainting/training/losses/constants.py
+++ /dev/null
@@ -1,152 +0,0 @@
-weights = {"ade20k":
- [6.34517766497462,
- 9.328358208955224,
- 11.389521640091116,
- 16.10305958132045,
- 20.833333333333332,
- 22.22222222222222,
- 25.125628140703515,
- 43.29004329004329,
- 50.5050505050505,
- 54.6448087431694,
- 55.24861878453038,
- 60.24096385542168,
- 62.5,
- 66.2251655629139,
- 84.74576271186442,
- 90.90909090909092,
- 91.74311926605505,
- 96.15384615384616,
- 96.15384615384616,
- 97.08737864077669,
- 102.04081632653062,
- 135.13513513513513,
- 149.2537313432836,
- 153.84615384615384,
- 163.93442622950818,
- 166.66666666666666,
- 188.67924528301887,
- 192.30769230769232,
- 217.3913043478261,
- 227.27272727272725,
- 227.27272727272725,
- 227.27272727272725,
- 303.03030303030306,
- 322.5806451612903,
- 333.3333333333333,
- 370.3703703703703,
- 384.61538461538464,
- 416.6666666666667,
- 416.6666666666667,
- 434.7826086956522,
- 434.7826086956522,
- 454.5454545454545,
- 454.5454545454545,
- 500.0,
- 526.3157894736842,
- 526.3157894736842,
- 555.5555555555555,
- 555.5555555555555,
- 555.5555555555555,
- 555.5555555555555,
- 555.5555555555555,
- 555.5555555555555,
- 555.5555555555555,
- 588.2352941176471,
- 588.2352941176471,
- 588.2352941176471,
- 588.2352941176471,
- 588.2352941176471,
- 666.6666666666666,
- 666.6666666666666,
- 666.6666666666666,
- 666.6666666666666,
- 714.2857142857143,
- 714.2857142857143,
- 714.2857142857143,
- 714.2857142857143,
- 714.2857142857143,
- 769.2307692307693,
- 769.2307692307693,
- 769.2307692307693,
- 833.3333333333334,
- 833.3333333333334,
- 833.3333333333334,
- 833.3333333333334,
- 909.090909090909,
- 1000.0,
- 1111.111111111111,
- 1111.111111111111,
- 1111.111111111111,
- 1111.111111111111,
- 1111.111111111111,
- 1250.0,
- 1250.0,
- 1250.0,
- 1250.0,
- 1250.0,
- 1428.5714285714287,
- 1428.5714285714287,
- 1428.5714285714287,
- 1428.5714285714287,
- 1428.5714285714287,
- 1428.5714285714287,
- 1428.5714285714287,
- 1666.6666666666667,
- 1666.6666666666667,
- 1666.6666666666667,
- 1666.6666666666667,
- 1666.6666666666667,
- 1666.6666666666667,
- 1666.6666666666667,
- 1666.6666666666667,
- 1666.6666666666667,
- 1666.6666666666667,
- 1666.6666666666667,
- 2000.0,
- 2000.0,
- 2000.0,
- 2000.0,
- 2000.0,
- 2000.0,
- 2000.0,
- 2000.0,
- 2000.0,
- 2000.0,
- 2000.0,
- 2000.0,
- 2000.0,
- 2000.0,
- 2000.0,
- 2000.0,
- 2000.0,
- 2500.0,
- 2500.0,
- 2500.0,
- 2500.0,
- 2500.0,
- 2500.0,
- 2500.0,
- 2500.0,
- 2500.0,
- 2500.0,
- 2500.0,
- 2500.0,
- 2500.0,
- 3333.3333333333335,
- 3333.3333333333335,
- 3333.3333333333335,
- 3333.3333333333335,
- 3333.3333333333335,
- 3333.3333333333335,
- 3333.3333333333335,
- 3333.3333333333335,
- 3333.3333333333335,
- 3333.3333333333335,
- 3333.3333333333335,
- 3333.3333333333335,
- 3333.3333333333335,
- 5000.0,
- 5000.0,
- 5000.0]
-}
\ No newline at end of file
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/roi_pool.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/roi_pool.py
deleted file mode 100644
index d339d8f2941eabc1cbe181a9c6c5ab5ff4ff4e5f..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmcv/ops/roi_pool.py
+++ /dev/null
@@ -1,86 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch
-import torch.nn as nn
-from torch.autograd import Function
-from torch.autograd.function import once_differentiable
-from torch.nn.modules.utils import _pair
-
-from ..utils import ext_loader
-
-ext_module = ext_loader.load_ext('_ext',
- ['roi_pool_forward', 'roi_pool_backward'])
-
-
-class RoIPoolFunction(Function):
-
- @staticmethod
- def symbolic(g, input, rois, output_size, spatial_scale):
- return g.op(
- 'MaxRoiPool',
- input,
- rois,
- pooled_shape_i=output_size,
- spatial_scale_f=spatial_scale)
-
- @staticmethod
- def forward(ctx, input, rois, output_size, spatial_scale=1.0):
- ctx.output_size = _pair(output_size)
- ctx.spatial_scale = spatial_scale
- ctx.input_shape = input.size()
-
- assert rois.size(1) == 5, 'RoI must be (idx, x1, y1, x2, y2)!'
-
- output_shape = (rois.size(0), input.size(1), ctx.output_size[0],
- ctx.output_size[1])
- output = input.new_zeros(output_shape)
- argmax = input.new_zeros(output_shape, dtype=torch.int)
-
- ext_module.roi_pool_forward(
- input,
- rois,
- output,
- argmax,
- pooled_height=ctx.output_size[0],
- pooled_width=ctx.output_size[1],
- spatial_scale=ctx.spatial_scale)
-
- ctx.save_for_backward(rois, argmax)
- return output
-
- @staticmethod
- @once_differentiable
- def backward(ctx, grad_output):
- rois, argmax = ctx.saved_tensors
- grad_input = grad_output.new_zeros(ctx.input_shape)
-
- ext_module.roi_pool_backward(
- grad_output,
- rois,
- argmax,
- grad_input,
- pooled_height=ctx.output_size[0],
- pooled_width=ctx.output_size[1],
- spatial_scale=ctx.spatial_scale)
-
- return grad_input, None, None, None
-
-
-roi_pool = RoIPoolFunction.apply
-
-
-class RoIPool(nn.Module):
-
- def __init__(self, output_size, spatial_scale=1.0):
- super(RoIPool, self).__init__()
-
- self.output_size = _pair(output_size)
- self.spatial_scale = float(spatial_scale)
-
- def forward(self, input, rois):
- return roi_pool(input, rois, self.output_size, self.spatial_scale)
-
- def __repr__(self):
- s = self.__class__.__name__
- s += f'(output_size={self.output_size}, '
- s += f'spatial_scale={self.spatial_scale})'
- return s
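For orientation, here is a minimal usage sketch of the RoIPool module deleted above. It is not from the original repository: it assumes an upstream mmcv build with its compiled ops installed (the vendored copy above loads the same roi_pool_forward/roi_pool_backward kernels), and the RoI layout follows the assert in the forward pass, i.e. each row is (batch_idx, x1, y1, x2, y2).

# Illustrative sketch only (assumption: upstream mmcv with compiled ops available).
import torch
from mmcv.ops import RoIPool

feats = torch.rand(2, 256, 50, 50)                      # (batch, channels, H, W) feature map
rois = torch.tensor([[0.0, 10.0, 10.0, 200.0, 200.0],   # (batch_idx, x1, y1, x2, y2)
                     [1.0, 40.0, 40.0, 300.0, 240.0]])  # coordinates in image space
pool = RoIPool(output_size=7, spatial_scale=1.0 / 16)   # 1/16 = feature-map stride vs. image
out = pool(feats, rois)                                 # shape: (num_rois, 256, 7, 7)
print(out.shape)
# Note: on builds where roi_pool ships CUDA kernels only, move feats and rois
# to a GPU (e.g. .cuda()) before calling the module.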
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/evaluation/cityscapes_evaluation.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/evaluation/cityscapes_evaluation.py
deleted file mode 100644
index 19b1cb779e5f493cf75c8e6913a90da5c174735f..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/evaluation/cityscapes_evaluation.py
+++ /dev/null
@@ -1,201 +0,0 @@
-# ------------------------------------------------------------------------------
-# Reference: https://github.com/facebookresearch/detectron2/blob/main/detectron2/evaluation/cityscapes_evaluation.py
-# Modified by Jitesh Jain (https://github.com/praeclarumjj3)
-# ------------------------------------------------------------------------------
-
-import glob
-import logging
-import numpy as np
-import os
-import tempfile
-from collections import OrderedDict
-import torch
-from PIL import Image
-
-from annotator.oneformer.detectron2.data import MetadataCatalog
-from annotator.oneformer.detectron2.utils import comm
-from annotator.oneformer.detectron2.utils.file_io import PathManager
-
-from .evaluator import DatasetEvaluator
-
-
-class CityscapesEvaluator(DatasetEvaluator):
- """
- Base class for evaluation using cityscapes API.
- """
-
- def __init__(self, dataset_name):
- """
- Args:
- dataset_name (str): the name of the dataset.
- It must have the following metadata associated with it:
- "thing_classes", "gt_dir".
- """
- self._metadata = MetadataCatalog.get(dataset_name)
- self._cpu_device = torch.device("cpu")
- self._logger = logging.getLogger(__name__)
-
- def reset(self):
- self._working_dir = tempfile.TemporaryDirectory(prefix="cityscapes_eval_")
- self._temp_dir = self._working_dir.name
- # All workers will write to the same results directory
- # TODO this does not work in distributed training
- assert (
- comm.get_local_size() == comm.get_world_size()
- ), "CityscapesEvaluator currently do not work with multiple machines."
- self._temp_dir = comm.all_gather(self._temp_dir)[0]
- if self._temp_dir != self._working_dir.name:
- self._working_dir.cleanup()
- self._logger.info(
- "Writing cityscapes results to temporary directory {} ...".format(self._temp_dir)
- )
-
-
-class CityscapesInstanceEvaluator(CityscapesEvaluator):
- """
- Evaluate instance segmentation results on cityscapes dataset using cityscapes API.
-
- Note:
- * It does not work in multi-machine distributed training.
- * It contains a synchronization, therefore has to be used on all ranks.
- * Only the main process runs evaluation.
- """
-
- def process(self, inputs, outputs):
- from cityscapesscripts.helpers.labels import name2label
-
- for input, output in zip(inputs, outputs):
- file_name = input["file_name"]
- basename = os.path.splitext(os.path.basename(file_name))[0]
- pred_txt = os.path.join(self._temp_dir, basename + "_pred.txt")
-
- if "instances" in output:
- output = output["instances"].to(self._cpu_device)
- num_instances = len(output)
- with open(pred_txt, "w") as fout:
- for i in range(num_instances):
- pred_class = output.pred_classes[i]
- classes = self._metadata.stuff_classes[pred_class]
- class_id = name2label[classes].id
- score = output.scores[i]
- mask = output.pred_masks[i].numpy().astype("uint8")
- png_filename = os.path.join(
- self._temp_dir, basename + "_{}_{}.png".format(i, classes)
- )
-
- Image.fromarray(mask * 255).save(png_filename)
- fout.write(
- "{} {} {}\n".format(os.path.basename(png_filename), class_id, score)
- )
- else:
- # Cityscapes requires a prediction file for every ground truth image.
- with open(pred_txt, "w") as fout:
- pass
-
- def evaluate(self):
- """
- Returns:
- dict: has a key "segm", whose value is a dict of "AP" and "AP50".
- """
- comm.synchronize()
- if comm.get_rank() > 0:
- return
- import cityscapesscripts.evaluation.evalInstanceLevelSemanticLabeling as cityscapes_eval
-
- self._logger.info("Evaluating results under {} ...".format(self._temp_dir))
-
- # set some global states in cityscapes evaluation API, before evaluating
- cityscapes_eval.args.predictionPath = os.path.abspath(self._temp_dir)
- cityscapes_eval.args.predictionWalk = None
- cityscapes_eval.args.JSONOutput = False
- cityscapes_eval.args.colorized = False
- cityscapes_eval.args.gtInstancesFile = os.path.join(self._temp_dir, "gtInstances.json")
-
- # These lines are adopted from
- # https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/evaluation/evalInstanceLevelSemanticLabeling.py # noqa
- gt_dir = PathManager.get_local_path(self._metadata.gt_dir)
- groundTruthImgList = glob.glob(os.path.join(gt_dir, "*", "*_gtFine_instanceIds.png"))
- assert len(
- groundTruthImgList
- ), "Cannot find any ground truth images to use for evaluation. Searched for: {}".format(
- cityscapes_eval.args.groundTruthSearch
- )
- predictionImgList = []
- for gt in groundTruthImgList:
- predictionImgList.append(cityscapes_eval.getPrediction(gt, cityscapes_eval.args))
- results = cityscapes_eval.evaluateImgLists(
- predictionImgList, groundTruthImgList, cityscapes_eval.args
- )["averages"]
-
- ret = OrderedDict()
- ret["segm"] = {"AP": results["allAp"] * 100, "AP50": results["allAp50%"] * 100}
- self._working_dir.cleanup()
- return ret
-
-
-class CityscapesSemSegEvaluator(CityscapesEvaluator):
- """
- Evaluate semantic segmentation results on cityscapes dataset using cityscapes API.
-
- Note:
- * It does not work in multi-machine distributed training.
- * It contains a synchronization, therefore has to be used on all ranks.
- * Only the main process runs evaluation.
- """
-
- def process(self, inputs, outputs):
- from cityscapesscripts.helpers.labels import trainId2label
-
- for input, output in zip(inputs, outputs):
- file_name = input["file_name"]
- basename = os.path.splitext(os.path.basename(file_name))[0]
- pred_filename = os.path.join(self._temp_dir, basename + "_pred.png")
-
- output = output["sem_seg"].argmax(dim=0).to(self._cpu_device).numpy()
- pred = 255 * np.ones(output.shape, dtype=np.uint8)
- for train_id, label in trainId2label.items():
- if label.ignoreInEval:
- continue
- pred[output == train_id] = label.id
- Image.fromarray(pred).save(pred_filename)
-
- def evaluate(self):
- comm.synchronize()
- if comm.get_rank() > 0:
- return
- # Load the Cityscapes eval script *after* setting the required env var,
- # since the script reads CITYSCAPES_DATASET into global variables at load time.
- import cityscapesscripts.evaluation.evalPixelLevelSemanticLabeling as cityscapes_eval
-
- self._logger.info("Evaluating results under {} ...".format(self._temp_dir))
-
- # set some global states in cityscapes evaluation API, before evaluating
- cityscapes_eval.args.predictionPath = os.path.abspath(self._temp_dir)
- cityscapes_eval.args.predictionWalk = None
- cityscapes_eval.args.JSONOutput = False
- cityscapes_eval.args.colorized = False
-
- # These lines are adopted from
- # https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/evaluation/evalPixelLevelSemanticLabeling.py # noqa
- gt_dir = PathManager.get_local_path(self._metadata.gt_dir)
- groundTruthImgList = glob.glob(os.path.join(gt_dir, "*", "*_gtFine_labelIds.png"))
- assert len(
- groundTruthImgList
- ), "Cannot find any ground truth images to use for evaluation. Searched for: {}".format(
- cityscapes_eval.args.groundTruthSearch
- )
- predictionImgList = []
- for gt in groundTruthImgList:
- predictionImgList.append(cityscapes_eval.getPrediction(cityscapes_eval.args, gt))
- results = cityscapes_eval.evaluateImgLists(
- predictionImgList, groundTruthImgList, cityscapes_eval.args
- )
- ret = OrderedDict()
- ret["sem_seg"] = {
- "IoU": 100.0 * results["averageScoreClasses"],
- "iIoU": 100.0 * results["averageScoreInstClasses"],
- "IoU_sup": 100.0 * results["averageScoreCategories"],
- "iIoU_sup": 100.0 * results["averageScoreInstCategories"],
- }
- self._working_dir.cleanup()
- return ret
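For orientation, a minimal sketch (not part of the deleted file) of how a detectron2-style loop would drive the evaluators above; `model`, `data_loader` and the dataset name are placeholders for a registered Cityscapes split carrying "thing_classes"/"gt_dir" metadata.

evaluator = CityscapesInstanceEvaluator("cityscapes_fine_instance_seg_val")  # assumed registered split
evaluator.reset()                       # creates the shared temporary results directory
for inputs in data_loader:              # each input dict carries "file_name"
    outputs = model(inputs)             # must expose an "instances" field per image
    evaluator.process(inputs, outputs)  # writes <basename>_pred.txt and per-instance mask PNGs
results = evaluator.evaluate()          # rank 0 returns {"segm": {"AP": ..., "AP50": ...}}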
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/modeling/pixel_decoder/ops/src/cpu/ms_deform_attn_cpu.h b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/modeling/pixel_decoder/ops/src/cpu/ms_deform_attn_cpu.h
deleted file mode 100644
index 51bb27e9ee828f967e8aa854c2d55574040c6d7e..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/modeling/pixel_decoder/ops/src/cpu/ms_deform_attn_cpu.h
+++ /dev/null
@@ -1,38 +0,0 @@
-/*!
-**************************************************************************************************
-* Deformable DETR
-* Copyright (c) 2020 SenseTime. All Rights Reserved.
-* Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-**************************************************************************************************
-* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
-**************************************************************************************************
-*/
-
-/*!
-* Copyright (c) Facebook, Inc. and its affiliates.
-* Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
-*/
-
-#pragma once
-#include <torch/extension.h>
-
-at::Tensor
-ms_deform_attn_cpu_forward(
- const at::Tensor &value,
- const at::Tensor &spatial_shapes,
- const at::Tensor &level_start_index,
- const at::Tensor &sampling_loc,
- const at::Tensor &attn_weight,
- const int im2col_step);
-
-std::vector<at::Tensor>
-ms_deform_attn_cpu_backward(
- const at::Tensor &value,
- const at::Tensor &spatial_shapes,
- const at::Tensor &level_start_index,
- const at::Tensor &sampling_loc,
- const at::Tensor &attn_weight,
- const at::Tensor &grad_output,
- const int im2col_step);
-
-
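A Python-side shape sketch to accompany the declarations above; the tensor layout is assumed from Deformable DETR (the header itself does not state it), and the compiled op is left commented out since building the extension is out of scope here.

import torch

spatial_shapes = torch.as_tensor([[100, 100], [50, 50], [25, 25], [13, 13]], dtype=torch.long)
level_start_index = torch.cat((spatial_shapes.new_zeros(1), spatial_shapes.prod(1).cumsum(0)[:-1]))
N, Len_q, n_heads, head_dim, n_levels, n_points = 2, 300, 8, 32, 4, 4
Len_in = int(spatial_shapes.prod(1).sum())              # tokens across all feature levels
value = torch.rand(N, Len_in, n_heads, head_dim)
sampling_loc = torch.rand(N, Len_q, n_heads, n_levels, n_points, 2)
attn_weight = torch.rand(N, Len_q, n_heads, n_levels, n_points)
# ms_deform_attn_cpu_forward(value, spatial_shapes, level_start_index,
#                            sampling_loc, attn_weight, 64)   # declared above; needs the built extension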
diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/configs/_base_/datasets/pascal_voc12.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/configs/_base_/datasets/pascal_voc12.py
deleted file mode 100644
index ba1d42d0c5781f56dc177d860d856bb34adce555..0000000000000000000000000000000000000000
--- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/configs/_base_/datasets/pascal_voc12.py
+++ /dev/null
@@ -1,57 +0,0 @@
-# dataset settings
-dataset_type = 'PascalVOCDataset'
-data_root = 'data/VOCdevkit/VOC2012'
-img_norm_cfg = dict(
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-crop_size = (512, 512)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(type='LoadAnnotations'),
- dict(type='Resize', img_scale=(2048, 512), ratio_range=(0.5, 2.0)),
- dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
- dict(type='RandomFlip', prob=0.5),
- dict(type='PhotoMetricDistortion'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),
- dict(type='DefaultFormatBundle'),
- dict(type='Collect', keys=['img', 'gt_semantic_seg']),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(2048, 512),
- # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75],
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ])
-]
-data = dict(
- samples_per_gpu=4,
- workers_per_gpu=4,
- train=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='JPEGImages',
- ann_dir='SegmentationClass',
- split='ImageSets/Segmentation/train.txt',
- pipeline=train_pipeline),
- val=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='JPEGImages',
- ann_dir='SegmentationClass',
- split='ImageSets/Segmentation/val.txt',
- pipeline=test_pipeline),
- test=dict(
- type=dataset_type,
- data_root=data_root,
- img_dir='JPEGImages',
- ann_dir='SegmentationClass',
- split='ImageSets/Segmentation/val.txt',
- pipeline=test_pipeline))
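A minimal sketch (assuming mmcv/mmsegmentation are installed and this file sits at its usual configs/_base_/datasets/ path) of loading and overriding such a base config:

from mmcv import Config

cfg = Config.fromfile('configs/_base_/datasets/pascal_voc12.py')
cfg.data.samples_per_gpu = 2                 # e.g. shrink the per-GPU batch size
print(cfg.data.train.split)                  # 'ImageSets/Segmentation/train.txt'
print(cfg.train_pipeline[2]['img_scale'])    # (2048, 512) from the Resize step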
diff --git a/spaces/crashedice/signify/SOURCE/yolo_files/models/yolo.py b/spaces/crashedice/signify/SOURCE/yolo_files/models/yolo.py
deleted file mode 100644
index b97c8b960a6f1f2aa30b25f91827d8b428da3d3c..0000000000000000000000000000000000000000
--- a/spaces/crashedice/signify/SOURCE/yolo_files/models/yolo.py
+++ /dev/null
@@ -1,304 +0,0 @@
-# YOLOv5 YOLO-specific modules
-
-import argparse
-import logging
-import sys
-from copy import deepcopy
-from pathlib import Path
-
-sys.path.append(Path(__file__).parent.parent.absolute().__str__()) # to run '$ python *.py' files in subdirectories
-logger = logging.getLogger(__name__)
-
-from SOURCE.yolo_files.models.common import *
-from SOURCE.yolo_files.models.experimental import *
-from SOURCE.yolo_files.utils.autoanchor import check_anchor_order
-from SOURCE.yolo_files.utils.general import make_divisible, check_file, set_logging
-from SOURCE.yolo_files.utils.torch_utils import time_synchronized, fuse_conv_and_bn, model_info, scale_img, initialize_weights, \
- select_device, copy_attr
-
-try:
- import thop # for FLOPS computation
-except ImportError:
- thop = None
-
-
-class Detect(nn.Module):
- stride = None # strides computed during build
- onnx_dynamic = False # ONNX export parameter
-
- def __init__(self, nc=80, anchors=(), ch=(), inplace=True): # detection layer
- super(Detect, self).__init__()
- self.nc = nc # number of classes
- self.no = nc + 5 # number of outputs per anchor
- self.nl = len(anchors) # number of detection layers
- self.na = len(anchors[0]) // 2 # number of anchors
- self.grid = [torch.zeros(1)] * self.nl # init grid
- a = torch.tensor(anchors).float().view(self.nl, -1, 2)
- self.register_buffer('anchors', a) # shape(nl,na,2)
- self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2)) # shape(nl,1,na,1,1,2)
- self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch) # output conv
- self.inplace = inplace # use in-place ops (e.g. slice assignment)
-
- def forward(self, x):
- # x = x.copy() # for profiling
- z = [] # inference output
- for i in range(self.nl):
- x[i] = self.m[i](x[i]) # conv
- bs, _, ny, nx = x[i].shape # x(bs,255,20,20) to x(bs,3,20,20,85)
- x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
-
- if not self.training: # inference
- if self.grid[i].shape[2:4] != x[i].shape[2:4] or self.onnx_dynamic:
- self.grid[i] = self._make_grid(nx, ny).to(x[i].device)
-
- y = x[i].sigmoid()
- if self.inplace:
- y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy
- y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh
- else: # for YOLOv5 on AWS Inferentia https://github.com/ultralytics/yolov5/pull/2953
- xy = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy
- wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i].view(1, self.na, 1, 1, 2) # wh
- y = torch.cat((xy, wh, y[..., 4:]), -1)
- z.append(y.view(bs, -1, self.no))
-
- return x if self.training else (torch.cat(z, 1), x)
-
- @staticmethod
- def _make_grid(nx=20, ny=20):
- yv, xv = torch.meshgrid([torch.arange(ny), torch.arange(nx)])
- return torch.stack((xv, yv), 2).view((1, 1, ny, nx, 2)).float()
-
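A worked instance (toy numbers, not from any trained model) of the box decoding applied in Detect.forward above:

import torch

y = torch.tensor([0.6, 0.7, 0.5, 0.4])       # sigmoid(tx, ty, tw, th) for one prediction
grid_xy = torch.tensor([10., 12.])           # cell offsets produced by _make_grid
stride, anchor_wh = 8.0, torch.tensor([30., 60.])
xy = (y[:2] * 2. - 0.5 + grid_xy) * stride   # -> approximately [85.6, 103.2] pixels
wh = (y[2:] * 2) ** 2 * anchor_wh            # -> approximately [30.0, 38.4] pixels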
-
-class Model(nn.Module):
- def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, anchors=None): # model, input channels, number of classes
- super(Model, self).__init__()
- if isinstance(cfg, dict):
- self.yaml = cfg # model dict
- else: # is *.yaml
- import yaml # for torch hub
- self.yaml_file = Path(cfg).name
- with open(cfg) as f:
- self.yaml = yaml.safe_load(f) # model dict
-
- # Define model
- ch = self.yaml['ch'] = self.yaml.get('ch', ch) # input channels
- if nc and nc != self.yaml['nc']:
- logger.info(f"Overriding model.yaml nc={self.yaml['nc']} with nc={nc}")
- self.yaml['nc'] = nc # override yaml value
- if anchors:
- logger.info(f'Overriding model.yaml anchors with anchors={anchors}')
- self.yaml['anchors'] = round(anchors) # override yaml value
- self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch]) # model, savelist
- self.names = [str(i) for i in range(self.yaml['nc'])] # default names
- self.inplace = self.yaml.get('inplace', True)
- # logger.info([x.shape for x in self.forward(torch.zeros(1, ch, 64, 64))])
-
- # Build strides, anchors
- m = self.model[-1] # Detect()
- if isinstance(m, Detect):
- s = 256 # 2x min stride
- m.inplace = self.inplace
- m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))]) # forward
- m.anchors /= m.stride.view(-1, 1, 1)
- check_anchor_order(m)
- self.stride = m.stride
- self._initialize_biases() # only run once
- # logger.info('Strides: %s' % m.stride.tolist())
-
- # Init weights, biases
- initialize_weights(self)
- self.info()
- logger.info('')
-
- def forward(self, x, augment=False, profile=False):
- if augment:
- return self.forward_augment(x) # augmented inference, None
- else:
- return self.forward_once(x, profile) # single-scale inference, train
-
- def forward_augment(self, x):
- img_size = x.shape[-2:] # height, width
- s = [1, 0.83, 0.67] # scales
- f = [None, 3, None] # flips (2-ud, 3-lr)
- y = [] # outputs
- for si, fi in zip(s, f):
- xi = scale_img(x.flip(fi) if fi else x, si, gs=int(self.stride.max()))
- yi = self.forward_once(xi)[0] # forward
- # cv2.imwrite(f'img_{si}.jpg', 255 * xi[0].cpu().numpy().transpose((1, 2, 0))[:, :, ::-1]) # save
- yi = self._descale_pred(yi, fi, si, img_size)
- y.append(yi)
- return torch.cat(y, 1), None # augmented inference, train
-
- def forward_once(self, x, profile=False):
- y, dt = [], [] # outputs
- for m in self.model:
- if m.f != -1: # if not from previous layer
- x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f] # from earlier layers
-
- if profile:
- o = thop.profile(m, inputs=(x,), verbose=False)[0] / 1E9 * 2 if thop else 0 # FLOPS
- t = time_synchronized()
- for _ in range(10):
- _ = m(x)
- dt.append((time_synchronized() - t) * 100)
- if m == self.model[0]:
- logger.info(f"{'time (ms)':>10s} {'GFLOPS':>10s} {'params':>10s} {'module'}")
- logger.info(f'{dt[-1]:10.2f} {o:10.2f} {m.np:10.0f} {m.type}')
-
- x = m(x) # run
- y.append(x if m.i in self.save else None) # save output
-
- if profile:
- logger.info('%.1fms total' % sum(dt))
- return x
-
- def _descale_pred(self, p, flips, scale, img_size):
- # de-scale predictions following augmented inference (inverse operation)
- if self.inplace:
- p[..., :4] /= scale # de-scale
- if flips == 2:
- p[..., 1] = img_size[0] - p[..., 1] # de-flip ud
- elif flips == 3:
- p[..., 0] = img_size[1] - p[..., 0] # de-flip lr
- else:
- x, y, wh = p[..., 0:1] / scale, p[..., 1:2] / scale, p[..., 2:4] / scale # de-scale
- if flips == 2:
- y = img_size[0] - y # de-flip ud
- elif flips == 3:
- x = img_size[1] - x # de-flip lr
- p = torch.cat((x, y, wh, p[..., 4:]), -1)
- return p
-
- def _initialize_biases(self, cf=None): # initialize biases into Detect(), cf is class frequency
- # https://arxiv.org/abs/1708.02002 section 3.3
- # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1.
- m = self.model[-1] # Detect() module
- for mi, s in zip(m.m, m.stride): # from
- b = mi.bias.view(m.na, -1) # conv.bias(255) to (3,85)
- b.data[:, 4] += math.log(8 / (640 / s) ** 2) # obj (8 objects per 640 image)
- b.data[:, 5:] += math.log(0.6 / (m.nc - 0.99)) if cf is None else torch.log(cf / cf.sum()) # cls
- mi.bias = torch.nn.Parameter(b.view(-1), requires_grad=True)
-
- def _print_biases(self):
- m = self.model[-1] # Detect() module
- for mi in m.m: # from
- b = mi.bias.detach().view(m.na, -1).T # conv.bias(255) to (3,85)
- logger.info(
- ('%6g Conv2d.bias:' + '%10.3g' * 6) % (mi.weight.shape[1], *b[:5].mean(1).tolist(), b[5:].mean()))
-
- # def _print_weights(self):
- # for m in self.model.modules():
- # if type(m) is Bottleneck:
- # logger.info('%10.3g' % (m.w.detach().sigmoid() * 2)) # shortcut weights
-
- def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers
- logger.info('Fusing layers... ')
- for m in self.model.modules():
- if type(m) is Conv and hasattr(m, 'bn'):
- m.conv = fuse_conv_and_bn(m.conv, m.bn) # update conv
- delattr(m, 'bn') # remove batchnorm
- m.forward = m.fuseforward # update forward
- self.info()
- return self
-
- def nms(self, mode=True): # add or remove NMS module
- present = type(self.model[-1]) is NMS # last layer is NMS
- if mode and not present:
- logger.info('Adding NMS... ')
- m = NMS() # module
- m.f = -1 # from
- m.i = self.model[-1].i + 1 # index
- self.model.add_module(name='%s' % m.i, module=m) # add
- self.eval()
- elif not mode and present:
- logger.info('Removing NMS... ')
- self.model = self.model[:-1] # remove
- return self
-
- def autoshape(self): # add autoShape module
- logger.info('Adding autoShape... ')
- m = autoShape(self) # wrap model
- copy_attr(m, self, include=('yaml', 'nc', 'hyp', 'names', 'stride'), exclude=()) # copy attributes
- return m
-
- def info(self, verbose=False, img_size=640): # print model information
- model_info(self, verbose, img_size)
-
-
-def parse_model(d, ch): # model_dict, input_channels(3)
- logger.info('\n%3s%18s%3s%10s %-40s%-30s' % ('', 'from', 'n', 'params', 'module', 'arguments'))
- anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple']
- na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors # number of anchors
- no = na * (nc + 5) # number of outputs = anchors * (classes + 5)
-
- layers, save, c2 = [], [], ch[-1] # layers, savelist, ch out
- for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']): # from, number, module, args
- m = eval(m) if isinstance(m, str) else m # eval strings
- for j, a in enumerate(args):
- try:
- args[j] = eval(a) if isinstance(a, str) else a # eval strings
- except:
- pass
-
- n = max(round(n * gd), 1) if n > 1 else n # depth gain
- if m in [Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, DWConv, MixConv2d, Focus, CrossConv, BottleneckCSP,
- C3, C3TR]:
- c1, c2 = ch[f], args[0]
- if c2 != no: # if not output
- c2 = make_divisible(c2 * gw, 8)
-
- args = [c1, c2, *args[1:]]
- if m in [BottleneckCSP, C3, C3TR]:
- args.insert(2, n) # number of repeats
- n = 1
- elif m is nn.BatchNorm2d:
- args = [ch[f]]
- elif m is Concat:
- c2 = sum([ch[x] for x in f])
- elif m is Detect:
- args.append([ch[x] for x in f])
- if isinstance(args[1], int): # number of anchors
- args[1] = [list(range(args[1] * 2))] * len(f)
- elif m is Contract:
- c2 = ch[f] * args[0] ** 2
- elif m is Expand:
- c2 = ch[f] // args[0] ** 2
- else:
- c2 = ch[f]
-
- m_ = nn.Sequential(*[m(*args) for _ in range(n)]) if n > 1 else m(*args) # module
- t = str(m)[8:-2].replace('__main__.', '') # module type
- np = sum([x.numel() for x in m_.parameters()]) # number params
- m_.i, m_.f, m_.type, m_.np = i, f, t, np # attach index, 'from' index, type, number params
- logger.info('%3s%18s%3s%10.0f %-40s%-30s' % (i, f, n, np, t, args)) # print
- save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1) # append to savelist
- layers.append(m_)
- if i == 0:
- ch = []
- ch.append(c2)
- return nn.Sequential(*layers), sorted(save)
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser()
- parser.add_argument('--cfg', type=str, default='yolov5s.yaml', help='model.yaml')
- parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
- opt = parser.parse_args()
- opt.cfg = check_file(opt.cfg) # check file
- set_logging()
- device = select_device(opt.device)
-
- # Create model
- model = Model(opt.cfg).to(device)
- model.train()
-
- # Profile
- # img = torch.rand(8 if torch.cuda.is_available() else 1, 3, 320, 320).to(device)
- # y = model(img, profile=True)
-
- # Tensorboard (not working https://github.com/ultralytics/yolov5/issues/2898)
- # from torch.utils.tensorboard import SummaryWriter
- # tb_writer = SummaryWriter('.')
- # logger.info("Run 'tensorboard --logdir=models' to view tensorboard at http://localhost:6006/")
- # tb_writer.add_graph(torch.jit.trace(model, img, strict=False), []) # add model graph
- # tb_writer.add_image('test', img[0], dataformats='CWH') # add model to tensorboard
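A minimal sketch (assuming the repo's SOURCE package and a yolov5s.yaml are importable, mirroring the __main__ block above) of building the Model and running one forward pass:

import torch

model = Model('yolov5s.yaml').eval()     # layers are assembled by parse_model()
img = torch.zeros(1, 3, 640, 640)        # dummy RGB input
with torch.no_grad():
    pred, feats = model(img)             # eval mode returns (decoded boxes, raw per-level maps)
print(pred.shape)                        # e.g. torch.Size([1, 25200, 85]) for nc=80 at 640x640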
diff --git a/spaces/dantosxd/gorilla-llm-gorilla-mpt-7b-hf-v0/README.md b/spaces/dantosxd/gorilla-llm-gorilla-mpt-7b-hf-v0/README.md
deleted file mode 100644
index 899d81a183a0d37a806858be6f55eda49c95d2fd..0000000000000000000000000000000000000000
--- a/spaces/dantosxd/gorilla-llm-gorilla-mpt-7b-hf-v0/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Gorilla Llm Gorilla Mpt 7b Hf V0
-emoji: 🌍
-colorFrom: purple
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.34.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/datasciencedojo/Describe-Dataset/app.py b/spaces/datasciencedojo/Describe-Dataset/app.py
deleted file mode 100644
index c44a96ef53d0648c1cee86c73e8008f83045d067..0000000000000000000000000000000000000000
--- a/spaces/datasciencedojo/Describe-Dataset/app.py
+++ /dev/null
@@ -1,159 +0,0 @@
-import gradio as gr
-import os
-import pandas as pd
-import seaborn as sns
-import matplotlib.pyplot as plt
-
-def dataset_change(dataset):
- df = pd.read_csv(dataset.name)
- features = df.columns
- features_object_list = [feature for feature in features]
- describe = df.describe(include='all')
- print(describe)
- return describe.reset_index(), gr.Dropdown.update(choices = features_object_list), gr.Dropdown.update(choices = features_object_list)
-
-def feature_select(dataset, feature, hue = None):
- df = pd.read_csv(dataset.name)
- non_numeric_cols = df.select_dtypes('object').columns.tolist()
-
- if feature in non_numeric_cols:
- kde = False
- plot2 = plt.figure()
- if hue:
- sns.countplot(x = feature, data = df, palette='rainbow', hue = hue)
- else:
- sns.countplot(x = feature, data = df, palette='rainbow')
- else:
- kde = True
- plot2 = plt.figure()
- if hue:
- sns.boxplot(x = feature, data = df, hue = hue)
- else:
- sns.boxplot(x = feature, data = df )
-
- plot1 = plt.figure()
- if hue:
- sns.histplot(data = df, x = feature, kde = kde, hue = hue, multiple="stack")
- else:
- sns.histplot(data = df, x = feature, kde = kde)
-
- return plot1, plot2
-
-css = """
-footer {display:none !important}
-.overflow-x-scroll {
- overflow-x: scroll !important;
- height: 15rem !important;
- overflow-y: scroll !important;
-}
-
-.max-h-\[30rem\] {max-height: 18rem !important;}
-
-.hover\:bg-orange-50:hover {
- --tw-bg-opacity: 1 !important;
- background-color: rgb(229,225,255) !important;
-}
-
-.output-markdown h2{
- z-index: 14;
- align-self: flex-start;
- min-width: 0px;
- order: 5;
- min-height: 0px;
- height: max-content;
- flex-grow: 0;
- flex-shrink: 0;
- width: calc(100% - 0px);
- margin: 5px 0px;
- white-space: pre-wrap;
- overflow: visible;
- word-break: break-word;
- font-size: 18px !important;
- font-weight: 500 !important;
- color: rgb(9, 23, 71) !important;
- line-height: 1 !important;
- border-radius: 0px !important;
- opacity: 1 !important;
-}
-
-.gr-button-lg {
- z-index: 14;
- width: 113px !important;
- height: 30px !important;
- left: 0px;
- top: 0px;
- padding: 0px;
- cursor: pointer !important;
- background: none rgb(17, 20, 45) !important;
- border: none !important;
- text-align: center !important;
- font-size: 14px !important;
- font-weight: 500 !important;
- color: rgb(255, 255, 255) !important;
- line-height: 1 !important;
- border-radius: 6px !important;
- transition: box-shadow 200ms ease 0s, background 200ms ease 0s !important;
- box-shadow: none !important;
-}
-.gr-button-lg:hover{
- z-index: 14;
- width: 113px !important;
- height: 30px !important;
- left: 0px;
- top: 0px;
- padding: 0px;
- cursor: pointer !important;
- background: none rgb(37, 56, 133) !important;
- border: none !important;
- text-align: center !important;
- font-size: 14px !important;
- font-weight: 500 !important;
- color: rgb(255, 255, 255) !important;
- line-height: 1 !important;
- border-radius: 6px !important;
- transition: box-shadow 200ms ease 0s, background 200ms ease 0s !important;
- box-shadow: rgb(0 0 0 / 23%) 0px 1px 7px 0px !important;
-}
-"""
-
-with gr.Blocks(title="Describe Dataset | Data Science Dojo", css = css) as demo:
- gr.Markdown("""## Input Dataset""")
- with gr.Row():
- dataset = gr.File()
- gr.Markdown("""## Dataset Description""")
- with gr.Row():
- dataframe = gr.Dataframe()
- gr.Markdown("""## Select the feature to visualize""")
- with gr.Row():
- with gr.Column():
- features = gr.Dropdown(label="Select feature to visualize")
- with gr.Column():
- hue = gr.Dropdown(label="Select hue")
- with gr.Row():
- btn = gr.Button("Visualize")
-
- gr.Markdown("""## Visualization""")
- with gr.Row():
- plot1 = gr.Plot()
- with gr.Row():
- plot2 = gr.Plot()
-
- gr.Examples(
- examples=[["boston.csv"]],
- fn = dataset_change,
- inputs = dataset,
- outputs = [dataframe, features, hue],
- cache_examples=True
- )
-
- dataset.change(fn=dataset_change,
- inputs = dataset,
- outputs = [dataframe, features, hue]
- )
-
- btn.click(fn=feature_select,
- inputs=[dataset, features, hue],
- outputs=[plot1, plot2]
- )
-
-demo.launch(debug=True)
\ No newline at end of file
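A minimal sketch (toy data; types.SimpleNamespace stands in for the gr.File value, which the callbacks above only read via its .name attribute) of exercising the two callbacks outside the UI:

import types
import pandas as pd

pd.DataFrame({"age": [21, 34, 45], "city": ["NY", "LA", "NY"]}).to_csv("toy.csv", index=False)
upload = types.SimpleNamespace(name="toy.csv")
describe, _, _ = dataset_change(upload)            # summary stats plus two Dropdown updates
hist_fig, box_fig = feature_select(upload, "age")  # numeric column -> histogram and boxplot figures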
diff --git a/spaces/davanstrien/arch_demo/README.md b/spaces/davanstrien/arch_demo/README.md
deleted file mode 100644
index 66754ef2772f6a5db3af6ba27d1a1c0527a6d7f3..0000000000000000000000000000000000000000
--- a/spaces/davanstrien/arch_demo/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: ARCH Demo
-emoji: 📸
-colorFrom: red
-colorTo: blue
-sdk: gradio
-sdk_version: 3.27.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/davidpiscasio/unpaired-img2img/options/base_options.py b/spaces/davidpiscasio/unpaired-img2img/options/base_options.py
deleted file mode 100644
index c096d872aac445fb6fe311dca1d1ff5723bd11f5..0000000000000000000000000000000000000000
--- a/spaces/davidpiscasio/unpaired-img2img/options/base_options.py
+++ /dev/null
@@ -1,137 +0,0 @@
-import argparse
-import os
-from util import util
-import torch
-import models
-import data
-
-
-class BaseOptions():
- """This class defines options used during both training and test time.
-
- It also implements several helper functions such as parsing, printing, and saving the options.
- It also gathers additional options defined in functions in both dataset class and model class.
- """
-
- def __init__(self):
-        """Reset the class; indicates the class hasn't been initialized"""
- self.initialized = False
-
- def initialize(self, parser):
- """Define the common options that are used in both training and test."""
- # basic parameters
- parser.add_argument('--dataroot', required=False, help='path to images (should have subfolders trainA, trainB, valA, valB, etc)')
- parser.add_argument('--name', type=str, default='experiment_name', help='name of the experiment. It decides where to store samples and models')
- parser.add_argument('--use_wandb', action='store_true', help='use wandb')
- parser.add_argument('--gpu_ids', type=str, default='-1', help='gpu ids: e.g. 0 0,1,2, 0,2. use -1 for CPU')
- parser.add_argument('--checkpoints_dir', type=str, default='./checkpoints', help='models are saved here')
- # model parameters
- parser.add_argument('--model', type=str, default='cycle_gan', help='chooses which model to use. [cycle_gan | pix2pix | test | colorization]')
- parser.add_argument('--input_nc', type=int, default=3, help='# of input image channels: 3 for RGB and 1 for grayscale')
- parser.add_argument('--output_nc', type=int, default=3, help='# of output image channels: 3 for RGB and 1 for grayscale')
- parser.add_argument('--ngf', type=int, default=64, help='# of gen filters in the last conv layer')
- parser.add_argument('--ndf', type=int, default=64, help='# of discrim filters in the first conv layer')
- parser.add_argument('--netD', type=str, default='basic', help='specify discriminator architecture [basic | n_layers | pixel]. The basic model is a 70x70 PatchGAN. n_layers allows you to specify the layers in the discriminator')
- parser.add_argument('--netG', type=str, default='resnet_9blocks', help='specify generator architecture [resnet_9blocks | resnet_6blocks | unet_256 | unet_128]')
- parser.add_argument('--n_layers_D', type=int, default=3, help='only used if netD==n_layers')
- parser.add_argument('--norm', type=str, default='instance', help='instance normalization or batch normalization [instance | batch | none]')
- parser.add_argument('--init_type', type=str, default='normal', help='network initialization [normal | xavier | kaiming | orthogonal]')
- parser.add_argument('--init_gain', type=float, default=0.02, help='scaling factor for normal, xavier and orthogonal.')
- parser.add_argument('--no_dropout', action='store_true', help='no dropout for the generator')
- # dataset parameters
- parser.add_argument('--dataset_mode', type=str, default='unaligned', help='chooses how datasets are loaded. [unaligned | aligned | single | colorization]')
- parser.add_argument('--direction', type=str, default='AtoB', help='AtoB or BtoA')
- parser.add_argument('--serial_batches', action='store_true', help='if true, takes images in order to make batches, otherwise takes them randomly')
- parser.add_argument('--num_threads', default=4, type=int, help='# threads for loading data')
- parser.add_argument('--batch_size', type=int, default=1, help='input batch size')
- parser.add_argument('--load_size', type=int, default=286, help='scale images to this size')
- parser.add_argument('--crop_size', type=int, default=256, help='then crop to this size')
- parser.add_argument('--max_dataset_size', type=int, default=float("inf"), help='Maximum number of samples allowed per dataset. If the dataset directory contains more than max_dataset_size, only a subset is loaded.')
- parser.add_argument('--preprocess', type=str, default='resize_and_crop', help='scaling and cropping of images at load time [resize_and_crop | crop | scale_width | scale_width_and_crop | none]')
- parser.add_argument('--no_flip', action='store_true', help='if specified, do not flip the images for data augmentation')
- parser.add_argument('--display_winsize', type=int, default=256, help='display window size for both visdom and HTML')
- # additional parameters
- parser.add_argument('--epoch', type=str, default='latest', help='which epoch to load? set to latest to use latest cached model')
- parser.add_argument('--load_iter', type=int, default='0', help='which iteration to load? if load_iter > 0, the code will load models by iter_[load_iter]; otherwise, the code will load models by [epoch]')
- parser.add_argument('--verbose', action='store_true', help='if specified, print more debugging information')
- parser.add_argument('--suffix', default='', type=str, help='customized suffix: opt.name = opt.name + suffix: e.g., {model}_{netG}_size{load_size}')
- self.initialized = True
- return parser
-
- def gather_options(self):
-        """Initialize our parser with basic options (only once).
-        Add additional model-specific and dataset-specific options.
-        These options are defined in the modify_commandline_options function
-        in model and dataset classes.
- """
- if not self.initialized: # check if it has been initialized
- parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
- parser = self.initialize(parser)
-
- # get the basic options
- opt, _ = parser.parse_known_args()
-
- # modify model-related parser options
- model_name = opt.model
- model_option_setter = models.get_option_setter(model_name)
- parser = model_option_setter(parser, self.isTrain)
- opt, _ = parser.parse_known_args() # parse again with new defaults
-
- # modify dataset-related parser options
- dataset_name = opt.dataset_mode
- dataset_option_setter = data.get_option_setter(dataset_name)
- parser = dataset_option_setter(parser, self.isTrain)
-
- # save and return the parser
- self.parser = parser
- return parser.parse_args()
-
- def print_options(self, opt):
- """Print and save options
-
-        It will print both current options and default values (if different).
- It will save options into a text file / [checkpoints_dir] / opt.txt
- """
- message = ''
- message += '----------------- Options ---------------\n'
- for k, v in sorted(vars(opt).items()):
- comment = ''
- default = self.parser.get_default(k)
- if v != default:
- comment = '\t[default: %s]' % str(default)
- message += '{:>25}: {:<30}{}\n'.format(str(k), str(v), comment)
- message += '----------------- End -------------------'
- print(message)
-
- # save to the disk
- expr_dir = os.path.join(opt.checkpoints_dir, opt.name)
- util.mkdirs(expr_dir)
- file_name = os.path.join(expr_dir, '{}_opt.txt'.format(opt.phase))
- with open(file_name, 'wt') as opt_file:
- opt_file.write(message)
- opt_file.write('\n')
-
- def parse(self):
- """Parse our options, create checkpoints directory suffix, and set up gpu device."""
- opt = self.gather_options()
- opt.isTrain = self.isTrain # train or test
-
- # process opt.suffix
- if opt.suffix:
- suffix = ('_' + opt.suffix.format(**vars(opt))) if opt.suffix != '' else ''
- opt.name = opt.name + suffix
-
- self.print_options(opt)
-
- # set gpu ids
- str_ids = opt.gpu_ids.split(',')
- opt.gpu_ids = []
- for str_id in str_ids:
- id = int(str_id)
- if id >= 0:
- opt.gpu_ids.append(id)
- if len(opt.gpu_ids) > 0:
- torch.cuda.set_device(opt.gpu_ids[0])
-
- self.opt = opt
- return self.opt
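A minimal sketch (assumption: it mirrors how the original CycleGAN/pix2pix codebase specialises BaseOptions for testing; the subclass name and flags here are illustrative, and parse() still needs the repo's models/ and data/ packages on the path) of extending the class above:

class DemoTestOptions(BaseOptions):
    """Illustrative test-time options; sets isTrain so parse() records the phase."""

    def initialize(self, parser):
        parser = BaseOptions.initialize(self, parser)
        parser.add_argument('--phase', type=str, default='test', help='train, val, test, etc')
        parser.add_argument('--results_dir', type=str, default='./results/', help='saves results here')
        self.isTrain = False
        return parser


opt = DemoTestOptions().parse()   # parses CLI flags, prints/saves them, sets gpu ids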
diff --git a/spaces/dawood/Kanye-AI/vdecoder/hifigan/utils.py b/spaces/dawood/Kanye-AI/vdecoder/hifigan/utils.py
deleted file mode 100644
index 9c93c996d3cc73c30d71c1fc47056e4230f35c0f..0000000000000000000000000000000000000000
--- a/spaces/dawood/Kanye-AI/vdecoder/hifigan/utils.py
+++ /dev/null
@@ -1,68 +0,0 @@
-import glob
-import os
-import matplotlib
-import torch
-from torch.nn.utils import weight_norm
-# matplotlib.use("Agg")
-import matplotlib.pylab as plt
-
-
-def plot_spectrogram(spectrogram):
- fig, ax = plt.subplots(figsize=(10, 2))
- im = ax.imshow(spectrogram, aspect="auto", origin="lower",
- interpolation='none')
- plt.colorbar(im, ax=ax)
-
- fig.canvas.draw()
- plt.close()
-
- return fig
-
-
-def init_weights(m, mean=0.0, std=0.01):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- m.weight.data.normal_(mean, std)
-
-
-def apply_weight_norm(m):
- classname = m.__class__.__name__
- if classname.find("Conv") != -1:
- weight_norm(m)
-
-
-def get_padding(kernel_size, dilation=1):
- return int((kernel_size*dilation - dilation)/2)
-
-
-def load_checkpoint(filepath, device):
- assert os.path.isfile(filepath)
- print("Loading '{}'".format(filepath))
- checkpoint_dict = torch.load(filepath, map_location=device)
- print("Complete.")
- return checkpoint_dict
-
-
-def save_checkpoint(filepath, obj):
- print("Saving checkpoint to {}".format(filepath))
- torch.save(obj, filepath)
- print("Complete.")
-
-
-def del_old_checkpoints(cp_dir, prefix, n_models=2):
- pattern = os.path.join(cp_dir, prefix + '????????')
- cp_list = glob.glob(pattern) # get checkpoint paths
-    cp_list = sorted(cp_list)  # sort by iteration
-    if len(cp_list) > n_models:  # if more than n_models models are found
-        for cp in cp_list[:-n_models]:  # delete the oldest models other than the latest n_models
-            open(cp, 'w').close()  # empty file contents
-            os.unlink(cp)  # delete file (moves to trash when using Colab)
-
-
-def scan_checkpoint(cp_dir, prefix):
- pattern = os.path.join(cp_dir, prefix + '????????')
- cp_list = glob.glob(pattern)
- if len(cp_list) == 0:
- return None
- return sorted(cp_list)[-1]
-
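A minimal sketch (assuming a checkpoint directory whose files follow the prefix-plus-8-digits pattern these helpers expect, e.g. g_00100000) of resuming from the newest generator checkpoint:

import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
cp_g = scan_checkpoint('checkpoints/', 'g_')   # newest matching file, or None
if cp_g is not None:
    state = load_checkpoint(cp_g, device)      # the saved dict; key names depend on the training script
    # generator.load_state_dict(state['generator'])   # hypothetical key, shown for illustration only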
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/events.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/events.py
deleted file mode 100644
index ce1a442fb4fe71cbbd11c6bbc51dc4f9b3e7ac78..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/events.py
+++ /dev/null
@@ -1,361 +0,0 @@
-"""Contains all of the events that can be triggered in a gr.Blocks() app, with the exception
-of the on-page-load event, which is defined in gr.Blocks().load()."""
-
-from __future__ import annotations
-
-from typing import TYPE_CHECKING, Any, Callable, Literal, Sequence
-
-from gradio_client.documentation import document, set_documentation_group
-
-from gradio.blocks import Block
-from gradio.deprecation import warn_deprecation
-from gradio.helpers import EventData
-from gradio.utils import get_cancel_function
-
-if TYPE_CHECKING: # Only import for type checking (is False at runtime).
- from gradio.components import Component
-
-set_documentation_group("events")
-
-
-def set_cancel_events(
- block: Block, event_name: str, cancels: None | dict[str, Any] | list[dict[str, Any]]
-):
- if cancels:
- if not isinstance(cancels, list):
- cancels = [cancels]
- cancel_fn, fn_indices_to_cancel = get_cancel_function(cancels)
- block.set_event_trigger(
- event_name,
- cancel_fn,
- inputs=None,
- outputs=None,
- queue=False,
- preprocess=False,
- cancels=fn_indices_to_cancel,
- )
-
-
-class EventListener(Block):
- def __init__(self: Any):
- for event_listener_class in EventListener.__subclasses__():
- if isinstance(self, event_listener_class):
- event_listener_class.__init__(self)
-
-
-class Dependency(dict):
- def __init__(self, trigger, key_vals, dep_index):
- super().__init__(key_vals)
- self.trigger = trigger
- self.then = EventListenerMethod(
- self.trigger,
- "then",
- trigger_after=dep_index,
- trigger_only_on_success=False,
- )
- """
- Triggered after directly preceding event is completed, regardless of success or failure.
- """
- self.success = EventListenerMethod(
- self.trigger,
- "success",
- trigger_after=dep_index,
- trigger_only_on_success=True,
- )
- """
- Triggered after directly preceding event is completed, if it was successful.
- """
-
-
-class EventListenerMethod:
- """
- Triggered on an event deployment.
- """
-
- def __init__(
- self,
- trigger: Block,
- event_name: str,
- show_progress: Literal["full", "minimal", "hidden"] = "full",
- callback: Callable | None = None,
- trigger_after: int | None = None,
- trigger_only_on_success: bool = False,
- ):
- self.trigger = trigger
- self.event_name = event_name
- self.show_progress = show_progress
- self.callback = callback
- self.trigger_after = trigger_after
- self.trigger_only_on_success = trigger_only_on_success
-
- def __call__(
- self,
- fn: Callable | None,
- inputs: Component | Sequence[Component] | set[Component] | None = None,
- outputs: Component | Sequence[Component] | None = None,
- api_name: str | None | Literal[False] = None,
- status_tracker: None = None,
- scroll_to_output: bool = False,
- show_progress: Literal["full", "minimal", "hidden"] = "full",
- queue: bool | None = None,
- batch: bool = False,
- max_batch_size: int = 4,
- preprocess: bool = True,
- postprocess: bool = True,
- cancels: dict[str, Any] | list[dict[str, Any]] | None = None,
- every: float | None = None,
- _js: str | None = None,
- ) -> Dependency:
- """
- Parameters:
- fn: the function to wrap an interface around. Often a machine learning model's prediction function. Each parameter of the function corresponds to one input component, and the function should return a single value or a tuple of values, with each element in the tuple corresponding to one output component.
- inputs: List of gradio.components to use as inputs. If the function takes no inputs, this should be an empty list.
- outputs: List of gradio.components to use as outputs. If the function returns no outputs, this should be an empty list.
- api_name: Defines how the endpoint appears in the API docs. Can be a string, None, or False. If False, the endpoint will not be exposed in the api docs. If set to None, the endpoint will be exposed in the api docs as an unnamed endpoint, although this behavior will be changed in Gradio 4.0. If set to a string, the endpoint will be exposed in the api docs with the given name.
- scroll_to_output: If True, will scroll to output component on completion
- show_progress: If True, will show progress animation while pending
- queue: If True, will place the request on the queue, if the queue has been enabled. If False, will not put this event on the queue, even if the queue has been enabled. If None, will use the queue setting of the gradio app.
- batch: If True, then the function should process a batch of inputs, meaning that it should accept a list of input values for each parameter. The lists should be of equal length (and be up to length `max_batch_size`). The function is then *required* to return a tuple of lists (even if there is only 1 output component), with each list in the tuple corresponding to one output component.
- max_batch_size: Maximum number of inputs to batch together if this is called from the queue (only relevant if batch=True)
- preprocess: If False, will not run preprocessing of component data before running 'fn' (e.g. leaving it as a base64 string if this method is called with the `Image` component).
- postprocess: If False, will not run postprocessing of component data before returning 'fn' output to the browser.
-            cancels: A list of other events to cancel when this listener is triggered. For example, setting cancels=[click_event] will cancel the click_event, where click_event is the return value of another component's .click method. Functions that have not yet run (or generators that are iterating) will be cancelled, but functions that are currently running will be allowed to finish.
- every: Run this event 'every' number of seconds while the client connection is open. Interpreted in seconds. Queue must be enabled.
- """
- if status_tracker:
- warn_deprecation(
- "The 'status_tracker' parameter has been deprecated and has no effect."
- )
- if self.event_name == "stop":
- warn_deprecation(
-                "The `stop` event on Video and Audio has been deprecated and will be removed in a future version. Use `ended` instead."
- )
-
- if isinstance(self, Streamable):
- self.check_streamable()
- if isinstance(show_progress, bool):
- show_progress = "full" if show_progress else "hidden"
-
- dep, dep_index = self.trigger.set_event_trigger(
- self.event_name,
- fn,
- inputs,
- outputs,
- preprocess=preprocess,
- postprocess=postprocess,
- scroll_to_output=scroll_to_output,
- show_progress=show_progress
- if show_progress is not None
- else self.show_progress,
- api_name=api_name,
- js=_js,
- queue=queue,
- batch=batch,
- max_batch_size=max_batch_size,
- every=every,
- trigger_after=self.trigger_after,
- trigger_only_on_success=self.trigger_only_on_success,
- )
- set_cancel_events(self.trigger, self.event_name, cancels)
- if self.callback:
- self.callback()
- return Dependency(self.trigger, dep, dep_index)
-
-
-@document("*change", inherit=True)
-class Changeable(EventListener):
- def __init__(self):
- self.change = EventListenerMethod(self, "change")
- """
- This listener is triggered when the component's value changes either because of user input (e.g. a user types in a textbox) OR because of a function update (e.g. an image receives a value from the output of an event trigger).
- See `.input()` for a listener that is only triggered by user input.
- This method can be used when this component is in a Gradio Blocks.
- """
-
-
-@document("*input", inherit=True)
-class Inputable(EventListener):
- def __init__(self):
- self.input = EventListenerMethod(self, "input")
- """
- This listener is triggered when the user changes the value of the component.
- This method can be used when this component is in a Gradio Blocks.
- """
-
-
-@document("*click", inherit=True)
-class Clickable(EventListener):
- def __init__(self):
- self.click = EventListenerMethod(self, "click")
- """
- This listener is triggered when the component (e.g. a button) is clicked.
- This method can be used when this component is in a Gradio Blocks.
- """
-
-
-@document("*submit", inherit=True)
-class Submittable(EventListener):
- def __init__(self):
- self.submit = EventListenerMethod(self, "submit")
- """
- This listener is triggered when the user presses the Enter key while the component (e.g. a textbox) is focused.
- This method can be used when this component is in a Gradio Blocks.
- """
-
-
-@document("*edit", inherit=True)
-class Editable(EventListener):
- def __init__(self):
- self.edit = EventListenerMethod(self, "edit")
- """
- This listener is triggered when the user edits the component (e.g. image) using the
- built-in editor. This method can be used when this component is in a Gradio Blocks.
- """
-
-
-@document("*clear", inherit=True)
-class Clearable(EventListener):
- def __init__(self):
- self.clear = EventListenerMethod(self, "clear")
- """
- This listener is triggered when the user clears the component (e.g. image or audio)
- using the X button for the component. This method can be used when this component is in a Gradio Blocks.
- """
-
-
-@document("*play", "*pause", "*stop", "*end", inherit=True)
-class Playable(EventListener):
- def __init__(self):
- self.play = EventListenerMethod(self, "play")
- """
- This listener is triggered when the user plays the component (e.g. audio or video).
- This method can be used when this component is in a Gradio Blocks.
- """
-
- self.pause = EventListenerMethod(self, "pause")
- """
- This listener is triggered when the media stops playing for any reason (e.g. audio or video).
- This method can be used when this component is in a Gradio Blocks.
- """
-
- self.stop = EventListenerMethod(self, "stop")
- """
- This listener is triggered when the user reaches the end of the media track (e.g. audio or video).
- This method can be used when this component is in a Gradio Blocks.
- """
-
- self.end = EventListenerMethod(self, "end")
- """
- This listener is triggered when the user reaches the end of the media track (e.g. audio or video).
- This method can be used when this component is in a Gradio Blocks.
- """
-
-
-@document("*stream", inherit=True)
-class Streamable(EventListener):
- def __init__(self):
- self.streaming: bool
- self.stream = EventListenerMethod(
- self,
- "stream",
- show_progress="hidden",
- callback=lambda: setattr(self, "streaming", True),
- )
- """
- This listener is triggered when the user streams the component (e.g. a live webcam
- component). This method can be used when this component is in a Gradio Blocks.
- """
-
- def check_streamable(self):
- pass
-
-
-class StreamableOutput(EventListener):
- def __init__(self):
- self.streaming: bool
-
- def stream_output(self, y) -> bytes:
- raise NotImplementedError
-
-
-@document("*start_recording", "*stop_recording", inherit=True)
-class Recordable(EventListener):
- def __init__(self):
- self.start_recording = EventListenerMethod(self, "start_recording")
- """
- This listener is triggered when the user starts recording with the component (e.g. audio or video).
- This method can be used when this component is in a Gradio Blocks.
- """
-
- self.stop_recording = EventListenerMethod(self, "stop_recording")
- """
- This listener is triggered when the user stops recording with the component (e.g. audio or video).
- This method can be used when this component is in a Gradio Blocks.
- """
-
-
-@document("*focus", "*blur", inherit=True)
-class Focusable(EventListener):
- def __init__(self):
- self.focus = EventListenerMethod(self, "focus")
- """
- This listener is triggered when the component is focused (e.g. when the user clicks inside a textbox).
- This method can be used when this component is in a Gradio Blocks.
- """
-
- self.blur = EventListenerMethod(self, "blur")
- """
-        This listener is triggered when the component is unfocused/blurred (e.g. when the user clicks outside of a textbox).
- This method can be used when this component is in a Gradio Blocks.
- """
-
-
-@document("*upload", inherit=True)
-class Uploadable(EventListener):
- def __init__(self):
- self.upload = EventListenerMethod(self, "upload")
- """
- This listener is triggered when the user uploads a file into the component (e.g. when the user uploads a video into a video component).
- This method can be used when this component is in a Gradio Blocks.
- """
-
-
-@document("*release", inherit=True)
-class Releaseable(EventListener):
- def __init__(self):
- self.release = EventListenerMethod(self, "release")
- """
- This listener is triggered when the user releases the mouse on this component (e.g. when the user releases the slider).
- This method can be used when this component is in a Gradio Blocks.
- """
-
-
-@document("*select", inherit=True)
-class Selectable(EventListener):
- def __init__(self):
- self.selectable: bool = False
- self.select = EventListenerMethod(
- self, "select", callback=lambda: setattr(self, "selectable", True)
- )
- """
- This listener is triggered when the user selects from within the Component.
- This event has EventData of type gradio.SelectData that carries information, accessible through SelectData.index and SelectData.value.
- See EventData documentation on how to use this event data.
- """
-
-
-class SelectData(EventData):
- def __init__(self, target: Block | None, data: Any):
- super().__init__(target, data)
- self.index: int | tuple[int, int] = data["index"]
- """
- The index of the selected item. Is a tuple if the component is two dimensional or selection is a range.
- """
- self.value: Any = data["value"]
- """
- The value of the selected item.
- """
- self.selected: bool = data.get("selected", True)
- """
- True if the item was selected, False if deselected.
- """
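A minimal sketch (standard gr.Blocks usage rather than code from this file) of how the listeners defined above are consumed: .click() returns a Dependency whose .then() chains a follow-up step, and cancels= is what feeds set_cancel_events():

import gradio as gr

with gr.Blocks() as demo:
    box = gr.Textbox()
    out = gr.Textbox()
    go = gr.Button("Run")
    halt = gr.Button("Cancel")
    click_event = go.click(lambda s: s.upper(), inputs=box, outputs=out)
    click_event.then(lambda s: f"done: {s}", inputs=out, outputs=out)  # runs after the click handler
    halt.click(None, None, None, cancels=[click_event])                # cancels the pending/running click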
diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_headers.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_headers.py
deleted file mode 100644
index 846cca3f1d3c3f000de92840a89fb11e35f2083f..0000000000000000000000000000000000000000
--- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/utils/_headers.py
+++ /dev/null
@@ -1,234 +0,0 @@
-# coding=utf-8
-# Copyright 2022-present, the HuggingFace Inc. team.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""Contains utilities to handle headers to send in calls to Huggingface Hub."""
-from typing import Dict, Optional, Union
-
-from .. import constants
-from ._hf_folder import HfFolder
-from ._runtime import (
- get_fastai_version,
- get_fastcore_version,
- get_hf_hub_version,
- get_python_version,
- get_tf_version,
- get_torch_version,
- is_fastai_available,
- is_fastcore_available,
- is_tf_available,
- is_torch_available,
-)
-from ._validators import validate_hf_hub_args
-
-
-class LocalTokenNotFoundError(EnvironmentError):
- """Raised if local token is required but not found."""
-
-
-@validate_hf_hub_args
-def build_hf_headers(
- *,
- token: Optional[Union[bool, str]] = None,
- is_write_action: bool = False,
- library_name: Optional[str] = None,
- library_version: Optional[str] = None,
- user_agent: Union[Dict, str, None] = None,
-) -> Dict[str, str]:
- """
- Build headers dictionary to send in a HF Hub call.
-
- By default, authorization token is always provided either from argument (explicit
- use) or retrieved from the cache (implicit use). To explicitly avoid sending the
- token to the Hub, set `token=False` or set the `HF_HUB_DISABLE_IMPLICIT_TOKEN`
- environment variable.
-
- In case of an API call that requires write access, an error is thrown if token is
- `None` or token is an organization token (starting with `"api_org***"`).
-
- In addition to the auth header, a user-agent is added to provide information about
- the installed packages (versions of python, huggingface_hub, torch, tensorflow,
- fastai and fastcore).
-
- Args:
- token (`str`, `bool`, *optional*):
- The token to be sent in authorization header for the Hub call:
- - if a string, it is used as the Hugging Face token
- - if `True`, the token is read from the machine (cache or env variable)
- - if `False`, authorization header is not set
- - if `None`, the token is read from the machine only except if
- `HF_HUB_DISABLE_IMPLICIT_TOKEN` env variable is set.
- is_write_action (`bool`, default to `False`):
- Set to True if the API call requires a write access. If `True`, the token
- will be validated (cannot be `None`, cannot start by `"api_org***"`).
- library_name (`str`, *optional*):
- The name of the library that is making the HTTP request. Will be added to
- the user-agent header.
- library_version (`str`, *optional*):
- The version of the library that is making the HTTP request. Will be added
- to the user-agent header.
- user_agent (`str`, `dict`, *optional*):
- The user agent info in the form of a dictionary or a single string. It will
- be completed with information about the installed packages.
-
- Returns:
- A `Dict` of headers to pass in your API call.
-
- Example:
- ```py
- >>> build_hf_headers(token="hf_***") # explicit token
- {"authorization": "Bearer hf_***", "user-agent": ""}
-
- >>> build_hf_headers(token=True) # explicitly use cached token
- {"authorization": "Bearer hf_***",...}
-
- >>> build_hf_headers(token=False) # explicitly don't use cached token
- {"user-agent": ...}
-
- >>> build_hf_headers() # implicit use of the cached token
- {"authorization": "Bearer hf_***",...}
-
- # HF_HUB_DISABLE_IMPLICIT_TOKEN=True # to set as env variable
- >>> build_hf_headers() # token is not sent
- {"user-agent": ...}
-
- >>> build_hf_headers(token="api_org_***", is_write_action=True)
- ValueError: You must use your personal account token for write-access methods.
-
- >>> build_hf_headers(library_name="transformers", library_version="1.2.3")
- {"authorization": ..., "user-agent": "transformers/1.2.3; hf_hub/0.10.2; python/3.10.4; tensorflow/1.55"}
- ```
-
- Raises:
- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
- If organization token is passed and "write" access is required.
- [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError)
- If "write" access is required but token is not passed and not saved locally.
- [`EnvironmentError`](https://docs.python.org/3/library/exceptions.html#EnvironmentError)
- If `token=True` but token is not saved locally.
- """
- # Get auth token to send
- token_to_send = get_token_to_send(token)
- _validate_token_to_send(token_to_send, is_write_action=is_write_action)
-
- # Combine headers
- headers = {
- "user-agent": _http_user_agent(
- library_name=library_name,
- library_version=library_version,
- user_agent=user_agent,
- )
- }
- if token_to_send is not None:
- headers["authorization"] = f"Bearer {token_to_send}"
- return headers
-
-
-def get_token_to_send(token: Optional[Union[bool, str]]) -> Optional[str]:
- """Select the token to send from either `token` or the cache."""
- # Case token is explicitly provided
- if isinstance(token, str):
- return token
-
- # Case token is explicitly forbidden
- if token is False:
- return None
-
- # Token is not provided: we get it from local cache
- cached_token = HfFolder().get_token()
-
- # Case token is explicitly required
- if token is True:
- if cached_token is None:
- raise LocalTokenNotFoundError(
- "Token is required (`token=True`), but no token found. You"
- " need to provide a token or be logged in to Hugging Face with"
- " `huggingface-cli login` or `huggingface_hub.login`. See"
- " https://huggingface.co/settings/tokens."
- )
- return cached_token
-
- # Case implicit use of the token is forbidden by env variable
- if constants.HF_HUB_DISABLE_IMPLICIT_TOKEN:
- return None
-
- # Otherwise: we use the cached token as the user has not explicitly forbidden it
- return cached_token
-
-
-def _validate_token_to_send(token: Optional[str], is_write_action: bool) -> None:
- if is_write_action:
- if token is None:
- raise ValueError(
- "Token is required (write-access action) but no token found. You need"
- " to provide a token or be logged in to Hugging Face with"
- " `huggingface-cli login` or `huggingface_hub.login`. See"
- " https://huggingface.co/settings/tokens."
- )
- if token.startswith("api_org"):
- raise ValueError(
- "You must use your personal account token for write-access methods. To"
- " generate a write-access token, go to"
- " https://huggingface.co/settings/tokens"
- )
-
-
-def _http_user_agent(
- *,
- library_name: Optional[str] = None,
- library_version: Optional[str] = None,
- user_agent: Union[Dict, str, None] = None,
-) -> str:
- """Format a user-agent string containing information about the installed packages.
-
- Args:
- library_name (`str`, *optional*):
- The name of the library that is making the HTTP request.
- library_version (`str`, *optional*):
- The version of the library that is making the HTTP request.
- user_agent (`str`, `dict`, *optional*):
- The user agent info in the form of a dictionary or a single string.
-
- Returns:
- The formatted user-agent string.
- """
- if library_name is not None:
- ua = f"{library_name}/{library_version}"
- else:
- ua = "unknown/None"
- ua += f"; hf_hub/{get_hf_hub_version()}"
- ua += f"; python/{get_python_version()}"
-
- if not constants.HF_HUB_DISABLE_TELEMETRY:
- if is_torch_available():
- ua += f"; torch/{get_torch_version()}"
- if is_tf_available():
- ua += f"; tensorflow/{get_tf_version()}"
- if is_fastai_available():
- ua += f"; fastai/{get_fastai_version()}"
- if is_fastcore_available():
- ua += f"; fastcore/{get_fastcore_version()}"
-
- if isinstance(user_agent, dict):
- ua += "; " + "; ".join(f"{k}/{v}" for k, v in user_agent.items())
- elif isinstance(user_agent, str):
- ua += "; " + user_agent
-
- return _deduplicate_user_agent(ua)
-
-
-def _deduplicate_user_agent(user_agent: str) -> str:
- """Deduplicate redundant information in the generated user-agent."""
- # Split around ";" > Strip whitespaces > Store as dict keys (ensure unicity) > format back as string
- # Order is implicitly preserved by dictionary structure (see https://stackoverflow.com/a/53657523).
- return "; ".join({key.strip(): None for key in user_agent.split(";")}.keys())
diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/schedulers/scheduling_repaint.py b/spaces/declare-lab/tango/diffusers/src/diffusers/schedulers/scheduling_repaint.py
deleted file mode 100644
index 96af210f06b10513ec72277315c9c1a84c3a5bef..0000000000000000000000000000000000000000
--- a/spaces/declare-lab/tango/diffusers/src/diffusers/schedulers/scheduling_repaint.py
+++ /dev/null
@@ -1,329 +0,0 @@
-# Copyright 2023 ETH Zurich Computer Vision Lab and The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import math
-from dataclasses import dataclass
-from typing import Optional, Tuple, Union
-
-import numpy as np
-import torch
-
-from ..configuration_utils import ConfigMixin, register_to_config
-from ..utils import BaseOutput, randn_tensor
-from .scheduling_utils import SchedulerMixin
-
-
-@dataclass
-class RePaintSchedulerOutput(BaseOutput):
- """
- Output class for the scheduler's step function output.
-
- Args:
- prev_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images):
- Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the
- denoising loop.
- pred_original_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images):
- The predicted denoised sample (x_{0}) based on the model output from
- the current timestep. `pred_original_sample` can be used to preview progress or for guidance.
- """
-
- prev_sample: torch.FloatTensor
- pred_original_sample: torch.FloatTensor
-
-
-# Copied from diffusers.schedulers.scheduling_ddpm.betas_for_alpha_bar
-def betas_for_alpha_bar(num_diffusion_timesteps, max_beta=0.999):
- """
- Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of
- (1-beta) over time from t = [0,1].
-
- Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up
- to that part of the diffusion process.
-
-
- Args:
- num_diffusion_timesteps (`int`): the number of betas to produce.
- max_beta (`float`): the maximum beta to use; use values lower than 1 to
- prevent singularities.
-
- Returns:
- betas (`torch.Tensor`): the betas used by the scheduler to step the model outputs
- """
-
- def alpha_bar(time_step):
- return math.cos((time_step + 0.008) / 1.008 * math.pi / 2) ** 2
-
- betas = []
- for i in range(num_diffusion_timesteps):
- t1 = i / num_diffusion_timesteps
- t2 = (i + 1) / num_diffusion_timesteps
- betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta))
- return torch.tensor(betas, dtype=torch.float32)
-
-
-class RePaintScheduler(SchedulerMixin, ConfigMixin):
- """
- RePaint is a scheduler for DDPM-based inpainting inside a given mask.
-
- [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
- function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
- [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and
- [`~SchedulerMixin.from_pretrained`] functions.
-
- For more details, see the original paper: https://arxiv.org/pdf/2201.09865.pdf
-
- Args:
- num_train_timesteps (`int`): number of diffusion steps used to train the model.
- beta_start (`float`): the starting `beta` value of inference.
- beta_end (`float`): the final `beta` value.
- beta_schedule (`str`):
- the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
- `linear`, `scaled_linear`, or `squaredcos_cap_v2`.
- eta (`float`):
- The weight of the added noise in a diffusion step. Its value is between 0.0 and 1.0; 0.0 corresponds to
- the DDIM scheduler and 1.0 to the DDPM scheduler.
- trained_betas (`np.ndarray`, optional):
- option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc.
- variance_type (`str`):
- options to clip the variance used when adding noise to the denoised sample. Choose from `fixed_small`,
- `fixed_small_log`, `fixed_large`, `fixed_large_log`, `learned` or `learned_range`.
- clip_sample (`bool`, default `True`):
- option to clip predicted sample between -1 and 1 for numerical stability.
-
- """
-
- order = 1
-
- @register_to_config
- def __init__(
- self,
- num_train_timesteps: int = 1000,
- beta_start: float = 0.0001,
- beta_end: float = 0.02,
- beta_schedule: str = "linear",
- eta: float = 0.0,
- trained_betas: Optional[np.ndarray] = None,
- clip_sample: bool = True,
- ):
- if trained_betas is not None:
- self.betas = torch.from_numpy(trained_betas)
- elif beta_schedule == "linear":
- self.betas = torch.linspace(beta_start, beta_end, num_train_timesteps, dtype=torch.float32)
- elif beta_schedule == "scaled_linear":
- # this schedule is very specific to the latent diffusion model.
- self.betas = (
- torch.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=torch.float32) ** 2
- )
- elif beta_schedule == "squaredcos_cap_v2":
- # Glide cosine schedule
- self.betas = betas_for_alpha_bar(num_train_timesteps)
- elif beta_schedule == "sigmoid":
- # GeoDiff sigmoid schedule
- betas = torch.linspace(-6, 6, num_train_timesteps)
- self.betas = torch.sigmoid(betas) * (beta_end - beta_start) + beta_start
- else:
- raise NotImplementedError(f"{beta_schedule} is not implemented for {self.__class__}")
-
- self.alphas = 1.0 - self.betas
- self.alphas_cumprod = torch.cumprod(self.alphas, dim=0)
- self.one = torch.tensor(1.0)
-
- self.final_alpha_cumprod = torch.tensor(1.0)
-
- # standard deviation of the initial noise distribution
- self.init_noise_sigma = 1.0
-
- # setable values
- self.num_inference_steps = None
- self.timesteps = torch.from_numpy(np.arange(0, num_train_timesteps)[::-1].copy())
-
- self.eta = eta
-
- def scale_model_input(self, sample: torch.FloatTensor, timestep: Optional[int] = None) -> torch.FloatTensor:
- """
- Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
- current timestep.
-
- Args:
- sample (`torch.FloatTensor`): input sample
- timestep (`int`, optional): current timestep
-
- Returns:
- `torch.FloatTensor`: scaled input sample
- """
- return sample
-
- def set_timesteps(
- self,
- num_inference_steps: int,
- jump_length: int = 10,
- jump_n_sample: int = 10,
- device: Union[str, torch.device] = None,
- ):
- num_inference_steps = min(self.config.num_train_timesteps, num_inference_steps)
- self.num_inference_steps = num_inference_steps
-
- timesteps = []
-
- jumps = {}
- for j in range(0, num_inference_steps - jump_length, jump_length):
- jumps[j] = jump_n_sample - 1
-
- t = num_inference_steps
- while t >= 1:
- t = t - 1
- timesteps.append(t)
-
- if jumps.get(t, 0) > 0:
- jumps[t] = jumps[t] - 1
- for _ in range(jump_length):
- t = t + 1
- timesteps.append(t)
-
- timesteps = np.array(timesteps) * (self.config.num_train_timesteps // self.num_inference_steps)
- self.timesteps = torch.from_numpy(timesteps).to(device)
-
- def _get_variance(self, t):
- prev_timestep = t - self.config.num_train_timesteps // self.num_inference_steps
-
- alpha_prod_t = self.alphas_cumprod[t]
- alpha_prod_t_prev = self.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else self.final_alpha_cumprod
- beta_prod_t = 1 - alpha_prod_t
- beta_prod_t_prev = 1 - alpha_prod_t_prev
-
- # For t > 0, compute predicted variance βt (see formula (6) and (7) from
- # https://arxiv.org/pdf/2006.11239.pdf) and sample from it to get
- # previous sample x_{t-1} ~ N(pred_prev_sample, variance) == add
- # variance to pred_sample
- # Is equivalent to formula (16) in https://arxiv.org/pdf/2010.02502.pdf
- # without eta.
- # variance = (1 - alpha_prod_t_prev) / (1 - alpha_prod_t) * self.betas[t]
- variance = (beta_prod_t_prev / beta_prod_t) * (1 - alpha_prod_t / alpha_prod_t_prev)
-
- return variance
-
- def step(
- self,
- model_output: torch.FloatTensor,
- timestep: int,
- sample: torch.FloatTensor,
- original_image: torch.FloatTensor,
- mask: torch.FloatTensor,
- generator: Optional[torch.Generator] = None,
- return_dict: bool = True,
- ) -> Union[RePaintSchedulerOutput, Tuple]:
- """
- Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
- process from the learned model outputs (most often the predicted noise).
-
- Args:
- model_output (`torch.FloatTensor`): direct output from learned
- diffusion model.
- timestep (`int`): current discrete timestep in the diffusion chain.
- sample (`torch.FloatTensor`):
- current instance of sample being created by diffusion process.
- original_image (`torch.FloatTensor`):
- the original image to inpaint on.
- mask (`torch.FloatTensor`):
- the mask where 0.0 values define which part of the original image to inpaint (change).
- generator (`torch.Generator`, *optional*): random number generator.
- return_dict (`bool`): option for returning a tuple rather than a
- RePaintSchedulerOutput class
-
- Returns:
- [`~schedulers.scheduling_utils.RePaintSchedulerOutput`] or `tuple`:
- [`~schedulers.scheduling_utils.RePaintSchedulerOutput`] if `return_dict` is True, otherwise a `tuple`. When
- returning a tuple, the first element is the sample tensor.
-
- """
- t = timestep
- prev_timestep = timestep - self.config.num_train_timesteps // self.num_inference_steps
-
- # 1. compute alphas, betas
- alpha_prod_t = self.alphas_cumprod[t]
- alpha_prod_t_prev = self.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else self.final_alpha_cumprod
- beta_prod_t = 1 - alpha_prod_t
-
- # 2. compute predicted original sample from predicted noise also called
- # "predicted x_0" of formula (15) from https://arxiv.org/pdf/2006.11239.pdf
- pred_original_sample = (sample - beta_prod_t**0.5 * model_output) / alpha_prod_t**0.5
-
- # 3. Clip "predicted x_0"
- if self.config.clip_sample:
- pred_original_sample = torch.clamp(pred_original_sample, -1, 1)
-
- # We choose to follow RePaint Algorithm 1 to get x_{t-1}, however we
- # substitute formula (7) in the algorithm coming from DDPM paper
- # (formula (4) Algorithm 2 - Sampling) with formula (12) from DDIM paper.
- # DDIM schedule gives the same results as DDPM with eta = 1.0
- # Noise is being reused in 7. and 8., but no impact on quality has
- # been observed.
-
- # 5. Add noise
- device = model_output.device
- noise = randn_tensor(model_output.shape, generator=generator, device=device, dtype=model_output.dtype)
- std_dev_t = self.eta * self._get_variance(timestep) ** 0.5
-
- variance = 0
- if t > 0 and self.eta > 0:
- variance = std_dev_t * noise
-
- # 6. compute "direction pointing to x_t" of formula (12)
- # from https://arxiv.org/pdf/2010.02502.pdf
- pred_sample_direction = (1 - alpha_prod_t_prev - std_dev_t**2) ** 0.5 * model_output
-
- # 7. compute x_{t-1} of formula (12) from https://arxiv.org/pdf/2010.02502.pdf
- prev_unknown_part = alpha_prod_t_prev**0.5 * pred_original_sample + pred_sample_direction + variance
-
- # 8. Algorithm 1 Line 5 https://arxiv.org/pdf/2201.09865.pdf
- prev_known_part = (alpha_prod_t_prev**0.5) * original_image + ((1 - alpha_prod_t_prev) ** 0.5) * noise
-
- # 9. Algorithm 1 Line 8 https://arxiv.org/pdf/2201.09865.pdf
- pred_prev_sample = mask * prev_known_part + (1.0 - mask) * prev_unknown_part
-
- if not return_dict:
- return (
- pred_prev_sample,
- pred_original_sample,
- )
-
- return RePaintSchedulerOutput(prev_sample=pred_prev_sample, pred_original_sample=pred_original_sample)
-
- def undo_step(self, sample, timestep, generator=None):
- n = self.config.num_train_timesteps // self.num_inference_steps
-
- for i in range(n):
- beta = self.betas[timestep + i]
- if sample.device.type == "mps":
- # randn does not work reproducibly on mps
- noise = randn_tensor(sample.shape, dtype=sample.dtype, generator=generator)
- noise = noise.to(sample.device)
- else:
- noise = randn_tensor(sample.shape, generator=generator, device=sample.device, dtype=sample.dtype)
-
- # 10. Algorithm 1 Line 10 https://arxiv.org/pdf/2201.09865.pdf
- sample = (1 - beta) ** 0.5 * sample + beta**0.5 * noise
-
- return sample
-
- def add_noise(
- self,
- original_samples: torch.FloatTensor,
- noise: torch.FloatTensor,
- timesteps: torch.IntTensor,
- ) -> torch.FloatTensor:
- raise NotImplementedError("Use `DDPMScheduler.add_noise()` to train for sampling with RePaint.")
-
- def __len__(self):
- return self.config.num_train_timesteps
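The distinctive part of `set_timesteps` above is the RePaint resampling schedule: a descending sequence of timesteps interleaved with periodic forward "jumps" of `jump_length` steps, so that each region is traversed about `jump_n_sample` times. A standalone sketch of that construction, using only NumPy and example argument values:

```python
# Standalone reconstruction of the RePaint timestep schedule built in set_timesteps.
import numpy as np


def repaint_timesteps(num_inference_steps=50, jump_length=10, jump_n_sample=10,
                      num_train_timesteps=1000):
    jumps = {j: jump_n_sample - 1
             for j in range(0, num_inference_steps - jump_length, jump_length)}
    timesteps = []
    t = num_inference_steps
    while t >= 1:
        t -= 1
        timesteps.append(t)
        if jumps.get(t, 0) > 0:       # jump forward to resample this region again
            jumps[t] -= 1
            for _ in range(jump_length):
                t += 1
                timesteps.append(t)
    # rescale to the training-timestep range, as the scheduler does
    return np.array(timesteps) * (num_train_timesteps // num_inference_steps)


if __name__ == "__main__":
    ts = repaint_timesteps(num_inference_steps=20, jump_length=5, jump_n_sample=2)
    print(ts[:15])  # descending steps interleaved with forward jumps
```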
diff --git a/spaces/deepwisdom/MetaGPT/examples/search_with_specific_engine.py b/spaces/deepwisdom/MetaGPT/examples/search_with_specific_engine.py
deleted file mode 100644
index 4423011e48daa6bebd00bbff62414108b8f8a1c9..0000000000000000000000000000000000000000
--- a/spaces/deepwisdom/MetaGPT/examples/search_with_specific_engine.py
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-"""
-@Modified By: mashenquan, 2023-8-9, fix-bug: cannot find metagpt module.
-"""
-import asyncio
-from pathlib import Path
-import sys
-sys.path.append(str(Path(__file__).resolve().parent.parent))
-from metagpt.roles import Searcher
-from metagpt.tools import SearchEngineType
-
-
-async def main():
- # Serper API
- #await Searcher(engine = SearchEngineType.SERPER_GOOGLE).run(["What are some good sun protection products?","What are some of the best beaches?"])
- # SerpAPI
- #await Searcher(engine=SearchEngineType.SERPAPI_GOOGLE).run("What are the best ski brands for skiers?")
- # Google API
- await Searcher(engine=SearchEngineType.DIRECT_GOOGLE).run("What are the most interesting human facts?")
-
-if __name__ == '__main__':
- asyncio.run(main())
diff --git a/spaces/deepwisdom/MetaGPT/tests/metagpt/utils/test_code_parser.py b/spaces/deepwisdom/MetaGPT/tests/metagpt/utils/test_code_parser.py
deleted file mode 100644
index 707b558e1fb991bea5c253f52548895f1a3126d8..0000000000000000000000000000000000000000
--- a/spaces/deepwisdom/MetaGPT/tests/metagpt/utils/test_code_parser.py
+++ /dev/null
@@ -1,140 +0,0 @@
-#!/usr/bin/env python
-# coding: utf-8
-"""
-@Time : 2023/7/10 17:14
-@Author : chengmaoyu
-@File : test_code_parser.py
-"""
-
-import pytest
-
-from metagpt.utils.common import CodeParser
-
-t_text = '''
-## Required Python third-party packages
-```python
-"""
-flask==1.1.2
-pygame==2.0.1
-"""
-```
-
-## Required Other language third-party packages
-```python
-"""
-No third-party packages required for other languages.
-"""
-```
-
-## Full API spec
-```python
-"""
-openapi: 3.0.0
-info:
- title: Web Snake Game API
- version: 1.0.0
-paths:
- /game:
- get:
- summary: Get the current game state
- responses:
- '200':
- description: A JSON object of the game state
- post:
- summary: Send a command to the game
- requestBody:
- required: true
- content:
- application/json:
- schema:
- type: object
- properties:
- command:
- type: string
- responses:
- '200':
- description: A JSON object of the updated game state
-"""
-```
-
-## Logic Analysis
-```python
-[
- ("app.py", "Main entry point for the Flask application. Handles HTTP requests and responses."),
- ("game.py", "Contains the Game and Snake classes. Handles the game logic."),
- ("static/js/script.js", "Handles user interactions and updates the game UI."),
- ("static/css/styles.css", "Defines the styles for the game UI."),
- ("templates/index.html", "The main page of the web application. Displays the game UI.")
-]
-```
-
-## Task list
-```python
-[
- "game.py",
- "app.py",
- "static/css/styles.css",
- "static/js/script.js",
- "templates/index.html"
-]
-```
-
-## Shared Knowledge
-```python
-"""
-'game.py' contains the Game and Snake classes which are responsible for the game logic. The Game class uses an instance of the Snake class.
-
-'app.py' is the main entry point for the Flask application. It creates an instance of the Game class and handles HTTP requests and responses.
-
-'static/js/script.js' is responsible for handling user interactions and updating the game UI based on the game state returned by 'app.py'.
-
-'static/css/styles.css' defines the styles for the game UI.
-
-'templates/index.html' is the main page of the web application. It displays the game UI and loads 'static/js/script.js' and 'static/css/styles.css'.
-"""
-```
-
-## Anything UNCLEAR
-We need clarification on how the high score should be stored. Should it persist across sessions (stored in a database or a file) or should it reset every time the game is restarted? Also, should the game speed increase as the snake grows, or should it remain constant throughout the game?
- '''
-
-
-class TestCodeParser:
- @pytest.fixture
- def parser(self):
- return CodeParser()
-
- @pytest.fixture
- def text(self):
- return t_text
-
- def test_parse_blocks(self, parser, text):
- result = parser.parse_blocks(text)
- print(result)
- assert result == {"title": "content", "title2": "content2"}
-
- def test_parse_block(self, parser, text):
- result = parser.parse_block("title", text)
- print(result)
- assert result == "content"
-
- def test_parse_code(self, parser, text):
- result = parser.parse_code("title", text, "python")
- print(result)
- assert result == "print('hello world')"
-
- def test_parse_str(self, parser, text):
- result = parser.parse_str("title", text, "python")
- print(result)
- assert result == "hello world"
-
- def test_parse_file_list(self, parser, text):
- result = parser.parse_file_list("Task list", text)
- print(result)
- assert result == ['task1', 'task2']
-
-
-if __name__ == '__main__':
- t = TestCodeParser()
- t.test_parse_file_list(CodeParser(), t_text)
- # TestCodeParser.test_parse_file_list()
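The fixture above exercises `CodeParser` on a markdown report made of `## Title` sections containing fenced code blocks. The sketch below is a rough, self-contained approximation of that kind of parsing, written only to illustrate the input format; it is not the MetaGPT implementation, whose actual return values differ.

```python
# Illustrative section/code-block parsing over a "## Title" + fenced-code report.
import re


def parse_blocks(text: str) -> dict:
    blocks = {}
    for chunk in text.split("##"):
        if not chunk.strip():
            continue
        title, _, body = chunk.partition("\n")
        blocks[title.strip()] = body.strip()
    return blocks


def parse_code(title: str, text: str, lang: str = "python") -> str:
    block = parse_blocks(text)[title]
    match = re.search(rf"```{lang}\n(.*?)```", block, re.DOTALL)
    return match.group(1).strip() if match else block


if __name__ == "__main__":
    sample = '## Task list\n```python\n[\n    "game.py",\n    "app.py"\n]\n```\n'
    print(parse_code("Task list", sample))
```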
diff --git a/spaces/devthedeveloper/Bark-with-Voice-Cloning/README.md b/spaces/devthedeveloper/Bark-with-Voice-Cloning/README.md
deleted file mode 100644
index d2fc2675057f50dd3395b5d9831584e264072697..0000000000000000000000000000000000000000
--- a/spaces/devthedeveloper/Bark-with-Voice-Cloning/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Bark with Voice Cloning
-emoji: 📊
-colorFrom: purple
-colorTo: purple
-sdk: gradio
-sdk_version: 3.34.0
-app_file: app.py
-pinned: true
-license: mit
-duplicated_from: kevinwang676/Bark-with-Voice-Cloning
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Easeus Data Recovery WIZARD Professional 4.3.6 (retail) LINK.md b/spaces/diacanFperku/AutoGPT/Easeus Data Recovery WIZARD Professional 4.3.6 (retail) LINK.md
deleted file mode 100644
index 7893896efd13475d8944269cfc7f7c2a2f7c8cf9..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Easeus Data Recovery WIZARD Professional 4.3.6 (retail) LINK.md
+++ /dev/null
@@ -1,19 +0,0 @@
-
-How to Recover Lost Data with EaseUS Data Recovery Wizard Professional 4.3.6 (Retail)
-Have you ever lost important files due to accidental deletion, formatting, virus attack, or other reasons? If so, you may be looking for a reliable and easy-to-use data recovery software to help you get them back. One of the best options on the market is EaseUS Data Recovery Wizard Professional 4.3.6 (Retail), a powerful and comprehensive tool that can recover data from various devices and scenarios.
-easeus Data Recovery WIZARD Professional 4.3.6 (retail) DOWNLOAD » https://gohhs.com/2uFTfW
-In this article, we will show you how to use EaseUS Data Recovery Wizard Professional 4.3.6 (Retail) to recover your lost data in three simple steps. Whether you need to recover data from a hard drive, USB flash drive, memory card, digital camera, or any other storage device, this software can handle it.
-Step 1: Download and Install EaseUS Data Recovery Wizard Professional 4.3.6 (Retail)
-The first step is to download and install EaseUS Data Recovery Wizard Professional 4.3.6 (Retail) on your computer. You can get it from the official website or from any reputable online store that sells software products. Make sure you download the correct version for your operating system (Windows or Mac).
-After downloading the software, run the setup file and follow the instructions to install it on your computer. You may need to enter a license code to activate the software if you purchased it from a retail store. The license code is usually printed on the CD case or the receipt.
-Step 2: Select a Location and Scan for Lost Data
-The second step is to select a location where you lost your data and start scanning for it. Launch EaseUS Data Recovery Wizard Professional 4.3.6 (Retail) and you will see a list of available locations on your computer and connected devices. You can also click on "Specify a location" to manually enter a path or browse for a folder.
-Select the location that contains your lost data and click on "Scan" to start searching for it. The software will first perform a quick scan that will take a few minutes and then a deep scan that will take longer depending on the size and condition of your device. You can pause or stop the scan at any time if you find what you need.
-Step 3: Preview and Recover Your Lost Data
-The final step is to preview and recover your lost data. After the scan is completed, you will see a list of files that were found by the software. You can filter them by file type, date, size, or name to narrow down your search. You can also use the search box to find specific files by keywords.
-
-To preview a file, simply click on it and you will see a thumbnail or a text preview on the right pane. You can also double-click on it to open it in its default program if it is available on your computer. To recover a file, simply check the box next to it and click on "Recover" at the bottom right corner.
-You will be asked to choose a destination folder where you want to save your recovered files. Make sure you do not save them to the same location where you lost them, as this may overwrite them and make them unrecoverable. Choose a different drive or device and click on "OK" to start recovering your files.
-Wait for the recovery process to finish and then check your recovered files in the destination folder. You have successfully recovered your lost data with EaseUS Data Recovery Wizard Professional 4.3.6 (Retail)!
d5da3c52bf
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/Franceschetti Campi Elettromagnetici Pdf Download [WORK].md b/spaces/diacanFperku/AutoGPT/Franceschetti Campi Elettromagnetici Pdf Download [WORK].md
deleted file mode 100644
index a2299f92916db4ca52e4a38770ef72014a2e3bd5..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/Franceschetti Campi Elettromagnetici Pdf Download [WORK].md
+++ /dev/null
@@ -1,6 +0,0 @@
-Franceschetti Campi Elettromagnetici Pdf Download Download File - https://gohhs.com/2uFTCc
-
-Download formatter v2.9.0.9.exe. ... Franceschetti Campi Elettromagnetici Pdf Downloadl ... LoveShhuda Full Movie With English Subtitles Download Torrentl. 1fdad05405
-
-
-
diff --git a/spaces/diacanFperku/AutoGPT/G V Kumbhojkar Pdf Download VERIFIED.md b/spaces/diacanFperku/AutoGPT/G V Kumbhojkar Pdf Download VERIFIED.md
deleted file mode 100644
index fd603ad1727628d3429672e389ae34a416ba845b..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/G V Kumbhojkar Pdf Download VERIFIED.md
+++ /dev/null
@@ -1,117 +0,0 @@
-
-G V Kumbhojkar PDF Download: A Comprehensive Resource for Applied Mathematics
-
-If you are looking for a reliable and easy-to-follow textbook for applied mathematics, you might want to check out G V Kumbhojkar PDF download. This book covers various topics in mathematics that are essential for engineering students, such as differential equations, Laplace transforms, Fourier series, complex analysis, and more. In this article, we will give you an overview of the book and its features, as well as some tips on how to download it for free.
-
-What is G V Kumbhojkar PDF Download?
-
-G V Kumbhojkar PDF download is a digital version of the book Applied Mathematics III by G V Kumbhojkar, a professor of mathematics at the University of Mumbai. The book is designed for engineering students who are studying mathematics as a core subject in their curriculum. The book follows the syllabus of the University of Mumbai and other universities in India.
-g v kumbhojkar pdf download Download ✸ https://gohhs.com/2uFTLq
-
-The book has 14 chapters that cover various topics in applied mathematics, such as:
-
-
-Linear differential equations of higher order
-Series solutions of differential equations
-Legendre's equation and Legendre polynomials
-Bessel's equation and Bessel functions
-Laplace transforms and their applications
-Fourier series and Fourier transforms
-Partial differential equations and their applications
-Functions of a complex variable and analytic functions
-Cauchy-Riemann equations and harmonic functions
-Complex integration and Cauchy's theorem
-Cauchy's integral formula and Taylor's series
-Laurent's series and residue theorem
-Conformal mapping and bilinear transformation
-Special functions and integral transforms
-
-
-The book explains the concepts and methods in a clear and concise manner, with plenty of examples and solved problems. The book also has exercises at the end of each chapter, with answers and hints provided at the end of the book. The book is suitable for self-study as well as classroom learning.
-
-How to Download G V Kumbhojkar PDF Download for Free?
-
-If you want to download G V Kumbhojkar PDF download for free, you have several options to choose from. Here are some of them:
-
-
-You can use Google Drive to access the PDF file of the book. Just click on this link and sign in with your Google account. You can then view or download the file as you wish.
-You can use Academia.edu to download the PDF file of the book. Just click on this link and create a free account or log in with your existing account. You can then download the file or read it online.
-You can use Course Hero to download the PDF file of the book. Just click on this link and create a free account or log in with your existing account. You can then download the file or read it online.
-
-
-However, before you download G V Kumbhojkar PDF download for free, you should be aware of some potential risks and limitations. For instance:
-
-
-The quality and accuracy of the PDF file may not be guaranteed. There may be errors or missing pages in the file.
-The PDF file may not be updated or revised according to the latest edition or syllabus of the book.
-The PDF file may not be legal or authorized by the author or publisher of the book. You may be violating their intellectual property rights by downloading or sharing the file.
-The PDF file may contain viruses or malware that can harm your device or compromise your security.
-
-
-Therefore, if you want to avoid these risks and limitations, you should consider buying the original book from a reputable source. You can find the book on various online platforms, such as Amazon, Flipkart, Snapdeal, etc. The price of the book may vary depending on the seller and delivery options.
-
-Conclusion
-
-G V Kumbhojkar PDF download is a useful resource for engineering students who want to learn applied mathematics. The book covers various topics in mathematics that are relevant for engineering applications, such as differential equations, Laplace transforms, Fourier series, complex analysis, etc. The book explains the concepts and methods in a simple and lucid manner, with plenty of examples and solved problems. The book also has exercises at the end of each chapter, with answers and hints provided at the end of the book.
-
-If you want to download G V Kumbhojkar PDF download for free, you have several options to choose from, such as Google Drive, Academia.edu, Course Hero, etc. However, you should be aware of some potential risks and limitations that come with downloading or sharing unauthorized or illegal copies of the book. You may face issues such as poor quality, outdated content, legal violations, or security threats.
-
-Therefore, if you want to get the best value and experience from G V Kumbhojkar PDF download, you should consider buying the original book from a reliable source. You can find the book on various online platforms, such as Amazon, Flipkart, Snapdeal, etc. The price of the book may vary depending on the seller and delivery options.
-
-
-We hope this article has given you some useful information about G V Kumbhojkar PDF download. If you have any questions or feedback, please feel free to leave a comment below.
-What are the Benefits of G V Kumbhojkar PDF Download?
-
-G V Kumbhojkar PDF download has many benefits for engineering students who want to learn applied mathematics. Some of these benefits are:
-
-
-It saves time and money. You don't have to buy or carry a heavy book around. You can access the PDF file anytime and anywhere with your device and internet connection.
-It enhances learning and understanding. You can read the book at your own pace and convenience. You can also zoom in, highlight, bookmark, or annotate the PDF file as you wish.
-It prepares you for exams and interviews. The book covers the syllabus and topics that are frequently asked in engineering exams and interviews. The book also has exercises and solved problems that help you practice and test your knowledge.
-It boosts your confidence and performance. The book helps you master the concepts and methods of applied mathematics that are essential for engineering applications. The book also helps you develop your analytical and problem-solving skills.
-
-
-What are the Alternatives to G V Kumbhojkar PDF Download?
-
-If you are looking for alternatives to G V Kumbhojkar PDF download, you have some options to choose from. Here are some of them:
-
-
-You can use other books on applied mathematics that are written by different authors or publishers. Some examples are Applied Mathematics by Erwin Kreyszig, Higher Engineering Mathematics by B S Grewal, Advanced Engineering Mathematics by R K Jain and S R K Iyengar, etc.
-You can use online courses or videos on applied mathematics that are offered by various platforms or instructors. Some examples are Khan Academy, Coursera, Udemy, NPTEL, etc.
-You can use online forums or communities on applied mathematics that are created by students or experts. Some examples are Math Stack Exchange, Reddit, Quora, etc.
-
-
-However, before you use any of these alternatives to G V Kumbhojkar PDF download, you should be aware of some potential drawbacks and challenges. For instance:
-
-
-The quality and relevance of the content may not be consistent or guaranteed. There may be errors or gaps in the content.
-The content may not be updated or revised according to the latest edition or syllabus of the book.
-The content may not be suitable or accessible for your level or preference of learning. You may face difficulties in finding or following the content.
-The content may not be free or affordable. You may have to pay a fee or subscription to access the content.
-
-
-Therefore, if you want to use any of these alternatives to G V Kumbhojkar PDF download, you should do some research and comparison before making a decision. You should also check the reviews and ratings of the content from other users or sources.
-How to Use G V Kumbhojkar PDF Download Effectively?
-
-G V Kumbhojkar PDF download is a valuable resource for engineering students who want to learn applied mathematics. However, simply downloading or reading the book is not enough. You need to use the book effectively to get the most out of it. Here are some tips on how to use G V Kumbhojkar PDF download effectively:
-
-
-Plan your study schedule and goals. You should have a clear idea of what topics you want to cover, how much time you have, and what outcomes you expect. You should also set realistic and achievable goals for yourself.
-Review the concepts and methods before solving the problems. You should not jump into the exercises without understanding the theory and logic behind them. You should review the concepts and methods explained in the book and make sure you grasp them well.
-Solve the problems step by step and check your answers. You should not skip any steps or make any assumptions when solving the problems. You should follow the methods and formulas given in the book and show your work clearly. You should also check your answers with the solutions provided at the end of the book or online.
-Practice regularly and revise frequently. You should not cram or procrastinate when studying applied mathematics. You should practice regularly and revise frequently to reinforce your learning and memory. You should also review your mistakes and learn from them.
-Seek help when needed. You should not hesitate to ask for help when you encounter any difficulties or doubts. You can seek help from your teachers, classmates, tutors, online forums, etc. You can also use other resources such as books, videos, courses, etc. to supplement your learning.
-
-
-By following these tips, you can use G V Kumbhojkar PDF download effectively and improve your skills and knowledge in applied mathematics.
-Conclusion
-
-G V Kumbhojkar PDF download is a comprehensive and reliable textbook for applied mathematics. The book covers various topics in mathematics that are essential for engineering students, such as differential equations, Laplace transforms, Fourier series, complex analysis, etc. The book explains the concepts and methods in a simple and lucid manner, with plenty of examples and solved problems. The book also has exercises at the end of each chapter, with answers and hints provided at the end of the book.
-
-If you want to download G V Kumbhojkar PDF download for free, you have several options to choose from, such as Google Drive, Academia.edu, Course Hero, etc. However, you should be aware of some potential risks and limitations that come with downloading or sharing unauthorized or illegal copies of the book. You may face issues such as poor quality, outdated content, legal violations, or security threats.
-
-Therefore, if you want to get the best value and experience from G V Kumbhojkar PDF download, you should consider buying the original book from a reputable source. You can find the book on various online platforms, such as Amazon, Flipkart, Snapdeal, etc. The price of the book may vary depending on the seller and delivery options.
-
-We hope this article has given you some useful information about G V Kumbhojkar PDF download. If you have any questions or feedback, please feel free to leave a comment below.
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/diacanFperku/AutoGPT/NEW Download Naruto Shippuden Season 9 Eng Dub Torrent.md b/spaces/diacanFperku/AutoGPT/NEW Download Naruto Shippuden Season 9 Eng Dub Torrent.md
deleted file mode 100644
index d8c5e3bf35791d176d9a03a103432475336bdc91..0000000000000000000000000000000000000000
--- a/spaces/diacanFperku/AutoGPT/NEW Download Naruto Shippuden Season 9 Eng Dub Torrent.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Download Naruto Shippuden Season 9 Eng Dub Torrent Download File ❤❤❤ https://gohhs.com/2uFTFK
-
-Download: Naruto Complete, Found: 33 Results, Updated: 08-Dec-2020. ... Naruto Shippuden English Dubbed Season 9 Complete $M@nI$, 7 years, TV, 21 ... 4d29de3e1b
-
-
-
diff --git a/spaces/digitalxingtong/Jiuxia-Bert-Vits2/losses.py b/spaces/digitalxingtong/Jiuxia-Bert-Vits2/losses.py
deleted file mode 100644
index fb22a0e834dd87edaa37bb8190eee2c3c7abe0d5..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Jiuxia-Bert-Vits2/losses.py
+++ /dev/null
@@ -1,61 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import commons
-
-
-def feature_loss(fmap_r, fmap_g):
- loss = 0
- for dr, dg in zip(fmap_r, fmap_g):
- for rl, gl in zip(dr, dg):
- rl = rl.float().detach()
- gl = gl.float()
- loss += torch.mean(torch.abs(rl - gl))
-
- return loss * 2
-
-
-def discriminator_loss(disc_real_outputs, disc_generated_outputs):
- loss = 0
- r_losses = []
- g_losses = []
- for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
- dr = dr.float()
- dg = dg.float()
- r_loss = torch.mean((1-dr)**2)
- g_loss = torch.mean(dg**2)
- loss += (r_loss + g_loss)
- r_losses.append(r_loss.item())
- g_losses.append(g_loss.item())
-
- return loss, r_losses, g_losses
-
-
-def generator_loss(disc_outputs):
- loss = 0
- gen_losses = []
- for dg in disc_outputs:
- dg = dg.float()
- l = torch.mean((1-dg)**2)
- gen_losses.append(l)
- loss += l
-
- return loss, gen_losses
-
-
-def kl_loss(z_p, logs_q, m_p, logs_p, z_mask):
- """
- z_p, logs_q: [b, h, t_t]
- m_p, logs_p: [b, h, t_t]
- """
- z_p = z_p.float()
- logs_q = logs_q.float()
- m_p = m_p.float()
- logs_p = logs_p.float()
- z_mask = z_mask.float()
-
- kl = logs_p - logs_q - 0.5
- kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p)
- kl = torch.sum(kl * z_mask)
- l = kl / torch.sum(z_mask)
- return l
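The GAN losses above all expect lists of per-discriminator outputs (`feature_loss` takes nested lists of feature maps). A minimal smoke test with random tensors is sketched below; it assumes the module above is importable as `losses`, matching its file name, with its own dependencies available.

```python
# Smoke test for the GAN loss helpers with dummy discriminator outputs.
import torch

from losses import discriminator_loss, feature_loss, generator_loss  # module name assumed

if __name__ == "__main__":
    torch.manual_seed(0)
    disc_real = [torch.rand(2, 1, 16), torch.rand(2, 1, 8)]   # one entry per sub-discriminator
    disc_fake = [torch.rand(2, 1, 16), torch.rand(2, 1, 8)]
    fmap_r = [[torch.rand(2, 4, 16)], [torch.rand(2, 4, 8)]]  # per-discriminator feature maps
    fmap_g = [[torch.rand(2, 4, 16)], [torch.rand(2, 4, 8)]]

    d_loss, r_losses, g_losses = discriminator_loss(disc_real, disc_fake)
    g_loss, gen_losses = generator_loss(disc_fake)
    fm_loss = feature_loss(fmap_r, fmap_g)
    print(d_loss.item(), g_loss.item(), fm_loss.item())
```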
diff --git a/spaces/digitalxingtong/Nailv-read-Bert-Vits2/losses.py b/spaces/digitalxingtong/Nailv-read-Bert-Vits2/losses.py
deleted file mode 100644
index fb22a0e834dd87edaa37bb8190eee2c3c7abe0d5..0000000000000000000000000000000000000000
--- a/spaces/digitalxingtong/Nailv-read-Bert-Vits2/losses.py
+++ /dev/null
@@ -1,61 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-import commons
-
-
-def feature_loss(fmap_r, fmap_g):
- loss = 0
- for dr, dg in zip(fmap_r, fmap_g):
- for rl, gl in zip(dr, dg):
- rl = rl.float().detach()
- gl = gl.float()
- loss += torch.mean(torch.abs(rl - gl))
-
- return loss * 2
-
-
-def discriminator_loss(disc_real_outputs, disc_generated_outputs):
- loss = 0
- r_losses = []
- g_losses = []
- for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
- dr = dr.float()
- dg = dg.float()
- r_loss = torch.mean((1-dr)**2)
- g_loss = torch.mean(dg**2)
- loss += (r_loss + g_loss)
- r_losses.append(r_loss.item())
- g_losses.append(g_loss.item())
-
- return loss, r_losses, g_losses
-
-
-def generator_loss(disc_outputs):
- loss = 0
- gen_losses = []
- for dg in disc_outputs:
- dg = dg.float()
- l = torch.mean((1-dg)**2)
- gen_losses.append(l)
- loss += l
-
- return loss, gen_losses
-
-
-def kl_loss(z_p, logs_q, m_p, logs_p, z_mask):
- """
- z_p, logs_q: [b, h, t_t]
- m_p, logs_p: [b, h, t_t]
- """
- z_p = z_p.float()
- logs_q = logs_q.float()
- m_p = m_p.float()
- logs_p = logs_p.float()
- z_mask = z_mask.float()
-
- kl = logs_p - logs_q - 0.5
- kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p)
- kl = torch.sum(kl * z_mask)
- l = kl / torch.sum(z_mask)
- return l
diff --git a/spaces/doevent/prompt-generator/app.py b/spaces/doevent/prompt-generator/app.py
deleted file mode 100644
index a535880ec59270152e13461316b7ad0f06004eb9..0000000000000000000000000000000000000000
--- a/spaces/doevent/prompt-generator/app.py
+++ /dev/null
@@ -1,56 +0,0 @@
-from transformers import pipeline, set_seed
-import gradio as grad
-import random
-import re
-
-gpt2_pipe = pipeline('text-generation', model='succinctly/text2image-prompt-generator')
-
-with open("name.txt", "r") as f:
- line = f.readlines()
-
-
-def generate(starting_text):
- for count in range(6):
- seed = random.randint(100, 1000000)
- set_seed(seed)
-
- # If the text field is empty
- if starting_text == "":
- starting_text: str = line[random.randrange(0, len(line))].replace("\n", "").lower().capitalize()
- starting_text: str = re.sub(r"[,:\-–.!;?_]", '', starting_text)
- print(starting_text)
-
- response = gpt2_pipe(starting_text, max_length=random.randint(60, 90), num_return_sequences=8)
- response_list = []
- for x in response:
- resp = x['generated_text'].strip()
- if resp != starting_text and len(resp) > (len(starting_text) + 4) and resp.endswith((":", "-", "—")) is False:
- response_list.append(resp)
-
- response_end = "\n".join(response_list)
- response_end = re.sub(r'[^ ]+\.[^ ]+', '', response_end)
- response_end = response_end.replace("<", "").replace(">", "")
- if response_end != "":
- return response_end
- if count == 5:
- return response_end
-
-
-txt = grad.Textbox(lines=1, label="English", placeholder="English Text here")
-out = grad.Textbox(lines=6, label="Generated Text")
-examples = [["mythology of the Slavs"], ["All-seeing eye monitors these world"], ["astronaut dog"],
- ["A monochrome forest of ebony trees"], ["sad view of worker in office,"],
- ["Headshot photo portrait of John Lennon"], ["wide field with thousands of blue nemophila,"]]
-title = "Midjourney Prompt Generator"
-description = "This is an unofficial demo for Midjourney Prompt Generator. To use it, simply send your text, or click one of the examples to load them. Read more at the links below. Model: https://huggingface.co/succinctly/text2image-prompt-generator Telegram bot: https://t.me/prompt_generator_bot [](https://twitter.com/DoEvent)"
-article = ""
-
-grad.Interface(fn=generate,
- inputs=txt,
- outputs=out,
- examples=examples,
- title=title,
- description=description,
- article=article,
- allow_flagging='never',
- cache_examples=False).queue(concurrency_count=1, api_open=False).launch(show_api=False, show_error=True)
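Most of `generate()` above is post-processing: drop candidates that merely echo the prompt or end with a dangling separator, then scrub `word.word` tokens (URL-like fragments) and angle brackets from the joined result. A dependency-free sketch of just that filtering step, with made-up sample strings:

```python
# Filtering logic applied to generated prompt candidates (sample data is invented).
import re


def clean_candidates(starting_text: str, candidates: list) -> str:
    kept = []
    for resp in (c.strip() for c in candidates):
        if (resp != starting_text
                and len(resp) > len(starting_text) + 4
                and not resp.endswith((":", "-", "—"))):
            kept.append(resp)
    text = "\n".join(kept)
    text = re.sub(r'[^ ]+\.[^ ]+', '', text)   # drop tokens containing a dot, e.g. domains
    return text.replace("<", "").replace(">", "")


if __name__ == "__main__":
    print(clean_candidates("astronaut dog", [
        "astronaut dog",                                             # echoes the prompt
        "astronaut dog, cinematic lighting, artstation.com trending",
        "astronaut dog riding a rocket -",                           # dangling separator
    ]))
```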
diff --git a/spaces/ds520/bingo/src/lib/bots/bing/tts.ts b/spaces/ds520/bingo/src/lib/bots/bing/tts.ts
deleted file mode 100644
index cd10b7d1d7581bf9cf46ff6755fcca550c558c9b..0000000000000000000000000000000000000000
--- a/spaces/ds520/bingo/src/lib/bots/bing/tts.ts
+++ /dev/null
@@ -1,82 +0,0 @@
-import { sleep } from './utils'
-
-const synth = window.speechSynthesis
-
-export class TTS {
- currentText = ''
- speakText = ''
- private controller = new AbortController()
- speaking = false
- get isSpeaking() {
- return this.speaking
- }
- finished = false
- constructor() {}
- abort = () => {
- this.controller.abort()
- }
-
- reset = () => {
- this.speaking = false
- this.finished = true
- this.currentText = ''
- this.speakText = ''
- this.abort()
- }
-
- speak = (text: string) => {
- if (!synth || text?.trim()?.length < 2) {
- return
- }
- this.currentText = text.replace(/[^\u4e00-\u9fa5_a-zA-Z0-9,。?,:;\.,:]+/g, '')
- this.finished = false
- this.loop()
- }
-
- private async doSpeak() {
- return new Promise((resolve) => {
- const endIndex = this.finished ? this.currentText.length :
- Math.max(
- this.currentText.lastIndexOf('。'),
- this.currentText.lastIndexOf(';'),
- this.currentText.lastIndexOf('、'),
- this.currentText.lastIndexOf('?'),
- this.currentText.lastIndexOf('\n')
- )
- const startIndex = this.speakText.length ? Math.max(0, this.currentText.lastIndexOf(this.speakText) + this.speakText.length) : 0
-
- if (startIndex >= endIndex) {
- return resolve(true)
- }
- const text = this.currentText.slice(startIndex, endIndex)
- this.speakText = text
- const utterThis = new SpeechSynthesisUtterance(text)
- this.controller.signal.onabort = () => {
- synth.cancel()
- this.finished = true
- resolve(false)
- }
-
- utterThis.onend = function (event) {
- resolve(true)
- }
-
- utterThis.onerror = function (event) {
- resolve(false)
- }
-
- const voice = synth.getVoices().find(v => v.name.includes('Microsoft Yunxi Online')) ?? null
- utterThis.voice = voice
- synth.speak(utterThis)
- })
- }
-
- private async loop() {
- if (this.speaking) return
- this.speaking = true
- while(!this.finished) {
- await Promise.all([sleep(1000), this.doSpeak()])
- }
- this.speaking = false
- }
-}
diff --git a/spaces/dvc890/go-chatgpt-api/Dockerfile b/spaces/dvc890/go-chatgpt-api/Dockerfile
deleted file mode 100644
index 92d9f513ea525f262e7a6dc00fd0803791b662fb..0000000000000000000000000000000000000000
--- a/spaces/dvc890/go-chatgpt-api/Dockerfile
+++ /dev/null
@@ -1,39 +0,0 @@
-# FROM linweiyuan/chatgpt-proxy-server-warp
-
-# ENV SUDO_USER_NAME dvc890
-
-# WORKDIR /app
-
-# RUN pacman -Sy --needed --noconfirm go
-# ENV PATH="/usr/local/go/bin:${PATH}"
-
-# COPY . .
-# RUN go build -ldflags="-w -s" -o go-chatgpt-api main.go
-
-# # RUN apk add --no-cache tzdata
-# # ENV TZ=Asia/Shanghai
-# EXPOSE 8080
-# EXPOSE 9515
-# EXPOSE 40000
-# EXPOSE 65535
-
-# ENV CHATGPT_PROXY_SERVER=http://localhost:9515
-# ENV GO_CHATGPT_API_PROXY=socks5://0.0.0.0:65535
-# ENV LOG_LEVEL=OFF
-
-# RUN mkdir -p /var/lib/cloudflare-warp
-
-# CMD ["bash", "-c", "/bin/bash /run.sh & sleep 3 && exec /app/go-chatgpt-api"]
-
-FROM golang:alpine AS builder
-WORKDIR /app
-COPY . .
-RUN go build -ldflags="-w -s" -o go-chatgpt-api main.go
-
-FROM alpine
-WORKDIR /app
-COPY --from=builder /app/go-chatgpt-api .
-RUN apk add --no-cache tzdata
-ENV TZ=Asia/Shanghai
-EXPOSE 8080
-CMD ["/app/go-chatgpt-api"]
\ No newline at end of file
diff --git a/spaces/emc348/faces-through-time/configs/__init__.py b/spaces/emc348/faces-through-time/configs/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/emc348/faces-through-time/criteria/l2_loss.py b/spaces/emc348/faces-through-time/criteria/l2_loss.py
deleted file mode 100644
index 098f3f24d3ae5b74d177aa0f81e80486a56e6acb..0000000000000000000000000000000000000000
--- a/spaces/emc348/faces-through-time/criteria/l2_loss.py
+++ /dev/null
@@ -1,14 +0,0 @@
-import torch
-import torchvision
-
-l2_criterion = torch.nn.MSELoss(reduction="mean")
-
-
-def l2_loss(real_images, generated_images, gray=False):
- if gray:
- real_images = torchvision.transforms.functional.rgb_to_grayscale(real_images)
- generated_images = torchvision.transforms.functional.rgb_to_grayscale(
- generated_images
- )
- loss = l2_criterion(real_images, generated_images)
- return loss
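A small usage sketch for the helper above: random NCHW tensors stand in for image batches, and the import path is assumed from the file location (`criteria/l2_loss.py`) with the repository root on `sys.path`.

```python
# Usage sketch for l2_loss with dummy image batches (import path assumed).
import torch

from criteria.l2_loss import l2_loss

if __name__ == "__main__":
    real = torch.rand(1, 3, 64, 64)        # stand-in "real" batch, NCHW
    generated = torch.rand(1, 3, 64, 64)   # stand-in "generated" batch
    print(l2_loss(real, generated).item())             # plain RGB MSE
    print(l2_loss(real, generated, gray=True).item())  # MSE after grayscale conversion
```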
diff --git a/spaces/empy-ai/Token-classification/core/models/__init__.py b/spaces/empy-ai/Token-classification/core/models/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git "a/spaces/erbanku/gpt-academic/crazy_functions/\344\273\243\347\240\201\351\207\215\345\206\231\344\270\272\345\205\250\350\213\261\346\226\207_\345\244\232\347\272\277\347\250\213.py" "b/spaces/erbanku/gpt-academic/crazy_functions/\344\273\243\347\240\201\351\207\215\345\206\231\344\270\272\345\205\250\350\213\261\346\226\207_\345\244\232\347\272\277\347\250\213.py"
deleted file mode 100644
index e57f80f1d45bd3ec23837253848f7b32a5ccd751..0000000000000000000000000000000000000000
--- "a/spaces/erbanku/gpt-academic/crazy_functions/\344\273\243\347\240\201\351\207\215\345\206\231\344\270\272\345\205\250\350\213\261\346\226\207_\345\244\232\347\272\277\347\250\213.py"
+++ /dev/null
@@ -1,138 +0,0 @@
-import threading
-from request_llm.bridge_all import predict_no_ui_long_connection
-from toolbox import update_ui
-from toolbox import CatchException, write_results_to_file, report_execption
-from .crazy_utils import breakdown_txt_to_satisfy_token_limit
-
-def extract_code_block_carefully(txt):
- splitted = txt.split('```')
- n_code_block_seg = len(splitted) - 1
- if n_code_block_seg <= 1: return txt
- # For the remaining cases, strip the leading ``` and one trailing ```
- txt_out = '```'.join(splitted[1:-1])
- return txt_out
-
-
-
-def break_txt_into_half_at_some_linebreak(txt):
- lines = txt.split('\n')
- n_lines = len(lines)
- pre = lines[:(n_lines//2)]
- post = lines[(n_lines//2):]
- return "\n".join(pre), "\n".join(post)
-
-
-@CatchException
-def 全项目切换英文(txt, llm_kwargs, plugin_kwargs, chatbot, history, sys_prompt, web_port):
- # Step 1: clear the history to avoid overflowing the input
- history = []
-
- # Step 2: try to import the dependency; if it is missing, suggest how to install it
- try:
- import tiktoken
- except:
- report_execption(chatbot, history,
- a = f"解析项目: {txt}",
- b = f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade tiktoken```。")
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- return
-
- # Step 3: collect the files
- import time, glob, os, shutil, re
- os.makedirs('gpt_log/generated_english_version', exist_ok=True)
- os.makedirs('gpt_log/generated_english_version/crazy_functions', exist_ok=True)
- file_manifest = [f for f in glob.glob('./*.py') if ('test_project' not in f) and ('gpt_log' not in f)] + \
- [f for f in glob.glob('./crazy_functions/*.py') if ('test_project' not in f) and ('gpt_log' not in f)]
- # file_manifest = ['./toolbox.py']
- i_say_show_user_buffer = []
-
- # Step 4: display something right away so the UI does not feel stuck
- for index, fp in enumerate(file_manifest):
- # if 'test_project' in fp: continue
- with open(fp, 'r', encoding='utf-8', errors='replace') as f:
- file_content = f.read()
- i_say_show_user =f'[{index}/{len(file_manifest)}] 接下来请将以下代码中包含的所有中文转化为英文,只输出转化后的英文代码,请用代码块输出代码: {os.path.abspath(fp)}'
- i_say_show_user_buffer.append(i_say_show_user)
- chatbot.append((i_say_show_user, "[Local Message] 等待多线程操作,中间过程不予显示."))
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
-
- # Step 5: truncation and processing under the token limit
- MAX_TOKEN = 3000
- from request_llm.bridge_all import model_info
- enc = model_info["gpt-3.5-turbo"]['tokenizer']
- def get_token_fn(txt): return len(enc.encode(txt, disallowed_special=()))
-
-
- # Step 6: the per-file worker function
- mutable_return = [None for _ in file_manifest]
- observe_window = [[""] for _ in file_manifest]
- def thread_worker(fp,index):
- if index > 10:
- time.sleep(60)
- print('Openai 限制免费用户每分钟20次请求,降低请求频率中。')
- with open(fp, 'r', encoding='utf-8', errors='replace') as f:
- file_content = f.read()
- i_say_template = lambda fp, file_content: f'接下来请将以下代码中包含的所有中文转化为英文,只输出代码,文件名是{fp},文件代码是 ```{file_content}```'
- try:
- gpt_say = ""
- # split the source file into chunks
- file_content_breakdown = breakdown_txt_to_satisfy_token_limit(file_content, get_token_fn, MAX_TOKEN)
- for file_content_partial in file_content_breakdown:
- i_say = i_say_template(fp, file_content_partial)
- # # ** gpt request **
- gpt_say_partial = predict_no_ui_long_connection(inputs=i_say, llm_kwargs=llm_kwargs, history=[], sys_prompt=sys_prompt, observe_window=observe_window[index])
- gpt_say_partial = extract_code_block_carefully(gpt_say_partial)
- gpt_say += gpt_say_partial
- mutable_return[index] = gpt_say
- except ConnectionAbortedError as token_exceed_err:
- print('至少一个线程任务Token溢出而失败', token_exceed_err)
- except Exception as e:
- print('至少一个线程任务意外失败', e)
-
- # Step 7: start all worker threads at the same time
- handles = [threading.Thread(target=thread_worker, args=(fp,index)) for index, fp in enumerate(file_manifest)]
- for h in handles:
- h.daemon = True
- h.start()
- chatbot.append(('开始了吗?', f'多线程操作已经开始'))
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
- # Step 8: poll in a loop until every thread has finished
- cnt = 0
- while True:
- cnt += 1
- time.sleep(0.2)
- th_alive = [h.is_alive() for h in handles]
- if not any(th_alive): break
- # nicer visual feedback in the UI
- observe_win = []
- for thread_index, alive in enumerate(th_alive):
- observe_win.append("[ ..."+observe_window[thread_index][0][-60:].replace('\n','').replace('```','...').replace(' ','.').replace(' ','.....').replace('$','.')+"... ]")
- stat = [f'执行中: {obs}\n\n' if alive else '已完成\n\n' for alive, obs in zip(th_alive, observe_win)]
- stat_str = ''.join(stat)
- chatbot[-1] = (chatbot[-1][0], f'多线程操作已经开始,完成情况: \n\n{stat_str}' + ''.join(['.']*(cnt%10+1)))
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
-
- # Step 9: write the results to files
- for index, h in enumerate(handles):
- h.join() # join is not strictly needed here; all threads have already finished
- fp = file_manifest[index]
- gpt_say = mutable_return[index]
- i_say_show_user = i_say_show_user_buffer[index]
-
- where_to_relocate = f'gpt_log/generated_english_version/{fp}'
- if gpt_say is not None:
- with open(where_to_relocate, 'w+', encoding='utf-8') as f:
- f.write(gpt_say)
- else: # failure
- shutil.copyfile(file_manifest[index], where_to_relocate)
- chatbot.append((i_say_show_user, f'[Local Message] 已完成{os.path.abspath(fp)}的转化,\n\n存入{os.path.abspath(where_to_relocate)}'))
- history.append(i_say_show_user); history.append(gpt_say)
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
- time.sleep(1)
-
- # Step 10: back up a report file
- res = write_results_to_file(history)
- chatbot.append(("生成一份任务执行报告", res))
- yield from update_ui(chatbot=chatbot, history=history) # refresh the UI
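The plugin above follows a reusable pattern: split each file's text into chunks that fit a token budget, run one worker thread per file that processes its chunks sequentially, and collect the results by index. Below is a simplified, dependency-free sketch of that pattern; the word-count "tokenizer" and the `handle_chunk` callback are stand-ins for the project's tiktoken-based counter and LLM call, not its actual APIs.

```python
# Token-budgeted chunking plus one worker thread per file (illustrative stand-ins).
import threading

MAX_TOKENS = 3000


def count_tokens(text: str) -> int:
    return len(text.split())  # crude stand-in for a real tokenizer


def breakdown(text: str, limit: int = MAX_TOKENS) -> list:
    """Greedy line-based split so each chunk stays under the token limit."""
    chunks, current = [], []
    for line in text.splitlines(keepends=True):
        if current and count_tokens("".join(current) + line) > limit:
            chunks.append("".join(current))
            current = []
        current.append(line)
    if current:
        chunks.append("".join(current))
    return chunks


def process_files(files: dict, handle_chunk) -> dict:
    results = [None] * len(files)

    def worker(index, text):
        results[index] = "".join(handle_chunk(chunk) for chunk in breakdown(text))

    threads = [threading.Thread(target=worker, args=(i, text), daemon=True)
               for i, text in enumerate(files.values())]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return dict(zip(files.keys(), results))


if __name__ == "__main__":
    out = process_files({"a.py": "print('hi')\n" * 10}, handle_chunk=str.upper)
    print(out["a.py"][:24])
```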
diff --git a/spaces/falterWliame/Face_Mask_Detection/Evermotion Archinteriors Vol 29 Free Download [REPACK].md b/spaces/falterWliame/Face_Mask_Detection/Evermotion Archinteriors Vol 29 Free Download [REPACK].md
deleted file mode 100644
index 6e61d19c7dda5f131c5df811e00720be5fa37db6..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Evermotion Archinteriors Vol 29 Free Download [REPACK].md
+++ /dev/null
@@ -1,8 +0,0 @@
-
-the data administrator is evermotion sc, ul. przdzalniana 8, 15-688 bialystok. personal data will be processed for promotional purposes by the newsletter. personal data will not be shared with other entities. every registered user has the right to access his/her personal data and correct it. collection of data is voluntary but necessary to achieve the said objectives.
-evermotion archmodels vol. 176 includes tree models, and also includes models with different shapes, sizes, colours, type, and also weight to meet user requirements. all the files are the best quality. you can also download easiestsoft movie editor.
-evermotion archinteriors vol 29 free download Download ››› https://urlca.com/2uDcqH
-archinteriors vol. 174 is a popular and widely used package which includes powerful, advanced and high-quality 3d models. the application makes the designer process simpler and quicker as they can headstart their projects by just importing the 3d model files, make some changes and achieve the desired results. this collection includes high-quality trees models with all the textures and materials. all objects are ready to use in your visualizations. you can also download cyberlink powerdirector ultimate 2020.
-we also placed free textures collections in this category. those are textures4ever they may seem a bit outdated sometimes, but still you get for free hundreds of textures that you can use in your private or commercial projects and we believe that many cg artists will find them useful. textures collections include also trees silhouettes (2d trees) that can be mapped onto planes in 3d software they are useful when you need some distant vegetation or you want to use them for shaping shadows (just place light source and a plane with tree silhouette and you can have great tree shadow casting on your surfaces at minimal performance hit). we included 736 high resolution alpha trees in png format. all alphas were made from rendered 3d trees. textures are 3500 pixels by 3500 pixels.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/falterWliame/Face_Mask_Detection/Lareinadelsurtemporada2completaportorrentversion.md b/spaces/falterWliame/Face_Mask_Detection/Lareinadelsurtemporada2completaportorrentversion.md
deleted file mode 100644
index 9f4f3b1abc22b2c8ae0fb27f495fcde3edd6c16b..0000000000000000000000000000000000000000
--- a/spaces/falterWliame/Face_Mask_Detection/Lareinadelsurtemporada2completaportorrentversion.md
+++ /dev/null
@@ -1,9 +0,0 @@
-
-at 11:32 am. frases cortas para seras queen isaura amazon. https://trello.com/c/j7FkcBmu/28-lareinadelsurtemporada2completaportorrentversion-detsal-nedrwarl. Results 1 - 16 of 422. https://melaninterest.com/pin/lareinadelsurtemporada2completaportorrentversion/ https://melaninterest.com/pin/lareinadelsurtemporada2completaportorrentversion/ https://coub.com/stories/2281526-link-lareinadelsurtemporada2completaportorrentversion-detsal https://melaninterest.com/pin/soundspectrum-aeon-platinum-full-18-geanelee/
-lareinadelsurtemporada2completaportorrentversion Download ⚹ https://urlca.com/2uDcjL
-https://trello.com/c/j7FkcBmu/28-lareinadelsurtemporada2completaportorrentversion-detsal-nedrwarl. https://melaninterest.com/pin/lareinadelsurtemporada2completaportorrentversion/ https://coub.com/stories/2281526-link-lareinadelsurtemporada2completaportorrentversion-detsal https://melaninterest.com/pin/soundspectrum-aeon-platinum-full-18-geanelee/ https://coub.com/stories/3034424-exclusive-lareinadelsurtemporada2completaportorrentversion
-From https://trello.com/c/j7FkcBmu/28-lareinadelsurtemporada2completaportorrentversion-detsal-nedrwarl https://coub.com/stories/2281526-link-lareinadelsurtemporada2completaportorrentversion-detsal https://coub.com/stories/3034424-exclusive-lareinadelsurtemporada2completaportorrentversion https://coub.com/stories/3034424-exclusive-lareinadelsurtemporada2completaportorrentversion. .
-https://trello.com/c/j7FkcBmu/28-lareinadelsurtemporada2completaportorrentversion-detsal-nedrwarl https://melaninterest.com/pin/lareinadelsurtemporada2completaportorrentversion/ https://melaninterest.com/pin/lareinadelsurtemporada2completaportorrentversion/ https://coub.
-
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/AZbul Oyunu Kelime Daarcnz Gelitiren Sz Oyunu ve Krossvord.md b/spaces/fatiXbelha/sd/AZbul Oyunu Kelime Daarcnz Gelitiren Sz Oyunu ve Krossvord.md
deleted file mode 100644
index 3d82c1ece8e045c3f4adfe6bbdf8b0b6d6315627..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/AZbul Oyunu Kelime Daarcnz Gelitiren Sz Oyunu ve Krossvord.md
+++ /dev/null
@@ -1,105 +0,0 @@
-
-Download the AZbul Game: How to Play This Word Puzzle Game?
- If you love word puzzle games, we have a great recommendation for you: AZbul. This game offers a word-game experience that is both fun and mind-opening. Keep reading this article to download AZbul and learn how to play it.
-azbul oyunu indir Download File ► https://urllie.com/2uNzRH
- What Is the AZbul Game?
- AZbul is a 100% Turkish word game prepared especially for fans of arrowword and crossword puzzles. In this game, you find the hidden words and solve the puzzle by swiping the given letters in the correct order. In each level you will find words from different categories, for example animals, countries, cities, foods, sports, plants, and more.
- Features of the AZbul Game
- Some of the features that set AZbul apart from other word games are:
-
-It contains more than 10,000 levels and thousands of words.
-It offers difficulty levels that will make you say WOW.
-You can earn extra gold with a free daily wheel-of-fortune spin.
-You can earn more money by finding bonus words.
-You can spend gold on hints or earn gold by watching videos.
-You can also play without an internet connection.
-It will improve your memory, and you will learn new words.
-
- Benefits of the AZbul Game
- Playing AZbul is not only fun; it also has many benefits, including:
-
-It expands your vocabulary and improves your knowledge of the language.
-It exercises your mind and improves your ability to focus.
-It reduces stress and helps you relax.
-You can make good use of your free time.
-You can have a fun time playing with family and friends.
-
- How to Download the AZbul Game?
- There are two ways to download AZbul: from the Google Play Store or via an APK file. Both methods have advantages and disadvantages; which one you prefer is up to you.
- Downloading from the Google Play Store
- This is the easiest and safest method. Search for AZbul in the Google Play Store and tap the download button; the game will be installed on your device automatically. The advantage of this method is that you receive updates automatically. The disadvantage is that the game is not available in the Google Play Store in some countries.
- Downloading via an APK File
- This is a more flexible, alternative method. An APK file is an application's installation file. You can find and download the AZbul APK file on the internet. Before doing so, however, you must enable installing apps from unknown sources in your device settings; otherwise, your device will not accept the APK file. The advantage of this method is that you can download the game whenever and wherever you want. The disadvantage is that you have to update the game manually and you risk malware when downloading from untrustworthy sources.
- How to Play the AZbul Game?
- Playing AZbul is very simple and fun. Keep reading to learn about the game's basic rules, levels, hints, tips, and tactics.
- Basic Rules of the Game
- In the game you find the hidden words and solve the puzzle by swiping the given letters in the correct order. Each level contains words from different categories, such as animals, countries, cities, foods, sports, plants, and more. You earn gold for every word you find and can spend that gold on hints. If you get stuck, you can tap the hint button to reveal a letter or watch a video to shuffle the letters. As you progress, the levels get harder and you will need to find more words.
- Levels and Hints
- The game has more than 10,000 levels, each with a different degree of difficulty. In some levels you will find the words easily, while others will make you think hard. As you progress, new categories unlock and you learn more words. You can spend gold on hints or earn gold by watching videos, and you can also get extra gold by finding bonus words.
-azbul söz oyunu ve krossvord indir
-azbul kelime bulmaca oyunu apk indir
-azbul söz oyunu internetsiz indir
-azbul kelime oyunu ücretsiz indir
-azbul söz oyunu android indir
-azbul kelime bulmaca oyunu hileli indir
-azbul söz oyunu güncel sürüm indir
-azbul kelime oyunu mod apk indir
-azbul söz oyunu pc indir
-azbul kelime bulmaca oyunu son sürüm indir
-azbul söz oyunu ios indir
-azbul kelime oyunu online indir
-azbul söz oyunu tablet indir
-azbul kelime bulmaca oyunu yükle indir
-azbul söz oyunu google play indir
-azbul kelime oyunu yeni seviyeler indir
-azbul söz oyunu windows 10 indir
-azbul kelime bulmaca oyunu nasıl indirilir
-azbul söz oyunu bilgisayar indir
-azbul kelime oyunu altın hilesi indir
-azbul söz oyunu mac indir
-azbul kelime bulmaca oyunu ipucu hilesi indir
-azbul söz oyunu laptop indir
-azbul kelime bulmaca oyunu günlük çarkıfelek indir
-azbul söz oyunu telefon indir
-azbul kelime bulmaca oyunu wow seviyeleri indir
-azbul söz oyunu samsung indir
-azbul kelime bulmaca oyunu bonus kelimeleri indir
-azbul söz oyunu huawei indir
-azbul kelime bulmaca oyunu beyin jimnastikası indir
-azbul söz oyunu xiaomi indir
-azbul kelime bulmaca oyunu çengel ve çapraz bulmaca indir
-azbul söz oyunu oppo indir
-azbul kelime bulmaca oyunu türkçe kelimeleri indir
-azbul söz oyunu vivo indir
-azbul kelime bulmaca oyunu zor kelimeleri indir
-azbul söz oyunu realme indir
-azbul kelime bulmaca oyunu yeni kelimeleri keşfetme indir
-azbul söz oyunu oneplus indir
-azbul kelime bulmaca oyunu eğlenceli vakit geçirmek için indir
- Tips and Tactics for the Game: To succeed in the game, you can apply a few tips and tactics, including:
-
-Try to find more word combinations by swiping the letters in different directions.
-Keep the category in mind while looking for words. For example, in the animals category the word you are looking for should be the name of an animal.
-Don't miss the bonus words. Bonus words are words that are not in the puzzle but can be formed from the letters; finding them earns you extra gold.
-Use your gold wisely. Before spending gold on hints, try earning gold by watching videos or spinning the wheel of fortune.
-Play the game regularly. By playing every day you can both improve your skills and collect the daily bonuses.
-
- Frequently Asked Questions About the AZbul Game
- Below you can find answers to some common questions about the AZbul game.
- Is the AZbul Game Free?
- Yes, AZbul is completely free. You don't have to pay anything to download and play it. However, you can buy more gold or hints through in-app purchases.
- Can AZbul Be Played Offline?
- Yes, AZbul can also be played without an internet connection. After downloading the game, you can keep playing even when you are offline. When you are online, however, you can receive game updates and earn gold by watching videos.
- Is the AZbul Game in Turkish?
- Yes, AZbul is 100% Turkish. All the words, letters, and menus in the game are in Turkish, and the game also includes categories related to Turkish culture and geography.
- Is the AZbul Game Safe?
- Yes, AZbul is safe. The game is listed on the Google Play Store and was made by a trustworthy developer. It does not harm or access your device or personal data, so you can download and play it with confidence.
- Who Is the Developer of the AZbul Game?
- The developer of AZbul is GameX Studio, a company based in Turkey that makes mobile games. Besides AZbul, GameX Studio has developed other popular games such as Word Connect, Word Cross, Word Search, and Word Swipe.
- In this article, we explained how to download AZbul and how to play it. We hope it was helpful. You can use the links below to download the game:
-
- Once you have downloaded AZbul, you can start enjoying the word puzzle game. While playing, you will have fun and train your mind. You can also share the game with friends and family and compete with them. AZbul is a game that word-game fans should not miss. Download it now and start playing!
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/fatiXbelha/sd/Download APK Traffic Racer The Ultimate Endless Racing Game.md b/spaces/fatiXbelha/sd/Download APK Traffic Racer The Ultimate Endless Racing Game.md
deleted file mode 100644
index e13989cc197a7ea6860b558719ce145fed9d2b05..0000000000000000000000000000000000000000
--- a/spaces/fatiXbelha/sd/Download APK Traffic Racer The Ultimate Endless Racing Game.md
+++ /dev/null
@@ -1,99 +0,0 @@
-
-Download APK Traffic Racer: A Guide to the Ultimate Endless Racing Game
-Do you love speed and adrenaline? Do you want to experience the thrill of driving through highway traffic, dodging cars, trucks, buses, and SUVs? Do you want to customize your car, upgrade its performance, and compete with other players around the world? If you answered yes to any of these questions, then you should download APK Traffic Racer, a milestone in the genre of endless arcade racing.
-download apk traffic racer Download File ★★★ https://urllie.com/2uNAkd
-Traffic Racer is a game developed by SK Games that can be downloaded for iOS or Android devices. It is one of the most popular and addictive racing games on the market, with over 100 million downloads and 6.3 million reviews on Google Play. In this game, you can drive your car through highway traffic, earn cash, upgrade your car, and buy new ones. You can also try different game modes, environments, and challenges to test your skills and have fun.
-In this article, we will show you how to download APK Traffic Racer for Android devices, how to install and play it on your device, what are the features of the game, and what are some tips and tricks to help you master it. So buckle up and get ready for some high-speed action!
- How to download APK Traffic Racer for Android devices
-If you want to download APK Traffic Racer for Android devices, you have two options. You can either download it from Google Play Store or from a third-party website. Here are the steps for both methods:
-download apk traffic racer mod unlimited money
-download apk traffic racer latest version
-download apk traffic racer for android
-download apk traffic racer hack
-download apk traffic racer offline
-download apk traffic racer 3d
-download apk traffic racer game
-download apk traffic racer 2
-download apk traffic racer mod apk
-download apk traffic racer unlimited coins
-download apk traffic racer free
-download apk traffic racer pro
-download apk traffic racer car racing game
-download apk traffic racer mod menu
-download apk traffic racer online
-download apk traffic racer 2023
-download apk traffic racer cheats
-download apk traffic racer full version
-download apk traffic racer for pc
-download apk traffic racer modded
-download apk traffic racer no ads
-download apk traffic racer premium
-download apk traffic racer realistic driving simulator
-download apk traffic racer unlocked all cars
-download apk traffic racer with unlimited nitro
-download apk traffic racer extreme racing game
-download apk traffic racer new update
-download apk traffic racer old version
-download apk traffic racer for ios
-download apk traffic racer cracked
-download apk traffic racer best racing game
-download apk traffic racer high graphics
-download apk traffic racer mega mod
-download apk traffic racer original
-download apk traffic racer super fast cars
-download apk traffic racer tips and tricks
-download apk traffic racer unlimited everything
-download apk traffic racer vip mod
-download apk traffic racer without internet connection
-download apk traffic racer 4k hd graphics
-
-Download from Google Play Store: This is the easiest and safest way to download APK Traffic Racer for Android devices. All you need to do is open Google Play Store on your device, search for Traffic Racer, and tap on Install. The game will be downloaded and installed automatically on your device. You can also use this link to go directly to the game page on Google Play Store.
-Download from a third-party website: This is an alternative way to download APK Traffic Racer for Android devices if you cannot access Google Play Store or if you want to get an older version of the game. However, this method is riskier as you may encounter malware or viruses that can harm your device. Therefore, you should only download from trusted and reputable websites that offer safe and verified APK files. To download from a third-party website, you need to follow these steps:
- Go to a website that offers APK files for Traffic Racer. For example, you can use this link to go to EmulatorPC.com.
- Find the download button or link for APK Traffic Racer and click on it.
- Wait for the download to finish and locate the APK file on your device.
-
-
-
- How to install and play APK Traffic Racer on your device
-Once you have downloaded APK Traffic Racer on your device, you need to install it before you can play it. Here are the steps for installing APK Traffic Racer on your device:
-
-Go to your device settings and enable Unknown Sources. This will allow you to install apps from sources other than Google Play Store.
-Locate the APK file that you downloaded on your device and tap on it.
-Follow the instructions on the screen to install APK Traffic Racer on your device. Once the installation is complete, you can launch APK Traffic Racer from your app drawer or home screen.
-
-Now that you have installed APK Traffic Racer on your device, you can start playing it and enjoy the endless racing game. Here are the basic controls and gameplay of APK Traffic Racer:
-
-Controls: You can control your car by tilting your device to steer left or right, tapping the gas pedal to accelerate, and tapping the brake pedal to slow down. You can also change the control mode from tilt to touch or swipe in the settings menu.
-Gameplay: You can choose from different game modes, such as Endless, Two-Way, Time Trial, Police Chase, and Free Ride. You can also select different environments, such as Suburb, Desert, Snowy, Rainy, and City Night. Your goal is to drive as fast as possible through the traffic without crashing into other vehicles. The faster you drive, the more points you get. You can also get bonus points by driving in the opposite direction in two-way mode, overtaking other cars closely, or driving over 100 km/h.
-
- The features of Traffic Racer game
-Traffic Racer is not just a simple racing game. It has many features that make it stand out from other games in the genre. Here are some of the features of Traffic Racer game that you can enjoy:
-
-Stunning 3D graphics and realistic car handling: Traffic Racer has amazing 3D graphics that create a realistic and immersive driving experience. You can see the details of the cars, the environments, the traffic, and the weather effects. You can also feel the difference between different car models and their handling, such as speed, acceleration, braking, and steering.
-40+ different cars to choose from and customize: Traffic Racer offers a wide range of cars to suit your preferences and style. You can choose from sedans, hatchbacks, sports cars, trucks, buses, and more. You can also customize your car by changing its color, wheels, paint job, and engine. You can upgrade your car's performance by increasing its speed, acceleration, handling, and braking.
-5 detailed environments and 5 game modes to enjoy: Traffic Racer has 5 different environments that each have their own characteristics and challenges. You can drive in Suburb, Desert, Snowy, Rainy, or City Night. Each environment has different weather conditions, traffic density, and scenery. You can also try 5 different game modes that each have their own objectives and rules. You can play in Endless mode where you drive as long as you can without crashing; Two-Way mode where you drive in both directions; Time Trial mode where you race against the clock; Police Chase mode where you evade the cops; or Free Ride mode where you explore the map freely.
-Rich types of NPC traffic and online leaderboards: Traffic Racer has a variety of NPC traffic that makes the game more realistic and challenging. You can encounter cars, trucks, buses, SUVs, vans, motorcycles, and more on the road. You have to avoid them or overtake them carefully to avoid collisions. You can also compete with other players around the world on online leaderboards. You can see your rank and score for each game mode and environment. You can also compare your stats with your friends and other players.
-
- The tips and tricks for Traffic Racer game
-Traffic Racer is a fun and addictive game that will keep you entertained for hours. However, it is not an easy game to master. You need to have good reflexes, skills, and strategies to succeed in this game. Here are some tips and tricks for Traffic Racer game that will help you improve your performance and score:
-
-How to earn more coins and upgrade your car: Coins are the currency of the Traffic Racer game, used to buy new cars or upgrade your existing ones. You can earn coins by playing the game modes or watching ads. The amount of coins you earn depends on your score, speed, distance traveled, and bonuses. You can also get extra coins by completing achievements or daily missions. Spend your coins wisely and upgrade your car regularly, focusing on speed, acceleration, handling, and braking, as these affect your performance and score. You should also choose a car that suits your play style and preference.
-How to string close shaves and get bonus scores: Close shaves are when you pass very close to another vehicle without crashing. They are one of the best ways to increase your score and earn more coins. You can get bonus points for each close shave you make, and the bonus increases as you string more close shaves in a row. However, close shaves are also very risky and require good timing and precision. You should only attempt close shaves when you are confident and have enough space to maneuver. You should also avoid close shaves when you are driving in the opposite direction or in police chase mode, as these will increase the chances of crashing.
-How to challenge yourself in time trial mode: Time trial mode is one of the most difficult and rewarding game modes in Traffic Racer. In this mode, you have to drive as fast as possible and reach checkpoints before the time runs out. You can earn extra time by driving faster than 100 km/h or by making close shaves. However, you also have to deal with heavy traffic and tight turns that can slow you down or make you crash. To challenge yourself in time trial mode, you should choose a fast and agile car, such as a sports car or a motorcycle. You should also use the nitro boost wisely, as it can give you a speed boost but also make you lose control. You should also try different environments and routes to test your skills and adaptability.
-How to watch ads for free coins: Watching ads is another way to earn free coins in Traffic Racer game. You can watch ads by tapping on the video icon on the main menu or on the game over screen. You can watch up to 10 ads per day and earn 500 coins for each ad. However, watching ads can be boring and time-consuming, so you should only do it when you really need some extra coins or when you have nothing else to do.
-
- Conclusion: Summarize the main points and invite the reader to try the game
-Traffic Racer is an amazing endless racing game that will keep you hooked for hours. It has stunning 3D graphics, realistic car handling, 40+ different cars, 5 detailed environments, 5 game modes, rich types of NPC traffic, and online leaderboards. It is easy to download APK Traffic Racer for Android devices from Google Play Store or from a third-party website. It is also easy to install and play APK Traffic Racer on your device with simple controls and gameplay. However, it is not easy to master Traffic Racer game without some tips and tricks that will help you earn more coins, upgrade your car, string close shaves, get bonus scores, challenge yourself in time trial mode, and watch ads for free coins.
-If you are looking for a fun and addictive racing game that will test your skills and reflexes, then you should download APK Traffic Racer today and enjoy the ultimate endless racing game!
- FAQs: Answer some common questions about the game
-
-Question Answer
-What is the size of APK Traffic Racer? The size of APK Traffic Racer varies depending on the device and version. The latest version (3.3) on Google Play Store is about 60 MB.
-What are the minimum requirements for APK Traffic Racer? The minimum requirements for APK Traffic Racer are Android 4.1 or higher and at least 1 GB of RAM.
-Is APK Traffic Racer free to play? Yes, APK Traffic Racer is free to play with optional in-app purchases.
-Can I play APK Traffic Racer offline? Yes, you can play APK Traffic Racer offline without an internet connection.
-Can I play APK Traffic Racer with friends? No, APK Traffic Racer does not have a multiplayer mode. However, you can compete with other players on online leaderboards.
-
401be4b1e0
-
-
\ No newline at end of file
diff --git a/spaces/fclong/summary/fengshen/examples/classification/finetune_classification.sh b/spaces/fclong/summary/fengshen/examples/classification/finetune_classification.sh
deleted file mode 100644
index 993071ceb0ceeb44c0bf887abcdbc0c9f982c4d5..0000000000000000000000000000000000000000
--- a/spaces/fclong/summary/fengshen/examples/classification/finetune_classification.sh
+++ /dev/null
@@ -1,75 +0,0 @@
-#!/bin/bash
-#SBATCH --job-name=slurm-test # create a short name for your job
-#SBATCH --nodes=1 # node count
-#SBATCH --ntasks=1 # total number of tasks across all nodes
-#SBATCH --cpus-per-task=2 # cpu-cores per task (>1 if multi-threaded tasks)
-#SBATCH --mem-per-cpu=16G # memory per cpu-core (4G is default)
-#SBATCH --gres=gpu:1 # number of gpus per node
-#SBATCH --mail-type=ALL # send email when job begins, ends or failed etc.
-
-
-
-MODEL_TYPE=fengshen-roformer
-PRETRAINED_MODEL_PATH=IDEA-CCNL/Zhouwenwang-Unified-110M
-
-ROOT_PATH=cognitive_comp
-TASK=tnews
-
-DATA_DIR=/$ROOT_PATH/yangping/data/ChineseCLUE_DATA/${TASK}_public/
-CHECKPOINT_PATH=/$ROOT_PATH/yangping/checkpoints/modelevaluation/tnews/
-OUTPUT_PATH=/$ROOT_PATH/yangping/nlp/modelevaluation/output/predict.json
-
-DATA_ARGS="\
- --data_dir $DATA_DIR \
- --train_data train.json \
- --valid_data dev.json \
- --test_data test1.1.json \
- --train_batchsize 32 \
- --valid_batchsize 128 \
- --max_length 128 \
- --texta_name sentence \
- --label_name label \
- --id_name id \
- "
-
-MODEL_ARGS="\
- --learning_rate 0.00002 \
- --weight_decay 0.1 \
- --num_labels 15 \
- "
-
-MODEL_CHECKPOINT_ARGS="\
- --monitor val_acc \
- --save_top_k 3 \
- --mode max \
- --every_n_train_steps 100 \
- --save_weights_only True \
- --dirpath $CHECKPOINT_PATH \
- --filename model-{epoch:02d}-{val_acc:.4f} \
- "
-
-TRAINER_ARGS="\
- --max_epochs 7 \
- --gpus 1 \
- --check_val_every_n_epoch 1 \
- --val_check_interval 100 \
- --default_root_dir ./log/ \
- "
-
-
-options=" \
- --pretrained_model_path $PRETRAINED_MODEL_PATH \
- --output_save_path $OUTPUT_PATH \
- --model_type $MODEL_TYPE \
- $DATA_ARGS \
- $MODEL_ARGS \
- $MODEL_CHECKPOINT_ARGS \
- $TRAINER_ARGS \
- "
-
-DOCKER_PATH=/$ROOT_PATH/yangping/containers/pytorch21_06_py3_docker_image.sif
-SCRIPT_PATH=/$ROOT_PATH/yangping/nlp/Fengshenbang-LM/fengshen/examples/classification/finetune_classification.py
-
-python3 $SCRIPT_PATH $options
-# singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $DOCKER_PATH python3 $SCRIPT_PATH $options
-
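The shell script above only assembles command-line flags and hands them to finetune_classification.py, which is not included in this diff. As a rough, non-authoritative illustration of how such an entry point typically consumes these flags, here is a minimal argparse sketch reusing the flag names that appear in the script; the defaults, the build_parser helper, and everything else not visible above are assumptions, and the real Fengshenbang script will differ.

# Hypothetical sketch of an argument parser matching the flags passed by the shell
# script above; the actual finetune_classification.py is not shown in this diff.
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Fine-tune a text classifier (sketch)")
    # Data-related flags from DATA_ARGS
    parser.add_argument("--data_dir", type=str, required=True)
    parser.add_argument("--train_data", type=str, default="train.json")
    parser.add_argument("--valid_data", type=str, default="dev.json")
    parser.add_argument("--test_data", type=str, default="test1.1.json")
    parser.add_argument("--train_batchsize", type=int, default=32)
    parser.add_argument("--valid_batchsize", type=int, default=128)
    parser.add_argument("--max_length", type=int, default=128)
    # Model-related flags from MODEL_ARGS and the top-level options
    parser.add_argument("--pretrained_model_path", type=str, required=True)
    parser.add_argument("--model_type", type=str, default="fengshen-roformer")
    parser.add_argument("--learning_rate", type=float, default=2e-5)
    parser.add_argument("--weight_decay", type=float, default=0.1)
    parser.add_argument("--num_labels", type=int, default=15)
    parser.add_argument("--output_save_path", type=str, default="predict.json")
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    print(vars(args))  # The real script would hand these values to its data module and trainer.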
diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Dig Deep Mod APK for Free Latest Version 1.3.4 on AN1.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Dig Deep Mod APK for Free Latest Version 1.3.4 on AN1.md
deleted file mode 100644
index 89013c9e1f0d475d079b9553a25611d403368b9a..0000000000000000000000000000000000000000
--- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Dig Deep Mod APK for Free Latest Version 1.3.4 on AN1.md
+++ /dev/null
@@ -1,107 +0,0 @@
-
-Dig Deep APK Mod An1: A Fun and Addictive Mining Game
-Do you love mining games? Do you want to become a rich tycoon by digging for treasure and gems? If yes, then you should try Dig Deep APK Mod An1 , a fun and addictive mining game that will keep you entertained for hours.
-dig deep apk mod an1 DOWNLOAD ►►► https://gohhs.com/2uPsq8
-Dig Deep APK Mod An1 is a modified version of the original game, Dig Deep, developed by APKCombo . In this game, you can go underground and dig deep for treasure, but don't get stuck down the hole. You can also hire workers and upgrade your equipment to boost your digging empire. This idle miner game is the best digging game there is.
-In this article, we will tell you how to play Dig Deep APK Mod An1, what are its features, how to download and install it, and some frequently asked questions. Let's get started!
- How to Play Dig Deep APK Mod An1
-Dig for treasure and gems underground
-The main goal of Dig Deep APK Mod An1 is to dig deep and find as many treasure and gems as possible. You can use your finger to swipe on the screen and control the direction of your drill. You can also tap on the screen to speed up your drill. As you dig deeper, you will encounter different types of soil, rocks, and obstacles that will affect your digging speed and durability. You will also find various items such as coins, diamonds, chests, keys, bombs, magnets, etc. that will help you in your digging adventure.
-dig deep mod apk unlimited money an1
-dig deep mod apk latest version an1
-dig deep mod apk download for android an1
-dig deep mod apk free shopping an1
-dig deep mod apk no ads an1
-dig deep mod apk offline an1
-dig deep mod apk unlimited gems an1
-dig deep mod apk hack an1
-dig deep mod apk 1.3.4 an1
-dig deep mod apk 2023 an1
-dig deep mod apk android 1.3.4 an1
-dig deep mod apk android 11 an1
-dig deep mod apk android 10 an1
-dig deep mod apk android 9 an1
-dig deep mod apk android 8 an1
-dig deep mod apk android 7 an1
-dig deep mod apk android 6 an1
-dig deep mod apk android 5 an1
-dig deep mod apk android 4.4 an1
-dig deep mod apk android 4.3 an1
-dig deep mod apk rexdl an1
-dig deep mod apk revdl an1
-dig deep mod apk happymod an1
-dig deep mod apk apkpure an1
-dig deep mod apk apkmody an1
-dig deep mod apk apknite an1
-dig deep mod apk apkmirror an1
-dig deep mod apk apksfree an1
-dig deep mod apk apksfull an1
-dig deep mod apk apksmodded an1
-dig deep mod apk mob.org an1
-dig deep mod apk mobpark an1
-dig deep mod apk mobdisc an1
-dig deep mod apk mobdro an1
-dig deep mod apk mobirix an1
-dig deep mod apk moboplay an1
-dig deep mod apk mobogenie an1
-dig deep mod apk moboapk an1
-dig deep mod apk mobomarket an1
-dig deep mod apk mobojoy an1
-Hire workers and upgrade your equipment
-As you dig deeper, you will earn money that you can use to hire workers and upgrade your equipment. Workers will help you dig faster and collect more items. You can hire different types of workers such as miners, engineers, explorers, etc. Each worker has a different skill and cost. You can also upgrade your drill, battery, storage, etc. to improve your digging performance. Upgrading your equipment will also unlock new areas to explore.
-Collect rewards and bonuses
-Dig Deep APK Mod An1 also offers various rewards and bonuses that will make your digging experience more fun and rewarding. You can collect daily rewards, achievements, quests, etc. that will give you extra money, diamonds, items, etc. You can also spin the wheel of fortune or play the slot machine to win more prizes. You can also watch videos or complete offers to get more rewards.
- Features of Dig Deep APK Mod An1
-Unlimited money and diamonds
-One of the best features of Dig Deep APK Mod An1 is that it gives you unlimited money and diamonds. This means that you can hire as many workers and upgrade as many equipment as you want without worrying about the cost. You can also buy more items and boosters to enhance your digging experience. With unlimited money and diamonds, you can enjoy the game without any limitations.
-No ads and no root required
-Another great feature of Dig Deep APK Mod An1 is that it has no ads and no root required. This means that you can play the game without any interruptions or distractions from annoying ads. You can also play the game without rooting your device, which can be risky and complicated. Dig Deep APK Mod An1 is safe and easy to use.
-Easy and intuitive controls
-Dig Deep APK Mod An1 also has easy and intuitive controls that make the game simple and fun to play. You can control your drill with just one finger, swiping and tapping on the screen. You can also access the menu, shop, settings, etc. with just a few clicks. The game also has a tutorial that will guide you through the basics of the game.
-Colorful graphics and sound effects
-Dig Deep APK Mod An1 also has colorful graphics and sound effects that make the game more appealing and immersive. The game has a cartoon-like style that is suitable for all ages. The game also has various sound effects that match the actions and events in the game. You can hear the drill, the coins, the explosions, etc. The game also has a cheerful background music that will keep you motivated.
-Offline mode and cloud save
-Dig Deep APK Mod An1 also has an offline mode and a cloud save feature that make the game more convenient and accessible. You can play the game offline without an internet connection, which is great for when you are traveling or have no wifi. You can also save your progress on the cloud, which is great for when you want to switch devices or reinstall the game. You can sync your progress across different devices with just a few steps.
- How to Download and Install Dig Deep APK Mod An1
-Download the APK file from a trusted source
-The first step to download and install Dig Deep APK Mod An1 is to download the APK file from a trusted source. You can use the link below to download the latest version of Dig Deep APK Mod An1 from An1.com , a reliable website that offers modded games and apps.
-Download Dig Deep APK Mod An1
-Enable unknown sources on your device
-The second step to download and install Dig Deep APK Mod An1 is to enable unknown sources on your device. This will allow you to install apps from sources other than the Google Play Store. To enable unknown sources, follow these steps:
-
-Go to your device's settings.
-Tap on security or privacy.
-Find and enable unknown sources or allow installation from unknown sources.
-
-Install the APK file and launch the game
-The third step to download and install Dig Deep APK Mod An1 is to install the APK file and launch the game. To install the APK file, follow these steps:
-
-Locate the downloaded APK file on your device's file manager or downloads folder.
-Tap on the APK file and follow the instructions on the screen.
-Wait for the installation to complete.
-Tap on the game icon on your home screen or app drawer.
-Enjoy playing Dig Deep APK Mod An1!
-
- Conclusion
-Dig Deep APK Mod An1 is a fun and addictive mining game that will keep you entertained for hours. You can dig deep for treasure and gems, hire workers and upgrade your equipment, collect rewards and bonuses, and enjoy unlimited money and diamonds. You can also play the game without ads, without root, with easy controls, with colorful graphics, and with offline mode and cloud save. Dig Deep APK Mod An1 is the best digging game there is.
-If you want to try Dig Deep APK Mod An1, you can download it from An1.com , a trusted source of modded games and apps. Just follow the steps above to download and install it on your device. Then, start digging deep and become a rich tycoon!
- FAQs
-What is the difference between Dig Deep APK Mod An1 and the original version?
-The difference between Dig Deep APK Mod An1 and the original version is that Dig Deep APK Mod An1 has unlimited money and diamonds, no ads, no root required, and some other features that make the game more fun and easy to play. The original version of Dig Deep is available on the Google Play Store, but it has limited money and diamonds, ads, and some other restrictions that may affect your gaming experience.
-Is Dig Deep APK Mod An1 safe to use?
-Yes, Dig Deep APK Mod An1 is safe to use. It does not contain any viruses, malware, or spyware that can harm your device or data. It also does not require root access, which can be risky and complicated. Dig Deep APK Mod An1 is tested and verified by An1.com , a reliable website that offers modded games and apps. However, you should always download the APK file from a trusted source and enable unknown sources on your device at your own risk.
-How can I get more money and diamonds in Dig Deep APK Mod An1?
-You can get more money and diamonds in Dig Deep APK Mod An1 by playing the game regularly and collecting the items that you find underground. You can also get more money and diamonds by completing the achievements, quests, daily rewards, etc. that the game offers. You can also spin the wheel of fortune or play the slot machine to win more prizes. You can also watch videos or complete offers to get more rewards. However, you don't need to worry about running out of money and diamonds in Dig Deep APK Mod An1, as it gives you unlimited money and diamonds from the start.
-What are the best strategies to play Dig Deep APK Mod An1?
-Some of the best strategies to play Dig Deep APK Mod An1 are:
-
-Dig deep and fast, but don't get stuck down the hole. You can use bombs or magnets to clear the obstacles or attract the items.
-Hire workers and upgrade your equipment as soon as possible. Workers will help you dig faster and collect more items. Upgrading your equipment will improve your digging performance and unlock new areas.
-Collect as many items as possible, especially coins, diamonds, chests, keys, etc. They will help you earn more money and buy more things.
-Use boosters and items wisely. Boosters and items can help you dig faster, collect more items, increase your storage, etc. But they have a limited time or quantity, so use them when you need them.
-Complete the achievements, quests, daily rewards, etc. They will give you extra money, diamonds, items, etc. that will make your digging experience more fun and rewarding.
-
-Can I play Dig Deep APK Mod An1 with my friends?
-Unfortunately, Dig Deep APK Mod An1 does not have a multiplayer mode or a social feature that allows you to play with your friends. However, you can still share your progress and achievements with your friends by taking screenshots or recording videos of your game. You can also compare your scores and rankings with other players on the leaderboard.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/data/audio_dataset.py b/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/data/audio_dataset.py
deleted file mode 100644
index cf21422ea0059cb2d6553f93e608b8f9fa0d3a50..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/data/audio_dataset.py
+++ /dev/null
@@ -1,525 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import argparse
-import copy
-from concurrent.futures import ThreadPoolExecutor, Future
-from dataclasses import dataclass, fields
-from contextlib import ExitStack
-import gzip
-import json
-import logging
-import os
-from pathlib import Path
-import random
-import sys
-import typing as tp
-
-import torch
-import torch.nn.functional as F
-
-from .audio import audio_read, audio_info
-from .audio_utils import convert_audio
-from .zip import PathInZip
-
-try:
- import dora
-except ImportError:
- dora = None # type: ignore
-
-
-@dataclass(order=True)
-class BaseInfo:
-
- @classmethod
- def _dict2fields(cls, dictionary: dict):
- return {
- field.name: dictionary[field.name]
- for field in fields(cls) if field.name in dictionary
- }
-
- @classmethod
- def from_dict(cls, dictionary: dict):
- _dictionary = cls._dict2fields(dictionary)
- return cls(**_dictionary)
-
- def to_dict(self):
- return {
- field.name: self.__getattribute__(field.name)
- for field in fields(self)
- }
-
-
-@dataclass(order=True)
-class AudioMeta(BaseInfo):
- path: str
- duration: float
- sample_rate: int
- amplitude: tp.Optional[float] = None
- weight: tp.Optional[float] = None
- # info_path is used to load additional information about the audio file that is stored in zip files.
- info_path: tp.Optional[PathInZip] = None
-
- @classmethod
- def from_dict(cls, dictionary: dict):
- base = cls._dict2fields(dictionary)
- if 'info_path' in base and base['info_path'] is not None:
- base['info_path'] = PathInZip(base['info_path'])
- return cls(**base)
-
- def to_dict(self):
- d = super().to_dict()
- if d['info_path'] is not None:
- d['info_path'] = str(d['info_path'])
- return d
-
-
-@dataclass(order=True)
-class SegmentInfo(BaseInfo):
- meta: AudioMeta
- seek_time: float
- n_frames: int # actual number of frames without padding
- total_frames: int # total number of frames, padding included
- sample_rate: int # actual sample rate
-
-
-DEFAULT_EXTS = ['.wav', '.mp3', '.flac', '.ogg', '.m4a']
-
-logger = logging.getLogger(__name__)
-
-
-def _get_audio_meta(file_path: str, minimal: bool = True) -> AudioMeta:
- """AudioMeta from a path to an audio file.
-
- Args:
- file_path (str): Resolved path of valid audio file.
- minimal (bool): Whether to only load the minimal set of metadata (takes longer if not).
- Returns:
- AudioMeta: Audio file path and its metadata.
- """
- info = audio_info(file_path)
- amplitude: tp.Optional[float] = None
- if not minimal:
- wav, sr = audio_read(file_path)
- amplitude = wav.abs().max().item()
- return AudioMeta(file_path, info.duration, info.sample_rate, amplitude)
-
-
-def _resolve_audio_meta(m: AudioMeta, fast: bool = True) -> AudioMeta:
- """If Dora is available as a dependency, try to resolve potential relative paths
- in list of AudioMeta. This method is expected to be used when loading meta from file.
-
- Args:
- m (AudioMeta): Audio meta to resolve.
- fast (bool): If True, uses a really fast check for determining if a file is already absolute or not.
- Only valid on Linux/Mac.
- Returns:
- AudioMeta: Audio meta with resolved path.
- """
- def is_abs(m):
- if fast:
- return str(m)[0] == '/'
- else:
-            return os.path.isabs(str(m))
-
- if not dora:
- return m
-
- if not is_abs(m.path):
- m.path = dora.git_save.to_absolute_path(m.path)
- if m.info_path is not None and not is_abs(m.info_path.zip_path):
- m.info_path.zip_path = dora.git_save.to_absolute_path(m.path)
- return m
-
-
-def find_audio_files(path: tp.Union[Path, str],
- exts: tp.List[str] = DEFAULT_EXTS,
- resolve: bool = True,
- minimal: bool = True,
- progress: bool = False,
- workers: int = 0) -> tp.List[AudioMeta]:
- """Build a list of AudioMeta from a given path,
- collecting relevant audio files and fetching meta info.
-
- Args:
- path (str or Path): Path to folder containing audio files.
- exts (list of str): List of file extensions to consider for audio files.
- minimal (bool): Whether to only load the minimal set of metadata (takes longer if not).
- progress (bool): Whether to log progress on audio files collection.
- workers (int): number of parallel workers, if 0, use only the current thread.
- Returns:
- List[AudioMeta]: List of audio file path and its metadata.
- """
- audio_files = []
- futures: tp.List[Future] = []
- pool: tp.Optional[ThreadPoolExecutor] = None
- with ExitStack() as stack:
- if workers > 0:
- pool = ThreadPoolExecutor(workers)
- stack.enter_context(pool)
-
- if progress:
- print("Finding audio files...")
- for root, folders, files in os.walk(path, followlinks=True):
- for file in files:
- full_path = Path(root) / file
- if full_path.suffix.lower() in exts:
- audio_files.append(full_path)
- if pool is not None:
- futures.append(pool.submit(_get_audio_meta, str(audio_files[-1]), minimal))
- if progress:
- print(format(len(audio_files), " 8d"), end='\r', file=sys.stderr)
-
- if progress:
- print("Getting audio metadata...")
- meta: tp.List[AudioMeta] = []
- for idx, file_path in enumerate(audio_files):
- try:
- if pool is None:
- m = _get_audio_meta(str(file_path), minimal)
- else:
- m = futures[idx].result()
- if resolve:
- m = _resolve_audio_meta(m)
- except Exception as err:
- print("Error with", str(file_path), err, file=sys.stderr)
- continue
- meta.append(m)
- if progress:
- print(format((1 + idx) / len(audio_files), " 3.1%"), end='\r', file=sys.stderr)
- meta.sort()
- return meta
-
-
-def load_audio_meta(path: tp.Union[str, Path],
- resolve: bool = True, fast: bool = True) -> tp.List[AudioMeta]:
- """Load list of AudioMeta from an optionally compressed json file.
-
- Args:
- path (str or Path): Path to JSON file.
- resolve (bool): Whether to resolve the path from AudioMeta (default=True).
- fast (bool): activates some tricks to make things faster.
- Returns:
- List[AudioMeta]: List of audio file path and its total duration.
- """
- open_fn = gzip.open if str(path).lower().endswith('.gz') else open
- with open_fn(path, 'rb') as fp: # type: ignore
- lines = fp.readlines()
- meta = []
- for line in lines:
- d = json.loads(line)
- m = AudioMeta.from_dict(d)
- if resolve:
- m = _resolve_audio_meta(m, fast=fast)
- meta.append(m)
- return meta
-
-
-def save_audio_meta(path: tp.Union[str, Path], meta: tp.List[AudioMeta]):
- """Save the audio metadata to the file pointer as json.
-
- Args:
- path (str or Path): Path to JSON file.
-        meta (list of AudioMeta): List of audio meta to save.
- """
- Path(path).parent.mkdir(exist_ok=True, parents=True)
- open_fn = gzip.open if str(path).lower().endswith('.gz') else open
- with open_fn(path, 'wb') as fp: # type: ignore
- for m in meta:
- json_str = json.dumps(m.to_dict()) + '\n'
- json_bytes = json_str.encode('utf-8')
- fp.write(json_bytes)
-
-
-class AudioDataset:
- """Base audio dataset.
-
-    The dataset takes a list of AudioMeta and creates a dataset composed of segments of audio
- and potentially additional information, by creating random segments from the list of audio
- files referenced in the metadata and applying minimal data pre-processing such as resampling,
- mixing of channels, padding, etc.
-
- If no segment_duration value is provided, the AudioDataset will return the full wav for each
- audio file. Otherwise, it will randomly sample audio files and create a segment of the specified
- duration, applying padding if required.
-
- By default, only the torch Tensor corresponding to the waveform is returned. Setting return_info=True
- allows to return a tuple containing the torch Tensor and additional metadata on the segment and the
- original audio meta.
-
- Args:
- meta (tp.List[AudioMeta]): List of audio files metadata.
- segment_duration (float): Optional segment duration of audio to load.
- If not specified, the dataset will load the full audio segment from the file.
- shuffle (bool): Set to `True` to have the data reshuffled at every epoch.
- sample_rate (int): Target sample rate of the loaded audio samples.
- channels (int): Target number of channels of the loaded audio samples.
- sample_on_duration (bool): Set to `True` to sample segments with probability
- dependent on audio file duration. This is only used if `segment_duration` is provided.
- sample_on_weight (bool): Set to `True` to sample segments using the `weight` entry of
- `AudioMeta`. If `sample_on_duration` is also True, the actual weight will be the product
- of the file duration and file weight. This is only used if `segment_duration` is provided.
- min_segment_ratio (float): Minimum segment ratio to use when the audio file
- is shorter than the desired segment.
- max_read_retry (int): Maximum number of retries to sample an audio segment from the dataset.
- return_info (bool): Whether to return the wav only or return wav along with segment info and metadata.
- min_audio_duration (tp.Optional[float], optional): Minimum audio file duration, in seconds, if provided
- audio shorter than this will be filtered out.
- max_audio_duration (tp.Optional[float], optional): Maximal audio file duration in seconds, if provided
- audio longer than this will be filtered out.
- """
- def __init__(self,
- meta: tp.List[AudioMeta],
- segment_duration: tp.Optional[float] = None,
- shuffle: bool = True,
- num_samples: int = 10_000,
- sample_rate: int = 48_000,
- channels: int = 2,
- pad: bool = True,
- sample_on_duration: bool = True,
- sample_on_weight: bool = True,
- min_segment_ratio: float = 0.5,
- max_read_retry: int = 10,
- return_info: bool = False,
- min_audio_duration: tp.Optional[float] = None,
- max_audio_duration: tp.Optional[float] = None
- ):
- assert len(meta) > 0, 'No audio meta provided to AudioDataset. Please check loading of audio meta.'
- assert segment_duration is None or segment_duration > 0
- assert segment_duration is None or min_segment_ratio >= 0
- logging.debug(f'sample_on_duration: {sample_on_duration}')
- logging.debug(f'sample_on_weight: {sample_on_weight}')
- logging.debug(f'pad: {pad}')
- logging.debug(f'min_segment_ratio: {min_segment_ratio}')
-
- self.segment_duration = segment_duration
- self.min_segment_ratio = min_segment_ratio
- self.max_audio_duration = max_audio_duration
- self.min_audio_duration = min_audio_duration
- if self.min_audio_duration is not None and self.max_audio_duration is not None:
- assert self.min_audio_duration <= self.max_audio_duration
- self.meta: tp.List[AudioMeta] = self._filter_duration(meta)
- assert len(self.meta) # Fail fast if all data has been filtered.
- self.total_duration = sum(d.duration for d in self.meta)
-
- if segment_duration is None:
- num_samples = len(self.meta)
- self.num_samples = num_samples
- self.shuffle = shuffle
- self.sample_rate = sample_rate
- self.channels = channels
- self.pad = pad
- self.sample_on_weight = sample_on_weight
- self.sample_on_duration = sample_on_duration
- self.sampling_probabilities = self._get_sampling_probabilities()
- self.max_read_retry = max_read_retry
- self.return_info = return_info
-
- def __len__(self):
- return self.num_samples
-
- def _get_sampling_probabilities(self, normalized: bool = True):
- """Return the sampling probabilities for each file inside `self.meta`.
- """
- scores: tp.List[float] = []
- for file_meta in self.meta:
- score = 1.
- if self.sample_on_weight and file_meta.weight is not None:
- score *= file_meta.weight
- if self.sample_on_duration:
- score *= file_meta.duration
- scores.append(score)
- probabilities = torch.tensor(scores)
- if normalized:
- probabilities /= probabilities.sum()
- return probabilities
-
- def sample_file(self, rng: torch.Generator) -> AudioMeta:
-        """Sample a given file from `self.meta`. Can be overridden in subclasses.
- This is only called if `segment_duration` is not None.
-
- You must use the provided random number generator `rng` for reproducibility.
- """
- if not self.sample_on_weight and not self.sample_on_duration:
- file_index = int(torch.randint(len(self.sampling_probabilities), (1,), generator=rng).item())
- else:
- file_index = int(torch.multinomial(self.sampling_probabilities, 1, generator=rng).item())
-
- return self.meta[file_index]
-
- def __getitem__(self, index: int) -> tp.Union[torch.Tensor, tp.Tuple[torch.Tensor, SegmentInfo]]:
- if self.segment_duration is None:
- file_meta = self.meta[index]
- out, sr = audio_read(file_meta.path)
- out = convert_audio(out, sr, self.sample_rate, self.channels)
- n_frames = out.shape[-1]
- segment_info = SegmentInfo(file_meta, seek_time=0., n_frames=n_frames, total_frames=n_frames,
- sample_rate=self.sample_rate)
- else:
- rng = torch.Generator()
- if self.shuffle:
- # We use index, plus extra randomness
- rng.manual_seed(index + self.num_samples * random.randint(0, 2**24))
- else:
- # We only use index
- rng.manual_seed(index)
-
- for retry in range(self.max_read_retry):
- file_meta = self.sample_file(rng)
- # We add some variance in the file position even if audio file is smaller than segment
- # without ending up with empty segments
- max_seek = max(0, file_meta.duration - self.segment_duration * self.min_segment_ratio)
- seek_time = torch.rand(1, generator=rng).item() * max_seek
- try:
- out, sr = audio_read(file_meta.path, seek_time, self.segment_duration, pad=False)
- out = convert_audio(out, sr, self.sample_rate, self.channels)
- n_frames = out.shape[-1]
- target_frames = int(self.segment_duration * self.sample_rate)
- if self.pad:
- out = F.pad(out, (0, target_frames - n_frames))
- segment_info = SegmentInfo(file_meta, seek_time, n_frames=n_frames, total_frames=target_frames,
- sample_rate=self.sample_rate)
- except Exception as exc:
- logger.warning("Error opening file %s: %r", file_meta.path, exc)
- if retry == self.max_read_retry - 1:
- raise
- else:
- break
-
- if self.return_info:
- # Returns the wav and additional information on the wave segment
- return out, segment_info
- else:
- return out
-
- def collater(self, samples):
- """The collater function has to be provided to the dataloader
- if AudioDataset has return_info=True in order to properly collate
- the samples of a batch.
- """
- if self.segment_duration is None and len(samples) > 1:
- assert self.pad, "Must allow padding when batching examples of different durations."
-
- # In this case the audio reaching the collater is of variable length as segment_duration=None.
- to_pad = self.segment_duration is None and self.pad
- if to_pad:
- max_len = max([wav.shape[-1] for wav, _ in samples])
-
- def _pad_wav(wav):
- return F.pad(wav, (0, max_len - wav.shape[-1]))
-
- if self.return_info:
- if len(samples) > 0:
- assert len(samples[0]) == 2
- assert isinstance(samples[0][0], torch.Tensor)
- assert isinstance(samples[0][1], SegmentInfo)
-
- wavs = [wav for wav, _ in samples]
- segment_infos = [copy.deepcopy(info) for _, info in samples]
-
- if to_pad:
- # Each wav could be of a different duration as they are not segmented.
- for i in range(len(samples)):
-                    # Determines the total length of the signal with padding, so we update here as we pad.
- segment_infos[i].total_frames = max_len
- wavs[i] = _pad_wav(wavs[i])
-
- wav = torch.stack(wavs)
- return wav, segment_infos
- else:
- assert isinstance(samples[0], torch.Tensor)
- if to_pad:
- samples = [_pad_wav(s) for s in samples]
- return torch.stack(samples)
-
- def _filter_duration(self, meta: tp.List[AudioMeta]) -> tp.List[AudioMeta]:
- """Filters out audio files with short durations.
- Removes from meta files that have durations that will not allow to samples examples from them.
- """
- orig_len = len(meta)
-
- # Filter data that is too short.
- if self.min_audio_duration is not None:
- meta = [m for m in meta if m.duration >= self.min_audio_duration]
-
- # Filter data that is too long.
- if self.max_audio_duration is not None:
- meta = [m for m in meta if m.duration <= self.max_audio_duration]
-
- filtered_len = len(meta)
- removed_percentage = 100*(1-float(filtered_len)/orig_len)
- msg = 'Removed %.2f percent of the data because it was too short or too long.' % removed_percentage
- if removed_percentage < 10:
- logging.debug(msg)
- else:
- logging.warning(msg)
- return meta
-
- @classmethod
- def from_meta(cls, root: tp.Union[str, Path], **kwargs):
- """Instantiate AudioDataset from a path to a directory containing a manifest as a jsonl file.
-
- Args:
- root (str or Path): Path to root folder containing audio files.
- kwargs: Additional keyword arguments for the AudioDataset.
- """
- root = Path(root)
- if root.is_dir():
- if (root / 'data.jsonl').exists():
- root = root / 'data.jsonl'
- elif (root / 'data.jsonl.gz').exists():
- root = root / 'data.jsonl.gz'
- else:
- raise ValueError("Don't know where to read metadata from in the dir. "
- "Expecting either a data.jsonl or data.jsonl.gz file but none found.")
- meta = load_audio_meta(root)
- return cls(meta, **kwargs)
-
- @classmethod
- def from_path(cls, root: tp.Union[str, Path], minimal_meta: bool = True,
- exts: tp.List[str] = DEFAULT_EXTS, **kwargs):
- """Instantiate AudioDataset from a path containing (possibly nested) audio files.
-
- Args:
- root (str or Path): Path to root folder containing audio files.
- minimal_meta (bool): Whether to only load minimal metadata or not.
- exts (list of str): Extensions for audio files.
- kwargs: Additional keyword arguments for the AudioDataset.
- """
- root = Path(root)
- if root.is_file():
- meta = load_audio_meta(root, resolve=True)
- else:
- meta = find_audio_files(root, exts, minimal=minimal_meta, resolve=True)
- return cls(meta, **kwargs)
-
-
-def main():
- logging.basicConfig(stream=sys.stderr, level=logging.INFO)
- parser = argparse.ArgumentParser(
- prog='audio_dataset',
- description='Generate .jsonl files by scanning a folder.')
- parser.add_argument('root', help='Root folder with all the audio files')
- parser.add_argument('output_meta_file',
-                        help='Output file to store the metadata.')
- parser.add_argument('--complete',
- action='store_false', dest='minimal', default=True,
-                        help='Retrieve all metadata, even the ones that are expensive '
-                             'to compute (e.g. normalization).')
- parser.add_argument('--resolve',
- action='store_true', default=False,
- help='Resolve the paths to be absolute and with no symlinks.')
- parser.add_argument('--workers',
- default=10, type=int,
- help='Number of workers.')
- args = parser.parse_args()
- meta = find_audio_files(args.root, DEFAULT_EXTS, progress=True,
- resolve=args.resolve, minimal=args.minimal, workers=args.workers)
- save_audio_meta(args.output_meta_file, meta)
-
-
-if __name__ == '__main__':
- main()
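To make the deleted audio_dataset.py above easier to follow, here is a short usage sketch of the AudioDataset class it defines: build a dataset from a folder of audio files, then batch it with a PyTorch DataLoader using the dataset's own collater. This is only a sketch; the folder path, sample rate, and loader settings are placeholder assumptions, not values taken from this repository.

# Minimal usage sketch for the AudioDataset defined above; the audio folder path
# and the loader settings are placeholder assumptions.
from torch.utils.data import DataLoader

from audiocraft.data.audio_dataset import AudioDataset

dataset = AudioDataset.from_path(
    "path/to/audio_folder",   # scanned recursively for .wav/.mp3/.flac/.ogg/.m4a files
    minimal_meta=True,        # skip the expensive amplitude computation
    segment_duration=10.0,    # sample random 10-second segments
    sample_rate=32_000,
    channels=1,
    return_info=True,         # also return a SegmentInfo per example
)

# With return_info=True the dataset's collater is needed to batch (wav, info) pairs.
loader = DataLoader(dataset, batch_size=4, collate_fn=dataset.collater)
wavs, infos = next(iter(loader))
print(wavs.shape)             # torch.Size([4, 1, 320000]) given the settings above
print(infos[0].meta.path, infos[0].seek_time)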
diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/object-inspect/example/circular.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/object-inspect/example/circular.js
deleted file mode 100644
index 487a7c169d0df8c4acb6ad02b26ce76175ecfc0f..0000000000000000000000000000000000000000
--- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/object-inspect/example/circular.js
+++ /dev/null
@@ -1,6 +0,0 @@
-'use strict';
-
-var inspect = require('../');
-var obj = { a: 1, b: [3, 4] };
-obj.c = obj;
-console.log(inspect(obj));
diff --git a/spaces/firdavsyorkulov/delivery_project_fastapi/Dockerfile b/spaces/firdavsyorkulov/delivery_project_fastapi/Dockerfile
deleted file mode 100644
index b742a1870b92ce033b776c0defec1a9996889d50..0000000000000000000000000000000000000000
--- a/spaces/firdavsyorkulov/delivery_project_fastapi/Dockerfile
+++ /dev/null
@@ -1,11 +0,0 @@
-FROM python:3.9
-
-WORKDIR /code
-
-COPY ./requirements.txt /code/requirements.txt
-
-RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
-
-COPY . .
-
-CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860"]
\ No newline at end of file
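The Dockerfile above starts the service with uvicorn main:app on port 7860, so it expects a main.py in the build context that exposes a FastAPI instance named app. That file is not part of this diff; the following is only a hypothetical minimal sketch of such an entry point, not the actual delivery_project_fastapi application.

# Hypothetical main.py matching the Dockerfile's CMD ["uvicorn", "main:app", ...];
# the real application code is not shown in this diff.
from fastapi import FastAPI

app = FastAPI(title="delivery_project_fastapi (sketch)")

@app.get("/")
async def root() -> dict:
    # Minimal health-check style endpoint so `uvicorn main:app` has something to serve.
    return {"status": "ok"}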
diff --git a/spaces/flf/8983/bin/unidbg-fetch-qsign.bat b/spaces/flf/8983/bin/unidbg-fetch-qsign.bat
deleted file mode 100644
index 8b291e7303b0c07d14b714e5795473891363c85b..0000000000000000000000000000000000000000
--- a/spaces/flf/8983/bin/unidbg-fetch-qsign.bat
+++ /dev/null
@@ -1,89 +0,0 @@
-@rem
-@rem Copyright 2015 the original author or authors.
-@rem
-@rem Licensed under the Apache License, Version 2.0 (the "License");
-@rem you may not use this file except in compliance with the License.
-@rem You may obtain a copy of the License at
-@rem
-@rem https://www.apache.org/licenses/LICENSE-2.0
-@rem
-@rem Unless required by applicable law or agreed to in writing, software
-@rem distributed under the License is distributed on an "AS IS" BASIS,
-@rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-@rem See the License for the specific language governing permissions and
-@rem limitations under the License.
-@rem
-
-@if "%DEBUG%" == "" @echo off
-@rem ##########################################################################
-@rem
-@rem unidbg-fetch-qsign startup script for Windows
-@rem
-@rem ##########################################################################
-
-@rem Set local scope for the variables with windows NT shell
-if "%OS%"=="Windows_NT" setlocal
-
-set DIRNAME=%~dp0
-if "%DIRNAME%" == "" set DIRNAME=.
-set APP_BASE_NAME=%~n0
-set APP_HOME=%DIRNAME%..
-
-@rem Resolve any "." and ".." in APP_HOME to make it shorter.
-for %%i in ("%APP_HOME%") do set APP_HOME=%%~fi
-
-@rem Add default JVM options here. You can also use JAVA_OPTS and UNIDBG_FETCH_QSIGN_OPTS to pass JVM options to this script.
-set DEFAULT_JVM_OPTS=
-
-@rem Find java.exe
-if defined JAVA_HOME goto findJavaFromJavaHome
-
-set JAVA_EXE=java.exe
-%JAVA_EXE% -version >NUL 2>&1
-if "%ERRORLEVEL%" == "0" goto execute
-
-echo.
-echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
-echo.
-echo Please set the JAVA_HOME variable in your environment to match the
-echo location of your Java installation.
-
-goto fail
-
-:findJavaFromJavaHome
-set JAVA_HOME=%JAVA_HOME:"=%
-set JAVA_EXE=%JAVA_HOME%/bin/java.exe
-
-if exist "%JAVA_EXE%" goto execute
-
-echo.
-echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME%
-echo.
-echo Please set the JAVA_HOME variable in your environment to match the
-echo location of your Java installation.
-
-goto fail
-
-:execute
-@rem Setup the command line
-
-set CLASSPATH=%APP_HOME%\lib\unidbg-fetch-qsign-1.1.9.jar;%APP_HOME%\lib\unidbg-android-105.jar;%APP_HOME%\lib\ktor-server-content-negotiation-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-json-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-status-pages-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-netty-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-host-common-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-core-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-events-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-websockets-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-cio-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-network-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-utils-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-io-jvm-2.3.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk8-1.8.22.jar;%APP_HOME%\lib\kotlinx-serialization-json-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-protobuf-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-core-jvm-1.5.1.jar;%APP_HOME%\lib\logback-classic-1.2.11.jar;%APP_HOME%\lib\kotlinx-coroutines-jdk8-1.7.1.jar;%APP_HOME%\lib\kotlinx-coroutines-core-jvm-1.7.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk7-1.8.22.jar;%APP_HOME%\lib\kotlin-reflect-1.8.10.jar;%APP_HOME%\lib\kotlin-stdlib-1.8.22.jar;%APP_HOME%\lib\slf4j-api-1.7.36.jar;%APP_HOME%\lib\kotlin-stdlib-common-1.8.22.jar;%APP_HOME%\lib\config-1.4.2.jar;%APP_HOME%\lib\jansi-2.4.0.jar;%APP_HOME%\lib\netty-codec-http2-4.1.92.Final.jar;%APP_HOME%\lib\alpn-api-1.1.3.v20160715.jar;%APP_HOME%\lib\netty-transport-native-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-epoll-4.1.92.Final.jar;%APP_HOME%\lib\logback-core-1.2.11.jar;%APP_HOME%\lib\annotations-23.0.0.jar;%APP_HOME%\lib\netty-codec-http-4.1.92.Final.jar;%APP_HOME%\lib\netty-handler-4.1.92.Final.jar;%APP_HOME%\lib\netty-codec-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-epoll-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-unix-common-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-4.1.92.Final.jar;%APP_HOME%\lib\netty-buffer-4.1.92.Final.jar;%APP_HOME%\lib\netty-resolver-4.1.92.Final.jar;%APP_HOME%\lib\netty-common-4.1.92.Final.jar
-
-
-@rem Execute unidbg-fetch-qsign
-"%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %UNIDBG_FETCH_QSIGN_OPTS% -classpath "%CLASSPATH%" MainKt %*
-
-:end
-@rem End local scope for the variables with windows NT shell
-if "%ERRORLEVEL%"=="0" goto mainEnd
-
-:fail
-rem Set variable UNIDBG_FETCH_QSIGN_EXIT_CONSOLE if you need the _script_ return code instead of
-rem the _cmd.exe /c_ return code!
-if not "" == "%UNIDBG_FETCH_QSIGN_EXIT_CONSOLE%" exit 1
-exit /b 1
-
-:mainEnd
-if "%OS%"=="Windows_NT" endlocal
-
-:omega
diff --git a/spaces/flokabukie/Sepsis-status-prediction-fast-api/main.py b/spaces/flokabukie/Sepsis-status-prediction-fast-api/main.py
deleted file mode 100644
index 55f3517129b783f1fa0172656ed4762c961f5da5..0000000000000000000000000000000000000000
--- a/spaces/flokabukie/Sepsis-status-prediction-fast-api/main.py
+++ /dev/null
@@ -1,70 +0,0 @@
-from fastapi import FastAPI
-from pydantic import BaseModel
-import pickle
-import pandas as pd
-import numpy as np
-import uvicorn
-import os
-from sklearn.preprocessing import StandardScaler
-import joblib
-
-
-
-app = FastAPI(title="API")
-
-
-"""We load a machine learning model and a scaler that help us make predictions based on data."""
-model = joblib.load('model.pkl',mmap_mode='r')
-scaler = joblib.load('scaler.pkl',mmap_mode='r')
-
-
-def predict(df, endpoint='simple'):
- # Scaling
- scaled_df = scaler.transform(df)
-
- # Prediction
-    prediction = model.predict_proba(scaled_df)
-    highest_proba = prediction.max(axis=1)
-    predicted_classes = prediction.argmax(axis=1)
-
-    # Use the predicted class index (not the probability value) to pick the label.
-    predicted_labels = ["Patient does not have sepsis" if c == 0 else "Patient has Sepsis" for c in predicted_classes]
- response = []
- for label, proba in zip(predicted_labels, highest_proba):
- output = {
- "prediction": label,
- "probability of prediction": str(round(proba * 100)) + '%'
- }
- response.append(output)
- return response
-
-
-
-
-class Patient(BaseModel):
- Blood_Work_R1: float
- Blood_Pressure: float
- Blood_Work_R3: float
- BMI: float
- Blood_Work_R4: float
- Patient_age: int
-
-
-
-
-@app.get("/")
-def root():
- return {"API": "This is an API for sepsis prediction."}
-
-# Prediction endpoint (Where we will input our features)
-@app.post("/predict")
-def predict_sepsis(patient: Patient):
-
- # Make prediction
-    data = pd.DataFrame(patient.dict(), index=[0])
-    # predict() applies the scaler itself, so pass the raw dataframe to avoid scaling twice.
-    parsed = predict(df=data)
- return {"output": parsed}
-
-
-if __name__ == "__main__":
- os.environ["DEBUG"] = "True" # Enable debug mode
- uvicorn.run("main:app", reload=True)
diff --git a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/blocks_outputs/run.py b/spaces/freddyaboulton/3.1.4.9-all-demos/demos/blocks_outputs/run.py
deleted file mode 100644
index 084be0da9c4b676974e06a2cd7dc87f4b11f95e0..0000000000000000000000000000000000000000
--- a/spaces/freddyaboulton/3.1.4.9-all-demos/demos/blocks_outputs/run.py
+++ /dev/null
@@ -1,95 +0,0 @@
-import gradio as gr
-
-
-def make_markdown():
- return [
- [
- "# hello again",
- "Hello my name is frank, I am liking the small turtle you have there. It would be a shame if it went missing.",
- ' ',
- ],
- [
- "## hello again again",
- "Hello my name is frank, I am liking the small turtle you have there. It would be a shame if it went missing.",
- ' ',
- ],
- [
- "### hello thrice",
- "Hello my name is frank, I am liking the small turtle you have there. It would be a shame if it went missing.",
- ' ',
- ],
- ]
-
-
-with gr.Blocks() as demo:
- with gr.Column():
- txt = gr.Textbox(label="Small Textbox", lines=1, show_label=False)
- txt = gr.Textbox(label="Large Textbox", lines=5, show_label=False)
- num = gr.Number(label="Number", show_label=False)
- check = gr.Checkbox(label="Checkbox", show_label=False)
- check_g = gr.CheckboxGroup(
- label="Checkbox Group", choices=["One", "Two", "Three"], show_label=False
- )
- radio = gr.Radio(
- label="Radio", choices=["One", "Two", "Three"], show_label=False
- )
- drop = gr.Dropdown(
- label="Dropdown", choices=["One", "Two", "Three"], show_label=False
- )
- slider = gr.Slider(label="Slider", show_label=False)
- audio = gr.Audio(show_label=False)
- file = gr.File(show_label=False)
- video = gr.Video(show_label=False)
- image = gr.Image(show_label=False)
- ts = gr.Timeseries(show_label=False)
- df = gr.Dataframe(show_label=False)
- html = gr.HTML(show_label=False)
- json = gr.JSON(show_label=False)
- md = gr.Markdown(show_label=False)
- label = gr.Label(show_label=False)
- highlight = gr.HighlightedText(show_label=False)
- gr.Dataframe(interactive=True, col_count=(3, "fixed"), label="Dataframe")
- gr.Dataframe(interactive=True, col_count=4, label="Dataframe")
- gr.Dataframe(
- interactive=True, headers=["One", "Two", "Three", "Four"], label="Dataframe"
- )
- gr.Dataframe(
- interactive=True,
- headers=["One", "Two", "Three", "Four"],
- col_count=(4, "fixed"),
- row_count=(7, "fixed"),
- value=[[0, 0, 0, 0]],
- label="Dataframe",
- )
- gr.Dataframe(
- interactive=True, headers=["One", "Two", "Three", "Four"], col_count=4
- )
- df = gr.DataFrame(
- [
- [
- "# hello",
- "Hello my name is frank, I am liking the small turtle you have there. It would be a shame if it went missing.",
- ' ',
- ],
- [
- "## hello",
- "Hello my name is frank, I am liking the small turtle you have there. It would be a shame if it went missing.",
- ' ',
- ],
- [
- "### hello",
- "Hello my name is frank, I am liking the small turtle you have there. It would be a shame if it went missing.",
- ' ',
- ],
- ],
- headers=["One", "Two", "Three"],
- wrap=True,
- datatype=["markdown", "markdown", "html"],
- interactive=True,
- )
- btn = gr.Button("Run")
- btn.click(fn=make_markdown, inputs=None, outputs=df)
-
-
-if __name__ == "__main__":
- demo.launch()
diff --git a/spaces/fuckyoudeki/AutoGPT/autogpt/promptgenerator.py b/spaces/fuckyoudeki/AutoGPT/autogpt/promptgenerator.py
deleted file mode 100644
index 0ad7046a0c41dab356abcd0151b65890e5544cd2..0000000000000000000000000000000000000000
--- a/spaces/fuckyoudeki/AutoGPT/autogpt/promptgenerator.py
+++ /dev/null
@@ -1,138 +0,0 @@
-""" A module for generating custom prompt strings."""
-from __future__ import annotations
-
-import json
-from typing import Any
-
-
-class PromptGenerator:
- """
- A class for generating custom prompt strings based on constraints, commands,
- resources, and performance evaluations.
- """
-
- def __init__(self) -> None:
- """
- Initialize the PromptGenerator object with empty lists of constraints,
- commands, resources, and performance evaluations.
- """
- self.constraints = []
- self.commands = []
- self.resources = []
- self.performance_evaluation = []
- self.response_format = {
- "thoughts": {
- "text": "thought",
- "reasoning": "reasoning",
- "plan": "- short bulleted\n- list that conveys\n- long-term plan",
- "criticism": "constructive self-criticism",
- "speak": "thoughts summary to say to user",
- },
- "command": {"name": "command name", "args": {"arg name": "value"}},
- }
-
- def add_constraint(self, constraint: str) -> None:
- """
- Add a constraint to the constraints list.
-
- Args:
- constraint (str): The constraint to be added.
- """
- self.constraints.append(constraint)
-
- def add_command(self, command_label: str, command_name: str, args=None) -> None:
- """
- Add a command to the commands list with a label, name, and optional arguments.
-
- Args:
- command_label (str): The label of the command.
- command_name (str): The name of the command.
- args (dict, optional): A dictionary containing argument names and their
- values. Defaults to None.
- """
- if args is None:
- args = {}
-
- command_args = {arg_key: arg_value for arg_key, arg_value in args.items()}
-
- command = {
- "label": command_label,
- "name": command_name,
- "args": command_args,
- }
-
- self.commands.append(command)
-
- def _generate_command_string(self, command: dict[str, Any]) -> str:
- """
- Generate a formatted string representation of a command.
-
- Args:
- command (dict): A dictionary containing command information.
-
- Returns:
- str: The formatted command string.
- """
- args_string = ", ".join(
- f'"{key}": "{value}"' for key, value in command["args"].items()
- )
- return f'{command["label"]}: "{command["name"]}", args: {args_string}'
-
- def add_resource(self, resource: str) -> None:
- """
- Add a resource to the resources list.
-
- Args:
- resource (str): The resource to be added.
- """
- self.resources.append(resource)
-
- def add_performance_evaluation(self, evaluation: str) -> None:
- """
- Add a performance evaluation item to the performance_evaluation list.
-
- Args:
- evaluation (str): The evaluation item to be added.
- """
- self.performance_evaluation.append(evaluation)
-
- def _generate_numbered_list(self, items: list[Any], item_type="list") -> str:
- """
- Generate a numbered list from given items based on the item_type.
-
- Args:
- items (list): A list of items to be numbered.
- item_type (str, optional): The type of items in the list.
- Defaults to 'list'.
-
- Returns:
- str: The formatted numbered list.
- """
- if item_type == "command":
- return "\n".join(
- f"{i+1}. {self._generate_command_string(item)}"
- for i, item in enumerate(items)
- )
- else:
- return "\n".join(f"{i+1}. {item}" for i, item in enumerate(items))
-
- def generate_prompt_string(self) -> str:
- """
- Generate a prompt string based on the constraints, commands, resources,
- and performance evaluations.
-
- Returns:
- str: The generated prompt string.
- """
- formatted_response_format = json.dumps(self.response_format, indent=4)
- return (
- f"Constraints:\n{self._generate_numbered_list(self.constraints)}\n\n"
- "Commands:\n"
- f"{self._generate_numbered_list(self.commands, item_type='command')}\n\n"
- f"Resources:\n{self._generate_numbered_list(self.resources)}\n\n"
- "Performance Evaluation:\n"
- f"{self._generate_numbered_list(self.performance_evaluation)}\n\n"
- "You should only respond in JSON format as described below \nResponse"
- f" Format: \n{formatted_response_format} \nEnsure the response can be"
- " parsed by Python json.loads"
- )
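
A minimal sketch of driving this class end to end; the import path mirrors the file location and the constraint/command strings are purely illustrative:

from autogpt.promptgenerator import PromptGenerator  # assumed import path

generator = PromptGenerator()
generator.add_constraint("~4000 word limit for short term memory.")
generator.add_command("Google Search", "google", {"input": "<search>"})
generator.add_resource("Internet access for searches and information gathering.")
generator.add_performance_evaluation("Continuously review and analyze your actions.")

# Prints constraints, commands, resources, evaluations and the JSON response format.
print(generator.generate_prompt_string())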
diff --git a/spaces/fun-research/FC-CLIP/fcclip/data/datasets/register_pascal_voc_21_semantic.py b/spaces/fun-research/FC-CLIP/fcclip/data/datasets/register_pascal_voc_21_semantic.py
deleted file mode 100644
index 0321e66b41ac16d6a925967e51246d64cae8d050..0000000000000000000000000000000000000000
--- a/spaces/fun-research/FC-CLIP/fcclip/data/datasets/register_pascal_voc_21_semantic.py
+++ /dev/null
@@ -1,64 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import os
-
-import numpy as np
-
-from detectron2.data import DatasetCatalog, MetadataCatalog
-from detectron2.data.datasets import load_sem_seg
-
-from . import openseg_classes
-
-PASCAL_VOC_21_CATEGORIES = openseg_classes.get_pascal_21_categories_with_prompt_eng()
-
-PASCAL_VOC_21_COLORS = [k["color"] for k in PASCAL_VOC_21_CATEGORIES]
-
-MetadataCatalog.get("openvocab_pascal21_sem_seg_train").set(
- stuff_colors=PASCAL_VOC_21_COLORS[:],
-)
-
-MetadataCatalog.get("openvocab_pascal21_sem_seg_val").set(
- stuff_colors=PASCAL_VOC_21_COLORS[:],
-)
-
-
-def _get_pascal21_meta():
-    # Id 0 is reserved for ignore_label; we change ignore_label from 0
-    # to 255 in our pre-processing, so all ids are shifted by 1.
- stuff_ids = [k["id"] for k in PASCAL_VOC_21_CATEGORIES]
- assert len(stuff_ids) == 21, len(stuff_ids)
-
- # For semantic segmentation, this mapping maps from contiguous stuff id
-    # (in [0, 20], used in models) to ids in the dataset (used for processing results)
- stuff_dataset_id_to_contiguous_id = {k: i for i, k in enumerate(stuff_ids)}
- stuff_classes = [k["name"] for k in PASCAL_VOC_21_CATEGORIES]
-
- ret = {
- "stuff_dataset_id_to_contiguous_id": stuff_dataset_id_to_contiguous_id,
- "stuff_classes": stuff_classes,
- }
- return ret
-
-
-def register_all_pascal21(root):
- root = os.path.join(root, "pascal_voc_d2")
- meta = _get_pascal21_meta()
- for name, dirname in [("train", "training"), ("val", "validation")]:
- image_dir = os.path.join(root, "images", dirname)
- gt_dir = os.path.join(root, "annotations_pascal21", dirname)
- name = f"openvocab_pascal21_sem_seg_{name}"
- DatasetCatalog.register(
- name, lambda x=image_dir, y=gt_dir: load_sem_seg(y, x, gt_ext="png", image_ext="jpg")
- )
- MetadataCatalog.get(name).set(
- stuff_classes=meta["stuff_classes"][:],
- thing_dataset_id_to_contiguous_id={}, # to make Mask2Former happy
- stuff_dataset_id_to_contiguous_id=meta["stuff_dataset_id_to_contiguous_id"],
- image_root=image_dir,
- sem_seg_root=gt_dir,
- evaluator_type="sem_seg",
- ignore_label=255,
- gt_ext="png",
- )
-
-_root = os.getenv("DETECTRON2_DATASETS", "datasets")
-register_all_pascal21(_root)
\ No newline at end of file
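
A minimal sketch of pulling the registered split back out of the detectron2 catalogs; it assumes DETECTRON2_DATASETS points at a folder containing the expected pascal_voc_d2 layout:

from detectron2.data import DatasetCatalog, MetadataCatalog

# Triggers the load_sem_seg lambda registered above, so the images and annotations must exist on disk.
dataset_dicts = DatasetCatalog.get("openvocab_pascal21_sem_seg_val")
metadata = MetadataCatalog.get("openvocab_pascal21_sem_seg_val")
print(len(dataset_dicts), metadata.stuff_classes[:5])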
diff --git a/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/losses/_functional.py b/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/losses/_functional.py
deleted file mode 100644
index 74301e6d701a884ff0af8300816afc73f6814486..0000000000000000000000000000000000000000
--- a/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/losses/_functional.py
+++ /dev/null
@@ -1,290 +0,0 @@
-import math
-import numpy as np
-
-from typing import Optional
-
-import torch
-import torch.nn.functional as F
-
-__all__ = [
- "focal_loss_with_logits",
- "softmax_focal_loss_with_logits",
- "soft_jaccard_score",
- "soft_dice_score",
- "wing_loss",
-]
-
-
-def to_tensor(x, dtype=None) -> torch.Tensor:
- if isinstance(x, torch.Tensor):
- if dtype is not None:
- x = x.type(dtype)
- return x
- if isinstance(x, np.ndarray):
- x = torch.from_numpy(x)
- if dtype is not None:
- x = x.type(dtype)
- return x
- if isinstance(x, (list, tuple)):
- x = np.array(x)
- x = torch.from_numpy(x)
- if dtype is not None:
- x = x.type(dtype)
- return x
-
-
-def focal_loss_with_logits(
- output: torch.Tensor,
- target: torch.Tensor,
- gamma: float = 2.0,
- alpha: Optional[float] = 0.25,
- reduction: str = "mean",
- normalized: bool = False,
- reduced_threshold: Optional[float] = None,
- eps: float = 1e-6,
-) -> torch.Tensor:
- """Compute binary focal loss between target and output logits.
- See :class:`~pytorch_toolbelt.losses.FocalLoss` for details.
-
- Args:
- output: Tensor of arbitrary shape (predictions of the model)
- target: Tensor of the same shape as input
- gamma: Focal loss power factor
- alpha: Weight factor to balance positive and negative samples. Alpha must be in [0...1] range,
- high values will give more weight to positive class.
- reduction (string, optional): Specifies the reduction to apply to the output:
- 'none' | 'mean' | 'sum' | 'batchwise_mean'. 'none': no reduction will be applied,
- 'mean': the sum of the output will be divided by the number of
- elements in the output, 'sum': the output will be summed. Note: :attr:`size_average`
- and :attr:`reduce` are in the process of being deprecated, and in the meantime,
- specifying either of those two args will override :attr:`reduction`.
- 'batchwise_mean' computes mean loss per sample in batch. Default: 'mean'
- normalized (bool): Compute normalized focal loss (https://arxiv.org/pdf/1909.07829.pdf).
- reduced_threshold (float, optional): Compute reduced focal loss (https://arxiv.org/abs/1903.01347).
-
- References:
- https://github.com/open-mmlab/mmdetection/blob/master/mmdet/core/loss/losses.py
- """
- target = target.type(output.type())
-
- logpt = F.binary_cross_entropy_with_logits(output, target, reduction="none")
- pt = torch.exp(-logpt)
-
- # compute the loss
- if reduced_threshold is None:
- focal_term = (1.0 - pt).pow(gamma)
- else:
- focal_term = ((1.0 - pt) / reduced_threshold).pow(gamma)
- focal_term[pt < reduced_threshold] = 1
-
- loss = focal_term * logpt
-
- if alpha is not None:
- loss *= alpha * target + (1 - alpha) * (1 - target)
-
- if normalized:
- norm_factor = focal_term.sum().clamp_min(eps)
- loss /= norm_factor
-
- if reduction == "mean":
- loss = loss.mean()
- if reduction == "sum":
- loss = loss.sum()
- if reduction == "batchwise_mean":
- loss = loss.sum(0)
-
- return loss
-
-
-def softmax_focal_loss_with_logits(
- output: torch.Tensor,
- target: torch.Tensor,
- gamma: float = 2.0,
- reduction="mean",
- normalized=False,
- reduced_threshold: Optional[float] = None,
- eps: float = 1e-6,
-) -> torch.Tensor:
- """Softmax version of focal loss between target and output logits.
- See :class:`~pytorch_toolbelt.losses.FocalLoss` for details.
-
- Args:
- output: Tensor of shape [B, C, *] (Similar to nn.CrossEntropyLoss)
- target: Tensor of shape [B, *] (Similar to nn.CrossEntropyLoss)
- reduction (string, optional): Specifies the reduction to apply to the output:
- 'none' | 'mean' | 'sum' | 'batchwise_mean'. 'none': no reduction will be applied,
- 'mean': the sum of the output will be divided by the number of
- elements in the output, 'sum': the output will be summed. Note: :attr:`size_average`
- and :attr:`reduce` are in the process of being deprecated, and in the meantime,
- specifying either of those two args will override :attr:`reduction`.
- 'batchwise_mean' computes mean loss per sample in batch. Default: 'mean'
- normalized (bool): Compute normalized focal loss (https://arxiv.org/pdf/1909.07829.pdf).
- reduced_threshold (float, optional): Compute reduced focal loss (https://arxiv.org/abs/1903.01347).
- """
- log_softmax = F.log_softmax(output, dim=1)
-
- loss = F.nll_loss(log_softmax, target, reduction="none")
- pt = torch.exp(-loss)
-
- # compute the loss
- if reduced_threshold is None:
- focal_term = (1.0 - pt).pow(gamma)
- else:
- focal_term = ((1.0 - pt) / reduced_threshold).pow(gamma)
- focal_term[pt < reduced_threshold] = 1
-
- loss = focal_term * loss
-
- if normalized:
- norm_factor = focal_term.sum().clamp_min(eps)
- loss = loss / norm_factor
-
- if reduction == "mean":
- loss = loss.mean()
- if reduction == "sum":
- loss = loss.sum()
- if reduction == "batchwise_mean":
- loss = loss.sum(0)
-
- return loss
-
-
-def soft_jaccard_score(
- output: torch.Tensor,
- target: torch.Tensor,
- smooth: float = 0.0,
- eps: float = 1e-7,
- dims=None,
-) -> torch.Tensor:
- assert output.size() == target.size()
- if dims is not None:
- intersection = torch.sum(output * target, dim=dims)
- cardinality = torch.sum(output + target, dim=dims)
- else:
- intersection = torch.sum(output * target)
- cardinality = torch.sum(output + target)
-
- union = cardinality - intersection
- jaccard_score = (intersection + smooth) / (union + smooth).clamp_min(eps)
- return jaccard_score
-
-
-def soft_dice_score(
- output: torch.Tensor,
- target: torch.Tensor,
- smooth: float = 0.0,
- eps: float = 1e-7,
- dims=None,
-) -> torch.Tensor:
- assert output.size() == target.size()
- if dims is not None:
- intersection = torch.sum(output * target, dim=dims)
- cardinality = torch.sum(output + target, dim=dims)
- else:
- intersection = torch.sum(output * target)
- cardinality = torch.sum(output + target)
- dice_score = (2.0 * intersection + smooth) / (cardinality + smooth).clamp_min(eps)
- return dice_score
-
-
-def soft_tversky_score(
- output: torch.Tensor,
- target: torch.Tensor,
- alpha: float,
- beta: float,
- smooth: float = 0.0,
- eps: float = 1e-7,
- dims=None,
-) -> torch.Tensor:
- assert output.size() == target.size()
- if dims is not None:
- intersection = torch.sum(output * target, dim=dims) # TP
- fp = torch.sum(output * (1.0 - target), dim=dims)
- fn = torch.sum((1 - output) * target, dim=dims)
- else:
- intersection = torch.sum(output * target) # TP
- fp = torch.sum(output * (1.0 - target))
- fn = torch.sum((1 - output) * target)
-
- tversky_score = (intersection + smooth) / (
- intersection + alpha * fp + beta * fn + smooth
- ).clamp_min(eps)
- return tversky_score
-
-
-def wing_loss(
- output: torch.Tensor, target: torch.Tensor, width=5, curvature=0.5, reduction="mean"
-):
- """Wing loss
-
- References:
- https://arxiv.org/pdf/1711.06753.pdf
-
- """
- diff_abs = (target - output).abs()
- loss = diff_abs.clone()
-
- idx_smaller = diff_abs < width
- idx_bigger = diff_abs >= width
-
- loss[idx_smaller] = width * torch.log(1 + diff_abs[idx_smaller] / curvature)
-
- C = width - width * math.log(1 + width / curvature)
- loss[idx_bigger] = loss[idx_bigger] - C
-
- if reduction == "sum":
- loss = loss.sum()
-
- if reduction == "mean":
- loss = loss.mean()
-
- return loss
-
-
-def label_smoothed_nll_loss(
- lprobs: torch.Tensor,
- target: torch.Tensor,
- epsilon: float,
- ignore_index=None,
- reduction="mean",
- dim=-1,
-) -> torch.Tensor:
- """NLL loss with label smoothing
-
- References:
- https://github.com/pytorch/fairseq/blob/master/fairseq/criterions/label_smoothed_cross_entropy.py
-
- Args:
- lprobs (torch.Tensor): Log-probabilities of predictions (e.g after log_softmax)
-
- """
- if target.dim() == lprobs.dim() - 1:
- target = target.unsqueeze(dim)
-
- if ignore_index is not None:
- pad_mask = target.eq(ignore_index)
- target = target.masked_fill(pad_mask, 0)
- nll_loss = -lprobs.gather(dim=dim, index=target)
- smooth_loss = -lprobs.sum(dim=dim, keepdim=True)
-
- # nll_loss.masked_fill_(pad_mask, 0.0)
- # smooth_loss.masked_fill_(pad_mask, 0.0)
- nll_loss = nll_loss.masked_fill(pad_mask, 0.0)
- smooth_loss = smooth_loss.masked_fill(pad_mask, 0.0)
- else:
- nll_loss = -lprobs.gather(dim=dim, index=target)
- smooth_loss = -lprobs.sum(dim=dim, keepdim=True)
-
- nll_loss = nll_loss.squeeze(dim)
- smooth_loss = smooth_loss.squeeze(dim)
-
- if reduction == "sum":
- nll_loss = nll_loss.sum()
- smooth_loss = smooth_loss.sum()
- if reduction == "mean":
- nll_loss = nll_loss.mean()
- smooth_loss = smooth_loss.mean()
-
- eps_i = epsilon / lprobs.size(dim)
- loss = (1.0 - epsilon) * nll_loss + eps_i * smooth_loss
- return loss
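
A minimal sketch exercising two of the scores above on random tensors; the import path mirrors the file location and the shapes are illustrative:

import torch
from segmentation_models_pytorch.losses._functional import soft_dice_score, soft_jaccard_score  # assumed import path

pred = torch.rand(4, 1, 32, 32)                    # predicted probabilities
target = (torch.rand(4, 1, 32, 32) > 0.5).float()  # binary ground-truth masks

# With dims=(0, 2, 3) the sums run over batch and spatial dims, giving one score per channel.
dice = soft_dice_score(pred, target, smooth=0.0, eps=1e-7, dims=(0, 2, 3))
iou = soft_jaccard_score(pred, target, smooth=0.0, eps=1e-7, dims=(0, 2, 3))
print(dice, iou)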
diff --git a/spaces/giridharvaruganti/facial-keypoints-detection/README.md b/spaces/giridharvaruganti/facial-keypoints-detection/README.md
deleted file mode 100644
index 101791f094524b1205ff8b2eb8372cb9a20ca632..0000000000000000000000000000000000000000
--- a/spaces/giridharvaruganti/facial-keypoints-detection/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Facial Keypoints Detection
-emoji: 🔥
-colorFrom: gray
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.39.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git "a/spaces/giswqs/Streamlit/pages/12_\360\237\214\262_Land_Cover_Mapping.py" "b/spaces/giswqs/Streamlit/pages/12_\360\237\214\262_Land_Cover_Mapping.py"
deleted file mode 100644
index 4daa1664fd106b791a7a48d88cf9855cd8b17655..0000000000000000000000000000000000000000
--- "a/spaces/giswqs/Streamlit/pages/12_\360\237\214\262_Land_Cover_Mapping.py"
+++ /dev/null
@@ -1,111 +0,0 @@
-import datetime
-import ee
-import streamlit as st
-import geemap.foliumap as geemap
-
-st.set_page_config(layout="wide")
-
-st.sidebar.info(
- """
- - Web App URL:
- - GitHub repository:
- """
-)
-
-st.sidebar.title("Contact")
-st.sidebar.info(
- """
- Qiusheng Wu at [wetlands.io](https://wetlands.io) | [GitHub](https://github.com/giswqs) | [Twitter](https://twitter.com/giswqs) | [YouTube](https://www.youtube.com/c/QiushengWu) | [LinkedIn](https://www.linkedin.com/in/qiushengwu)
- """
-)
-
-st.title("Comparing Global Land Cover Maps")
-
-col1, col2 = st.columns([4, 1])
-
-Map = geemap.Map()
-Map.add_basemap("ESA WorldCover 2020 S2 FCC")
-Map.add_basemap("ESA WorldCover 2020 S2 TCC")
-Map.add_basemap("HYBRID")
-
-esa = ee.ImageCollection("ESA/WorldCover/v100").first()
-esa_vis = {"bands": ["Map"]}
-
-
-esri = ee.ImageCollection(
- "projects/sat-io/open-datasets/landcover/ESRI_Global-LULC_10m"
-).mosaic()
-esri_vis = {
- "min": 1,
- "max": 10,
- "palette": [
- "#1A5BAB",
- "#358221",
- "#A7D282",
- "#87D19E",
- "#FFDB5C",
- "#EECFA8",
- "#ED022A",
- "#EDE9E4",
- "#F2FAFF",
- "#C8C8C8",
- ],
-}
-
-
-markdown = """
- - [Dynamic World Land Cover](https://developers.google.com/earth-engine/datasets/catalog/GOOGLE_DYNAMICWORLD_V1?hl=en)
- - [ESA Global Land Cover](https://developers.google.com/earth-engine/datasets/catalog/ESA_WorldCover_v100)
- - [ESRI Global Land Cover](https://samapriya.github.io/awesome-gee-community-datasets/projects/esrilc2020)
-
-"""
-
-with col2:
-
- longitude = st.number_input("Longitude", -180.0, 180.0, -89.3998)
- latitude = st.number_input("Latitude", -90.0, 90.0, 43.0886)
- zoom = st.number_input("Zoom", 0, 20, 11)
-
- Map.setCenter(longitude, latitude, zoom)
-
- start = st.date_input("Start Date for Dynamic World", datetime.date(2020, 1, 1))
- end = st.date_input("End Date for Dynamic World", datetime.date(2021, 1, 1))
-
- start_date = start.strftime("%Y-%m-%d")
- end_date = end.strftime("%Y-%m-%d")
-
- region = ee.Geometry.BBox(-179, -89, 179, 89)
- dw = geemap.dynamic_world(region, start_date, end_date, return_type="hillshade")
-
- layers = {
- "Dynamic World": geemap.ee_tile_layer(dw, {}, "Dynamic World Land Cover"),
- "ESA Land Cover": geemap.ee_tile_layer(esa, esa_vis, "ESA Land Cover"),
- "ESRI Land Cover": geemap.ee_tile_layer(esri, esri_vis, "ESRI Land Cover"),
- }
-
- options = list(layers.keys())
- left = st.selectbox("Select a left layer", options, index=1)
- right = st.selectbox("Select a right layer", options, index=0)
-
- left_layer = layers[left]
- right_layer = layers[right]
-
- Map.split_map(left_layer, right_layer)
-
- legend = st.selectbox("Select a legend", options, index=options.index(right))
- if legend == "Dynamic World":
- Map.add_legend(
- title="Dynamic World Land Cover",
- builtin_legend="Dynamic_World",
- )
- elif legend == "ESA Land Cover":
- Map.add_legend(title="ESA Land Cover", builtin_legend="ESA_WorldCover")
- elif legend == "ESRI Land Cover":
- Map.add_legend(title="ESRI Land Cover", builtin_legend="ESRI_LandCover")
-
- with st.expander("Data sources"):
- st.markdown(markdown)
-
-
-with col1:
- Map.to_streamlit(height=750)
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Dmc Devil May Cry 5 Crack Only Download !!EXCLUSIVE!!.md b/spaces/gotiQspiryo/whisper-ui/examples/Dmc Devil May Cry 5 Crack Only Download !!EXCLUSIVE!!.md
deleted file mode 100644
index 261e5fea2f7ac8e6dd0157b9cbe84997b7b20cac..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/Dmc Devil May Cry 5 Crack Only Download !!EXCLUSIVE!!.md
+++ /dev/null
@@ -1,28 +0,0 @@
-
-How to Download and Install DMC Devil May Cry 5 Crack Only
-If you are looking for a way to play DMC Devil May Cry 5 for free, you might be interested in downloading and installing the crack only version of the game. This is a modified version of the game that bypasses the DRM protection and allows you to play without a valid license key. However, before you proceed, you should be aware of the risks and disadvantages of using a cracked game.
-What is DMC Devil May Cry 5?
-DMC Devil May Cry 5 is the latest installment in the popular action-adventure hack and slash series developed by Capcom. The game follows the story of Dante, a young demon hunter who must face his own demonic heritage and fight against the forces of evil. The game features a fast-paced and stylish combat system, a variety of weapons and abilities, and a rich and immersive world. The game was released on March 8, 2019 for Windows, PlayStation 4, and Xbox One.
-What is DMC Devil May Cry 5 Crack Only?
-DMC Devil May Cry 5 Crack Only is a modified version of the game that removes the DRM protection and allows you to play without a valid license key. The crack only version does not include the full game files, but only the executable file and some other files that are needed to run the game. The crack only version is usually distributed by groups of hackers or pirates who want to share the game for free.
-How to Download and Install DMC Devil May Cry 5 Crack Only?
-To download and install DMC Devil May Cry 5 Crack Only, you will need to follow these steps:
-
-Download the full game files from a trusted source. You can either buy the game from an official platform like Steam or find a torrent link from a reputable site. Make sure you have enough space on your hard drive and a good internet connection.
-Extract the game files using a program like WinRAR or 7-Zip. You will get a folder with the game files inside.
-Download the crack only files from a reliable source. You can either find them on a website that offers cracks or use a torrent link from a verified uploader. Be careful not to download any malware or viruses along with the crack files.
-Extract the crack only files using a program like WinRAR or 7-Zip. You will get another folder with the crack files inside.
-Copy the crack files to the game installation folder. You will need to overwrite some of the original files with the cracked ones. The game installation folder is usually located in C:\Program Files (x86)\Steam\steamapps\common\DMC Devil May Cry 5 or C:\Program Files\DMC Devil May Cry 5 depending on your system.
-Run the game as administrator. You should be able to play DMC Devil May Cry 5 without any problems.
-
-What are the Risks and Disadvantages of Using DMC Devil May Cry 5 Crack Only?
-While using DMC Devil May Cry 5 Crack Only might seem tempting, you should be aware of the risks and disadvantages of doing so. Some of them are:
-
-
-You might be breaking the law. Depending on your country's laws, downloading and installing cracked games might be considered illegal and punishable by fines or jail time. You might also be infringing on the intellectual property rights of the developers and publishers of the game.
-You might be exposing your computer to malware or viruses. Some of the crack files might contain malicious code that can harm your system or steal your personal information. You might also be downloading unwanted programs or ads along with the crack files.
-You might be missing out on updates and features. The crack only version might not work with future updates or patches of the game. You might also be unable to access some of the features or content of the game, such as online multiplayer, DLCs, achievements, etc.
-You might be compromising your gaming experience. The crack only version might have bugs or errors that can affect your gameplay.
-
-
\ No newline at end of file
diff --git a/spaces/gotiQspiryo/whisper-ui/examples/HD Online Player (Hum Dil De Chuke Sanam Love Full Mov).md b/spaces/gotiQspiryo/whisper-ui/examples/HD Online Player (Hum Dil De Chuke Sanam Love Full Mov).md
deleted file mode 100644
index 91a0ca96bcf7d901171dc234b8aee8faef005a05..0000000000000000000000000000000000000000
--- a/spaces/gotiQspiryo/whisper-ui/examples/HD Online Player (Hum Dil De Chuke Sanam Love Full Mov).md
+++ /dev/null
@@ -1,19 +0,0 @@
-
-How to Watch Hum Dil De Chuke Sanam Online in HD Quality
-Hum Dil De Chuke Sanam is a 1999 Hindi romantic drama movie directed by Sanjay Leela Bhansali and starring Salman Khan, Aishwarya Rai Bachchan and Ajay Devgn. The movie tells the story of Nandini, a young woman who falls in love with Sameer, a musician who comes to learn from her father. However, her father arranges her marriage with Vanraj, a lawyer who loves her unconditionally. When Vanraj discovers Nandini's past, he decides to take her to Italy to reunite her with Sameer.
-If you are a fan of this movie and want to watch it online in HD quality, you have several options to choose from. Here are some of the best ways to stream Hum Dil De Chuke Sanam online:
-
-Eros Now : Eros Now is a popular streaming platform that offers a wide range of Indian movies and shows. You can watch Hum Dil De Chuke Sanam on Eros Now with a premium subscription that costs $4.99 per month or $49.99 per year. You can also enjoy subtitles in Arabic and English with the premium plan.
-JioCinema : JioCinema is another streaming service that features a large collection of Hindi movies and shows. You can watch Hum Dil De Chuke Sanam on JioCinema for free if you are a Jio subscriber. You can also download the movie for offline viewing on your device.
-JustWatch : JustWatch is a website that helps you find where to watch movies and shows online. You can use JustWatch to search for Hum Dil De Chuke Sanam and see which streaming platforms offer it in your region. You can also compare prices and quality of different services.
-fmovie : fmovie is a website that allows you to watch movies online for free without registration. You can watch Hum Dil De Chuke Sanam on fmovie in HD quality with English subtitles. However, be careful of pop-up ads and malware that may affect your device.
-
-Hum Dil De Chuke Sanam is a classic movie that showcases the beauty of love, music and culture. Whether you want to watch it for the first time or relive the emotions, you can find it online in HD quality with these options.
-
-Hum Dil De Chuke Sanam is not only a love story, but also a celebration of Indian culture, music and art. The movie showcases the rich traditions of Rajasthan and Gujarat, with colorful costumes, jewelry, dances and festivals. The movie also features some of the most melodious and soulful songs composed by Ismail Darbar and sung by renowned singers like Udit Narayan, Kavita Krishnamurthy, Alka Yagnik and Kumar Sanu. The songs blend classical and folk elements with modern beats and instruments.
-
-The movie received critical acclaim and commercial success upon its release. It won several awards, including four National Film Awards and nine Filmfare Awards. It was also India's official entry for the Academy Award for Best Foreign Language Film, but was not nominated. The movie was praised for its direction, cinematography, editing, music and performances. The chemistry between Salman Khan and Aishwarya Rai Bachchan was especially appreciated, as they portrayed the passion and pain of their characters with conviction. Ajay Devgn also impressed with his subtle and mature portrayal of Vanraj, who emerges as the true hero of the movie.
-Hum Dil De Chuke Sanam is a movie that will touch your heart and make you cry, laugh and smile. It is a movie that explores the meaning of love, loyalty and sacrifice. It is a movie that will stay with you long after it ends.
-
-
\ No newline at end of file
diff --git a/spaces/gradio/leaderboard/README.md b/spaces/gradio/leaderboard/README.md
deleted file mode 100644
index ae0478b5e78fc600cdbfb9de6144df333bb260d9..0000000000000000000000000000000000000000
--- a/spaces/gradio/leaderboard/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
-
----
-title: leaderboard
-emoji: 🔥
-colorFrom: indigo
-colorTo: indigo
-sdk: gradio
-sdk_version: 4.1.2
-app_file: run.py
-pinned: false
-hf_oauth: true
----
diff --git a/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Chat/ChatMessage.tsx b/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Chat/ChatMessage.tsx
deleted file mode 100644
index b7d7abdd7d283c5c042a89cf0b23f6f11702a3f2..0000000000000000000000000000000000000000
--- a/spaces/gsaivinay/Llama-2-13B-GGML-UI/components/Chat/ChatMessage.tsx
+++ /dev/null
@@ -1,288 +0,0 @@
-import {
- IconCheck,
- IconCopy,
- IconEdit,
- IconRobot,
- IconTrash,
- IconUser,
-} from '@tabler/icons-react';
-import { FC, memo, useContext, useEffect, useRef, useState } from 'react';
-
-import { useTranslation } from 'next-i18next';
-
-import { updateConversation } from '@/utils/app/conversation';
-
-import { Message } from '@/types/chat';
-
-import HomeContext from '@/pages/api/home/home.context';
-
-import { CodeBlock } from '../Markdown/CodeBlock';
-import { MemoizedReactMarkdown } from '../Markdown/MemoizedReactMarkdown';
-
-import rehypeMathjax from 'rehype-mathjax';
-import remarkGfm from 'remark-gfm';
-import remarkMath from 'remark-math';
-
-export interface Props {
- message: Message;
- messageIndex: number;
- onEdit?: (editedMessage: Message) => void
-}
-
-export const ChatMessage: FC<Props> = memo(({ message, messageIndex, onEdit }) => {
- const { t } = useTranslation('chat');
-
- const {
- state: { selectedConversation, conversations, currentMessage, messageIsStreaming },
- dispatch: homeDispatch,
- } = useContext(HomeContext);
-
- const [isEditing, setIsEditing] = useState(false);
- const [isTyping, setIsTyping] = useState(false);
- const [messageContent, setMessageContent] = useState(message.content);
- const [messagedCopied, setMessageCopied] = useState(false);
-
-  const textareaRef = useRef<HTMLTextAreaElement>(null);
-
- const toggleEditing = () => {
- setIsEditing(!isEditing);
- };
-
-  const handleInputChange = (event: React.ChangeEvent<HTMLTextAreaElement>) => {
- setMessageContent(event.target.value);
- if (textareaRef.current) {
- textareaRef.current.style.height = 'inherit';
- textareaRef.current.style.height = `${textareaRef.current.scrollHeight}px`;
- }
- };
-
- const handleEditMessage = () => {
- if (message.content != messageContent) {
- if (selectedConversation && onEdit) {
- onEdit({ ...message, content: messageContent });
- }
- }
- setIsEditing(false);
- };
-
- const handleDeleteMessage = () => {
- if (!selectedConversation) return;
-
- const { messages } = selectedConversation;
- const findIndex = messages.findIndex((elm) => elm === message);
-
- if (findIndex < 0) return;
-
- if (
- findIndex < messages.length - 1 &&
- messages[findIndex + 1].role === 'assistant'
- ) {
- messages.splice(findIndex, 2);
- } else {
- messages.splice(findIndex, 1);
- }
- const updatedConversation = {
- ...selectedConversation,
- messages,
- };
-
- const { single, all } = updateConversation(
- updatedConversation,
- conversations,
- );
- homeDispatch({ field: 'selectedConversation', value: single });
- homeDispatch({ field: 'conversations', value: all });
- };
-
-  const handlePressEnter = (e: React.KeyboardEvent<HTMLTextAreaElement>) => {
- if (e.key === 'Enter' && !isTyping && !e.shiftKey) {
- e.preventDefault();
- handleEditMessage();
- }
- };
-
- const copyOnClick = () => {
- if (!navigator.clipboard) return;
-
- navigator.clipboard.writeText(message.content).then(() => {
- setMessageCopied(true);
- setTimeout(() => {
- setMessageCopied(false);
- }, 2000);
- });
- };
-
- useEffect(() => {
- setMessageContent(message.content);
- }, [message.content]);
-
-
- useEffect(() => {
- if (textareaRef.current) {
- textareaRef.current.style.height = 'inherit';
- textareaRef.current.style.height = `${textareaRef.current.scrollHeight}px`;
- }
- }, [isEditing]);
-
- return (
-
-
-
- {message.role === 'assistant' ? (
-
- ) : (
-
- )}
-
-
-
- {message.role === 'user' ? (
-
- {isEditing ? (
-
- ) : (
-
- {message.content}
-
- )}
-
- {!isEditing && (
-
-
-
-
-
-
-
-
- )}
-
- ) : (
-
-
▍
- }
- children[0] = (children[0] as string).replace("`▍`", "▍")
- }
- const match = /language-(\w+)/.exec(className || '');
- return !inline ? (
-
- ) : (
-
- {children}
-
- );
- },
- table({ children }) {
- return (
-
- );
- },
- th({ children }) {
- return (
-
- {children}
-
- );
- },
- td({ children }) {
- return (
-
- {children}
-
- );
- },
- }}
- >
- {`${message.content}${
- messageIsStreaming && messageIndex == (selectedConversation?.messages.length ?? 0) - 1 ? '`▍`' : ''
- }`}
-
-
-
- {messagedCopied ? (
-
- ) : (
-
-
-
- )}
-
-
- )}
-
-
-
- );
-});
-ChatMessage.displayName = 'ChatMessage';
diff --git a/spaces/gsharma/url-summarizer/app.py b/spaces/gsharma/url-summarizer/app.py
deleted file mode 100644
index c00d99a3307c0a78672b2365daa6582bf852021d..0000000000000000000000000000000000000000
--- a/spaces/gsharma/url-summarizer/app.py
+++ /dev/null
@@ -1,17 +0,0 @@
-import gradio as gr
-from transformers import pipeline
-import requests
-
-def summarize_url(url):
- r = requests.get(url)
- url_contents = r.text
- summarizer = pipeline("summarization",
- model="facebook/bart-large-cnn",
- max_length=1024, # or 512, or whatever your cut-off is
- truncation=True
-)
- summarized_text = summarizer(url_contents, min_length=30, do_sample=False)
- return summarized_text[0]['summary_text']
-
-iface = gr.Interface(fn=summarize_url, inputs="text", outputs="text")
-iface.launch()
\ No newline at end of file
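
The function above hands the raw HTML of the page to the summarizer, so much of the model's input budget goes to markup and truncation=True mostly just keeps the pipeline from erroring out. A hedged sketch, not part of the original app, of stripping tags with BeautifulSoup before summarizing:

import requests
from bs4 import BeautifulSoup
from transformers import pipeline

def summarize_url_text_only(url: str) -> str:
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style"]):
        tag.decompose()  # drop non-visible script/style content
    text = soup.get_text(separator=" ", strip=True)
    summarizer = pipeline("summarization", model="facebook/bart-large-cnn", truncation=True)
    return summarizer(text, min_length=30, do_sample=False)[0]["summary_text"]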
diff --git a/spaces/gulabpatel/GFP_GAN/gfpgan/utils.py b/spaces/gulabpatel/GFP_GAN/gfpgan/utils.py
deleted file mode 100644
index f3e163e9e21a2e56d7dce404cfd2b21bcc61402f..0000000000000000000000000000000000000000
--- a/spaces/gulabpatel/GFP_GAN/gfpgan/utils.py
+++ /dev/null
@@ -1,130 +0,0 @@
-import cv2
-import os
-import torch
-from basicsr.utils import img2tensor, tensor2img
-from basicsr.utils.download_util import load_file_from_url
-from facexlib.utils.face_restoration_helper import FaceRestoreHelper
-from torchvision.transforms.functional import normalize
-
-from gfpgan.archs.gfpganv1_arch import GFPGANv1
-from gfpgan.archs.gfpganv1_clean_arch import GFPGANv1Clean
-
-ROOT_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
-
-
-class GFPGANer():
- """Helper for restoration with GFPGAN.
-
- It will detect and crop faces, and then resize the faces to 512x512.
-    GFPGAN is used to restore the resized faces.
- The background is upsampled with the bg_upsampler.
-    Finally, the faces will be pasted back onto the upsampled background image.
-
- Args:
-        model_path (str): The path to the GFPGAN model. It can also be a URL, in which case the weights are downloaded automatically first.
- upscale (float): The upscale of the final output. Default: 2.
- arch (str): The GFPGAN architecture. Option: clean | original. Default: clean.
- channel_multiplier (int): Channel multiplier for large networks of StyleGAN2. Default: 2.
- bg_upsampler (nn.Module): The upsampler for the background. Default: None.
- """
-
- def __init__(self, model_path, upscale=2, arch='clean', channel_multiplier=2, bg_upsampler=None):
- self.upscale = upscale
- self.bg_upsampler = bg_upsampler
-
- # initialize model
- self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
- # initialize the GFP-GAN
- if arch == 'clean':
- self.gfpgan = GFPGANv1Clean(
- out_size=512,
- num_style_feat=512,
- channel_multiplier=channel_multiplier,
- decoder_load_path=None,
- fix_decoder=False,
- num_mlp=8,
- input_is_latent=True,
- different_w=True,
- narrow=1,
- sft_half=True)
- else:
- self.gfpgan = GFPGANv1(
- out_size=512,
- num_style_feat=512,
- channel_multiplier=channel_multiplier,
- decoder_load_path=None,
- fix_decoder=True,
- num_mlp=8,
- input_is_latent=True,
- different_w=True,
- narrow=1,
- sft_half=True)
- # initialize face helper
- self.face_helper = FaceRestoreHelper(
- upscale,
- face_size=512,
- crop_ratio=(1, 1),
- det_model='retinaface_resnet50',
- save_ext='png',
- device=self.device)
-
- if model_path.startswith('https://'):
- model_path = load_file_from_url(
- url=model_path, model_dir=os.path.join(ROOT_DIR, 'gfpgan/weights'), progress=True, file_name=None)
- loadnet = torch.load(model_path)
- if 'params_ema' in loadnet:
- keyname = 'params_ema'
- else:
- keyname = 'params'
- self.gfpgan.load_state_dict(loadnet[keyname], strict=True)
- self.gfpgan.eval()
- self.gfpgan = self.gfpgan.to(self.device)
-
- @torch.no_grad()
- def enhance(self, img, has_aligned=False, only_center_face=False, paste_back=True):
- self.face_helper.clean_all()
-
- if has_aligned: # the inputs are already aligned
- img = cv2.resize(img, (512, 512))
- self.face_helper.cropped_faces = [img]
- else:
- self.face_helper.read_image(img)
- # get face landmarks for each face
- self.face_helper.get_face_landmarks_5(only_center_face=only_center_face, eye_dist_threshold=5)
- # eye_dist_threshold=5: skip faces whose eye distance is smaller than 5 pixels
- # TODO: even with eye_dist_threshold, it will still introduce wrong detections and restorations.
- # align and warp each face
- self.face_helper.align_warp_face()
-
- # face restoration
- for cropped_face in self.face_helper.cropped_faces:
- # prepare data
- cropped_face_t = img2tensor(cropped_face / 255., bgr2rgb=True, float32=True)
- normalize(cropped_face_t, (0.5, 0.5, 0.5), (0.5, 0.5, 0.5), inplace=True)
- cropped_face_t = cropped_face_t.unsqueeze(0).to(self.device)
-
- try:
- output = self.gfpgan(cropped_face_t, return_rgb=False)[0]
- # convert to image
- restored_face = tensor2img(output.squeeze(0), rgb2bgr=True, min_max=(-1, 1))
- except RuntimeError as error:
- print(f'\tFailed inference for GFPGAN: {error}.')
- restored_face = cropped_face
-
- restored_face = restored_face.astype('uint8')
- self.face_helper.add_restored_face(restored_face)
-
- if not has_aligned and paste_back:
- # upsample the background
- if self.bg_upsampler is not None:
- # Now only support RealESRGAN for upsampling background
- bg_img = self.bg_upsampler.enhance(img, outscale=self.upscale)[0]
- else:
- bg_img = None
-
- self.face_helper.get_inverse_affine(None)
- # paste each restored face to the input image
- restored_img = self.face_helper.paste_faces_to_input_image(upsample_img=bg_img)
- return self.face_helper.cropped_faces, self.face_helper.restored_faces, restored_img
- else:
- return self.face_helper.cropped_faces, self.face_helper.restored_faces, None
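
A minimal sketch of how this helper might be called; the checkpoint and image paths are hypothetical and the import path mirrors the file location:

import cv2
from gfpgan.utils import GFPGANer  # assumed import path

restorer = GFPGANer(
    model_path='experiments/pretrained_models/GFPGANv1.3.pth',  # hypothetical local checkpoint
    upscale=2,
    arch='clean',
    channel_multiplier=2,
    bg_upsampler=None)

img = cv2.imread('inputs/whole_imgs/00.jpg', cv2.IMREAD_COLOR)  # hypothetical input image
cropped_faces, restored_faces, restored_img = restorer.enhance(
    img, has_aligned=False, only_center_face=False, paste_back=True)
cv2.imwrite('restored.png', restored_img)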
diff --git a/spaces/gwang-kim/DATID-3D/eg3d/viz/render_depth_sample_widget.py b/spaces/gwang-kim/DATID-3D/eg3d/viz/render_depth_sample_widget.py
deleted file mode 100644
index 27c48f748e23d465c6200687c8280541df2f28b9..0000000000000000000000000000000000000000
--- a/spaces/gwang-kim/DATID-3D/eg3d/viz/render_depth_sample_widget.py
+++ /dev/null
@@ -1,40 +0,0 @@
-# SPDX-FileCopyrightText: Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-# SPDX-License-Identifier: LicenseRef-NvidiaProprietary
-#
-# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
-# property and proprietary rights in and to this material, related
-# documentation and any modifications thereto. Any use, reproduction,
-# disclosure or distribution of this material and related documentation
-# without an express license agreement from NVIDIA CORPORATION or
-# its affiliates is strictly prohibited.
-
-import imgui
-from gui_utils import imgui_utils
-
-#----------------------------------------------------------------------------
-
-class RenderDepthSampleWidget:
- def __init__(self, viz):
- self.viz = viz
- self.depth_mult = 2
- self.depth_importance_mult = 2
- self.render_types = [.5, 1, 2, 4]
- self.labels = ['0.5x', '1x', '2x', '4x']
-
- @imgui_utils.scoped_by_object_id
- def __call__(self, show=True):
- viz = self.viz
-
- if show:
- imgui.text('Render Type')
- imgui.same_line(viz.label_w)
- with imgui_utils.item_width(viz.font_size * 4):
- _clicked, self.depth_mult = imgui.combo('Depth Sample Multiplier', self.depth_mult, self.labels)
- imgui.same_line(viz.label_w + viz.font_size * 16 + viz.spacing * 2)
- with imgui_utils.item_width(viz.font_size * 4):
- _clicked, self.depth_importance_mult = imgui.combo('Depth Sample Importance Multiplier', self.depth_importance_mult, self.labels)
-
- viz.args.depth_mult = self.render_types[self.depth_mult]
- viz.args.depth_importance_mult = self.render_types[self.depth_importance_mult]
-
-#----------------------------------------------------------------------------
diff --git a/spaces/gwang-kim/DATID-3D/pose_estimation/models/arcface_torch/configs/ms1mv3_mbf.py b/spaces/gwang-kim/DATID-3D/pose_estimation/models/arcface_torch/configs/ms1mv3_mbf.py
deleted file mode 100644
index b8a00d6305eeda5a94788017afc1cda0d4a4cd2a..0000000000000000000000000000000000000000
--- a/spaces/gwang-kim/DATID-3D/pose_estimation/models/arcface_torch/configs/ms1mv3_mbf.py
+++ /dev/null
@@ -1,26 +0,0 @@
-from easydict import EasyDict as edict
-
-# make training faster
-# our RAM is 256G
-# mount -t tmpfs -o size=140G tmpfs /train_tmp
-
-config = edict()
-config.loss = "arcface"
-config.network = "mbf"
-config.resume = False
-config.output = None
-config.embedding_size = 512
-config.sample_rate = 1.0
-config.fp16 = True
-config.momentum = 0.9
-config.weight_decay = 2e-4
-config.batch_size = 128
-config.lr = 0.1 # batch size is 512
-
-config.rec = "/train_tmp/ms1m-retinaface-t1"
-config.num_classes = 93431
-config.num_image = 5179510
-config.num_epoch = 30
-config.warmup_epoch = -1
-config.decay_epoch = [10, 20, 25]
-config.val_targets = ["lfw", "cfp_fp", "agedb_30"]
diff --git a/spaces/gwang-kim/DATID-3D/pose_estimation/models/arcface_torch/utils/__init__.py b/spaces/gwang-kim/DATID-3D/pose_estimation/models/arcface_torch/utils/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/h2oai/wave-tour/examples/table_column_alignment.py b/spaces/h2oai/wave-tour/examples/table_column_alignment.py
deleted file mode 100644
index 1198d52dab765149a23ead803efdb978c26188ae..0000000000000000000000000000000000000000
--- a/spaces/h2oai/wave-tour/examples/table_column_alignment.py
+++ /dev/null
@@ -1,58 +0,0 @@
-# Table / Column Alignment
-# Allow table values to be aligned per column as left (default), right or center
-# #table
-# ---
-from faker import Faker
-
-from h2o_wave import main, app, Q, ui
-
-fake = Faker()
-
-_id = 0
-
-# Create some names
-class User:
- def __init__(self, first_name: str, last_name: str, username: str, company: str):
- global _id
- _id += 1
- self.id = f'I{_id}'
- self.first_name = first_name
- self.last_name = last_name
- self.username = username
- self.company = company
-
-users = [
- User(
- first_name=fake.first_name(),
- last_name=fake.last_name(),
- username=fake.user_name(),
- company=fake.company()
-    ) for _ in range(100)
-]
-
-# Create columns for our user table.
-columns = [
- ui.table_column(name='first_name', label='First Name', align='center'),
- ui.table_column(name='last_name', label='Last Name', align='right'),
- ui.table_column(name='username', label='Username', align='left'),
- ui.table_column(name='company', label='Company'),
-]
-
-
-@app('/demo')
-async def serve(q: Q):
- q.page['form'] = ui.form_card(box='1 1 -1 10', items=[
- ui.table(
- name='users',
- columns=columns,
- rows=[ui.table_row(
- name=user.id,
- cells=[user.first_name, user.last_name, user.username, user.company]
- ) for user in users],
- downloadable=True,
- resettable=True,
- height='800px'
- )
- ])
-
- await q.page.save()
diff --git a/spaces/h2oai/wave-tour/examples/table_pagination_wavedb.py b/spaces/h2oai/wave-tour/examples/table_pagination_wavedb.py
deleted file mode 100644
index 5416a1099ceb9385f9d53cc065307a8739f83875..0000000000000000000000000000000000000000
--- a/spaces/h2oai/wave-tour/examples/table_pagination_wavedb.py
+++ /dev/null
@@ -1,153 +0,0 @@
-import os
-from typing import List
-from h2o_wave import main, app, Q, ui, connect
-import csv
-
-
-rows_per_page = 10
-
-async def get_rows(q: Q, table = 'issues', columns = ['text', 'status'], count_only = False) -> List:
- sql_query = f'SELECT {"count(*)" if count_only else ", ".join(columns)} FROM {table}'
-
- substitution_values = []
- # Filter out all rows that do not contain searched string.
- if q.client.search:
- search_val = q.client.search['value'].lower()
- cols = q.client.search['cols']
- like_statements = []
- for col in cols:
- if col in columns:
- substitution_values.append('%' + search_val + '%')
- like_statements.append(f'{col} LIKE ?')
-
- sql_query += ' WHERE (' + ' OR '.join(like_statements) + ')'
-
- # Filter out rows that do not contain filtered column value.
- if q.client.filters:
- filter_queries = []
- for col, filters in q.client.filters.items():
- if col in columns:
- like_statements = []
- for f in filters:
- substitution_values.append(f'%{f}%')
- like_statements.append(f'{col} LIKE ?')
- if like_statements:
- filter_queries.append(' OR '.join(like_statements))
- if filter_queries:
- sql_query += ' AND ' if 'WHERE' in sql_query else ' WHERE '
- sql_query += ' AND '.join(filter_queries)
-
- # Sort by multiple columns.
- if q.client.sort:
- # NOTE: This example sorts alphabetically since only "text" col is sortable.
- sort_statements = []
- for col, asc in q.client.sort.items():
- if col in columns:
- sort_statements.append(f'{col} {"ASC" if asc else "DESC" }')
- if sort_statements:
- sql_query += ' ORDER BY ' + ', '.join(sort_statements)
-
- if not count_only:
- sql_query += f' LIMIT {rows_per_page} OFFSET {q.client.page_offset or 0} '
-
- results, err = await q.app.db.exec(sql_query, *substitution_values)
- if err:
- raise RuntimeError(f'Failed querying the table data: {err}')
-
- return results
-
-
-# NOTICE: You need a running instance of https://wave.h2o.ai/docs/wavedb for this app to run.
-@app('/demo')
-async def serve(q: Q):
- # Run once per app lifetime.
- if not q.app.initialized:
- # Create a database connection.
- connection = connect()
- q.app.db = connection['demo_db']
- # Check if there is any data in the database.
- _, err = await q.app.db.exec('CREATE TABLE IF NOT EXISTS issues (text TEXT, status TEXT)')
- if err:
- raise RuntimeError(f'Failed setting up database: {err}')
- results, err = await q.app.db.exec('SELECT COUNT(*) FROM issues')
- if err:
- raise RuntimeError(f'Failed querying the database: {err}')
- # Populate DB data if necessary.
- if results and results[0] and results[0][0] != 100:
- insert_statements = []
-            for i in range(1, 101):
-                insert_statements.append(f'INSERT INTO issues (text, status) VALUES ("Text {i}", "{"Closed" if i % 2 == 0 else "Open"}")')
- _, err = await q.app.db.exec_many(*insert_statements)
- if err:
- raise RuntimeError(f'Failed querying the database: {err}')
- q.app.initialized = True
-
- # Run once per browser tab lifetime.
- if not q.client.initialized:
- q.page['meta'] = ui.meta_card(box='')
- total_rows, err = await q.app.db.exec('SELECT COUNT(*) FROM issues')
- if err:
- raise RuntimeError(f'Failed querying the database: {err}')
- rows = await get_rows(q)
- q.page['form'] = ui.form_card(box='1 1 -1 -1', items=[
- ui.table(
- name='table',
- columns=[
- ui.table_column(name='text', label='Text', sortable=True, searchable=True, link=False),
- ui.table_column(name='status', label='Status', filterable=True, filters=['Open', 'Closed']),
- ],
- rows=[ui.table_row(r[0], [r[0], r[1]]) for r in rows],
- resettable=True,
- downloadable=True,
- pagination=ui.table_pagination(total_rows=total_rows[0][0], rows_per_page=rows_per_page),
- # Make sure to register the necessary events for the feature you want to support, e.g. sorting.
- # All the registered events have to be handled by the developer.
- # `page_change` event is required to be handled for pagination to work.
- events=['sort', 'filter', 'search', 'page_change', 'download', 'reset']
- )
- ])
- q.client.initialized = True
-
-    # Check whether the user triggered any table action and save it to client state so that
-    # multiple actions can be combined on the same data, e.g. sorting the already filtered rows.
- if q.events.table:
- table = q.page['form'].table
- if q.events.table.sort:
- q.client.sort = q.events.table.sort
- q.client.page_offset = 0
- if q.events.table.filter:
- q.client.filters = q.events.table.filter
- q.client.page_offset = 0
- if q.events.table.search is not None:
- q.client.search = q.events.table.search
- q.client.page_offset = 0
- if q.events.table.page_change:
- q.client.page_offset = q.events.table.page_change.get('offset', 0)
- if q.events.table.reset:
- q.client.search = None
- q.client.sort = None
- q.client.filters = None
- q.client.page_offset = 0
- total_filtered_rows = await get_rows(q, count_only=True)
- table.pagination = ui.table_pagination(total_filtered_rows[0][0], rows_per_page)
-
- rows = await get_rows(q)
- table.rows = [ui.table_row(r[0], [r[0], r[1]]) for r in rows]
-
- # Update table pagination according to the new row count.
- if q.client.search is not None or q.client.filters:
- total_filtered_rows = await get_rows(q, count_only=True)
- table.pagination = ui.table_pagination(total_filtered_rows[0][0], rows_per_page)
-
- if q.events.table.download:
- # For multi-user apps, the tmp file name should be unique for each user, not hardcoded.
- with open('data_download.csv', 'w') as csvfile:
- csv_writer = csv.writer(csvfile, delimiter=',')
- for r in rows:
-                    csv_writer.writerow([r[0], r[1]])
- download_url, = await q.site.upload(['data_download.csv'])
- # Clean up the file after upload.
- os.remove('data_download.csv')
- q.page['meta'].script = ui.inline_script(f'window.open("{download_url}")')
-
- await q.page.save()
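
As the comment in the download branch above notes, the hardcoded `data_download.csv` is unsafe once several users export at the same time. A rough sketch of one way to make the temporary file unique per request before uploading it; the `download_rows` helper and the `uuid`-based name are illustrative assumptions, not part of the original example:

```python
import csv
import os
import uuid

from h2o_wave import Q, ui


async def download_rows(q: Q, rows) -> None:
    # Unique per-request file name so concurrent users cannot overwrite each other's export.
    path = f'data_download_{uuid.uuid4().hex}.csv'
    try:
        with open(path, 'w', newline='') as csvfile:
            writer = csv.writer(csvfile, delimiter=',')
            for r in rows:
                writer.writerow([r[0], r[1]])
        download_url, = await q.site.upload([path])
        q.page['meta'].script = ui.inline_script(f'window.open("{download_url}")')
    finally:
        # Clean up the temporary file whether or not the upload succeeded.
        os.remove(path)
```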
diff --git a/spaces/hekbobo/bingo/src/lib/utils.ts b/spaces/hekbobo/bingo/src/lib/utils.ts
deleted file mode 100644
index 8de2eba94bf0bc93579d4f489e8b810dbf6ce92a..0000000000000000000000000000000000000000
--- a/spaces/hekbobo/bingo/src/lib/utils.ts
+++ /dev/null
@@ -1,159 +0,0 @@
-import { clsx, type ClassValue } from 'clsx'
-import { customAlphabet } from 'nanoid'
-import { twMerge } from 'tailwind-merge'
-// @ts-ignore
-import randomip from 'random-ip'
-import cidr from './cidr.json'
-
-export function cn(...inputs: ClassValue[]) {
- return twMerge(clsx(inputs))
-}
-
-export const nanoid = customAlphabet(
- '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz',
- 7
-) // 7-character random string
-
-export function createChunkDecoder() {
- const decoder = new TextDecoder()
- return function (chunk: Uint8Array | undefined): string {
- if (!chunk) return ''
- return decoder.decode(chunk, { stream: true })
- }
-}
-
-export function random (start: number, end: number) {
- return start + Math.floor(Math.random() * (end - start))
-}
-
-export function randomIP() {
- // return `104.${random(0, 21)}.${random(0, 127)}.${random(1, 255)}`
- const [ip, range] = cidr.at(random(0, cidr.length))?.split('/')!
- return randomip(ip, range)
-}
-
-export const defaultUID = 'xxx'
-
-export function parseHeadersFromCurl(content: string) {
- const re = /-H '([^:]+):\s*([^']+)/mg
- const headers: HeadersInit = {}
-  content = content.replaceAll('-H "', '-H \'').replaceAll('" ^', '\'\\').replaceAll('^\\^"', '"') // convert a cmd-style curl command to bash-style curl
- content.replace(re, (_: string, key: string, value: string) => {
- headers[key] = value
- return ''
- })
- return headers
-}
-
-export const ChunkKeys = ['BING_HEADER', 'BING_HEADER1', 'BING_HEADER2']
-export function encodeHeadersToCookie(content: string) {
- const base64Content = btoa(content)
- const contentChunks = base64Content.match(/.{1,4000}/g) || []
- return ChunkKeys.map((key, index) => `${key}=${contentChunks[index] ?? ''}`)
-}
-
-export function extraCurlFromCookie(cookies: Partial<{ [key: string]: string }>) {
- let base64Content = ''
- ChunkKeys.forEach((key) => {
- base64Content += (cookies[key] || '')
- })
- try {
- return atob(base64Content)
- } catch(e) {
- return ''
- }
-}
-
-export function extraHeadersFromCookie(cookies: Partial<{ [key: string]: string }>) {
- return parseHeadersFromCurl(extraCurlFromCookie(cookies))
-}
-
-export function formatDate(input: string | number | Date): string {
- const date = new Date(input)
- return date.toLocaleDateString('en-US', {
- month: 'long',
- day: 'numeric',
- year: 'numeric'
- })
-}
-
-export function parseCookie(cookie: string, cookieName: string) {
- const targetCookie = new RegExp(`(?:[; ]|^)${cookieName}=([^;]*)`).test(cookie) ? RegExp.$1 : cookie
- return targetCookie ? decodeURIComponent(targetCookie).trim() : cookie.indexOf('=') === -1 ? cookie.trim() : ''
-}
-
-export function setCookie(key: string, value: string) {
- const maxAge = value ? 86400 * 30 : 0
- document.cookie = `${key}=${value || ''}; Path=/; Max-Age=${maxAge}; SameSite=None; Secure`
-}
-
-export function getCookie(cookieName: string) {
- const re = new RegExp(`(?:[; ]|^)${cookieName}=([^;]*)`)
- return re.test(document.cookie) ? RegExp.$1 : ''
-}
-
-export function parseCookies(cookie: string, cookieNames: string[]) {
- const cookies: { [key: string]: string } = {}
- cookieNames.forEach(cookieName => {
- cookies[cookieName] = parseCookie(cookie, cookieName)
- })
- return cookies
-}
-
-export const DEFAULT_UA = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36 Edg/115.0.0.0'
-
-export function parseUA(ua?: string, default_ua = DEFAULT_UA) {
- return / EDGE?/i.test(decodeURIComponent(ua || '')) ? decodeURIComponent(ua!.trim()) : default_ua
-}
-
-export function mockUser(cookies: Partial<{ [key: string]: string }>) {
- const {
- BING_UA = process.env.BING_UA,
- BING_IP,
- _U = defaultUID,
- } = cookies
- const ua = parseUA(BING_UA)
-
- return {
- 'x-forwarded-for': BING_IP!,
- 'Accept-Encoding': 'gzip, deflate, br',
- 'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6',
- 'User-Agent': ua!,
- 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.3 OS/Win32',
- cookie: `_U=${_U}` || '',
- }
-}
-
-export function createHeaders(cookies: Partial<{ [key: string]: string }>, type?: string) {
- let {
- BING_HEADER = process.env.BING_HEADER,
- BING_IP,
- IMAGE_ONLY = process.env.IMAGE_ONLY ?? '1',
- } = cookies
- const imageOnly = /^(1|true|yes)$/.test(String(IMAGE_ONLY))
- if (BING_HEADER) {
- if (
- (imageOnly && type === 'image')
- || !imageOnly
- ) {
- const headers = extraHeadersFromCookie({
- BING_HEADER,
- ...cookies,
- }) || {}
- headers['x-forward-for'] = BING_IP!
- return headers
- }
- }
- return mockUser(cookies)
-}
-
-export class WatchDog {
- private tid = 0
- watch(fn: Function, timeout = 2000) {
- clearTimeout(this.tid)
- this.tid = setTimeout(fn, timeout + Math.random() * 1000)
- }
- reset() {
- clearTimeout(this.tid)
- }
-}
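
`encodeHeadersToCookie` and `extraCurlFromCookie` above work around per-cookie size limits: the curl command is base64-encoded, spread across the three `ChunkKeys` cookies in roughly 4000-character chunks, and reassembled and decoded on the way back. A rough round-trip sketch of that chunking scheme, written in Python purely for illustration; the key names and chunk size mirror the TypeScript above:

```python
import base64

CHUNK_KEYS = ['BING_HEADER', 'BING_HEADER1', 'BING_HEADER2']
CHUNK_SIZE = 4000


def encode_to_cookies(content: str) -> dict:
    # Base64-encode the whole string, then split it into fixed-size chunks,
    # one per cookie key (missing chunks become empty strings).
    b64 = base64.b64encode(content.encode()).decode()
    chunks = [b64[i:i + CHUNK_SIZE] for i in range(0, len(b64), CHUNK_SIZE)]
    return {key: (chunks[i] if i < len(chunks) else '') for i, key in enumerate(CHUNK_KEYS)}


def decode_from_cookies(cookies: dict) -> str:
    # Concatenate the chunks in key order and decode; fall back to '' on bad
    # input, mirroring the try/catch around atob() above.
    b64 = ''.join(cookies.get(key, '') for key in CHUNK_KEYS)
    try:
        return base64.b64decode(b64).decode()
    except Exception:
        return ''


original = 'curl https://example.com -H "a: b"'
assert decode_from_cookies(encode_to_cookies(original)) == original
```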
diff --git a/spaces/hf4all/chatbot-ui-bing/README.md b/spaces/hf4all/chatbot-ui-bing/README.md
deleted file mode 100644
index 4911e50571638f1d87a2c9a9936be6f84100cd0a..0000000000000000000000000000000000000000
--- a/spaces/hf4all/chatbot-ui-bing/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Chatbot UI Bing
-emoji: 💻
-colorFrom: blue
-colorTo: yellow
-sdk: docker
-pinned: false
-license: mit
-app_port: 3000
-duplicated_from: dongsiqie/gpt
----
-Source of free keys: https://github.com/pengzhile/pandora/issues/837
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
\ No newline at end of file
diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/create_plots_new/compute_significance.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/create_plots_new/compute_significance.py
deleted file mode 100644
index 87a3c642046e4297654aa96a46e0947866b311fb..0000000000000000000000000000000000000000
--- a/spaces/ho11laqe/nnUNet_calvingfront_detection/create_plots_new/compute_significance.py
+++ /dev/null
@@ -1,94 +0,0 @@
-import numpy as np
-import pandas as pd
-import plotly.express as px
-import plotly.graph_objects as go
-import plotly.io as pio
-pio.kaleido.scope.mathjax = None
-import os
-import json
-
-
-if __name__ == '__main__':
-
- experiments = ['Task501_Glacier_front',
- 'Task502_Glacier_zone',
- 'Task503_Glacier_mtl_early',
- 'Task503_Glacier_mtl_late',
- 'Task505_Glacier_mtl_boundary',
- 'Task500_Glacier_zonefronts']
- data_dir = '/home/ho11laqe/Desktop/nnUNet_results/Final_Eval/'
-
- zone_mean = {}
- front_mean = {}
- for experiment in experiments:
- print(experiment)
- zone_mean_exp = []
- front_mean_exp = []
- # nofront[experiment] = {'Front': [], 'Zone': []}
- for fold in range(5):
- # load json file with results
- results_json_path = os.path.join(data_dir, experiment, 'fold_' + str(fold), 'pngs',
- 'eval_results.json')
- if not os.path.exists(results_json_path):
- results_json_path = os.path.join(data_dir, experiment, 'fold_' + str(fold), 'eval_results.json')
-
- with open(results_json_path, 'r') as f:
- result = json.load(f)
-
- if 'Front_Delineation' in result.keys():
-
- front_mean_exp.append(result['Front_Delineation']['Result_all']['mean'])
- else:
- front_mean_exp.append(0)
-
- if 'Zone_Delineation' in result.keys():
- zone_mean_exp.append(result['Zone_Delineation']['Result_all']['mean'])
- else:
- zone_mean_exp.append(0)
-
- print(np.mean(zone_mean_exp), np.std(zone_mean_exp))
- print(np.mean(front_mean_exp), np.std(front_mean_exp))
- zone_mean[experiment] = zone_mean_exp
- front_mean[experiment] = front_mean_exp
-
- for exp1 in experiments:
- for exp2 in experiments:
- # FRONT
- mean1 = np.mean(front_mean[exp1])
-            var1 = np.var(front_mean[exp1])
- mean2 = np.mean(front_mean[exp2])
- var2 = np.var(front_mean[exp2])
-
- T_front = abs(mean1 - mean2) / np.sqrt((var1 / 5) + (var2 / 5))
-            print(exp1 + '<>' + exp2)
-            print('Tfront:' + str(T_front))
-
- # Zone
- mean1 = np.mean(zone_mean[exp1])
- var1 = np.var(zone_mean[exp1])
- mean2 = np.mean(zone_mean[exp2])
- var2 = np.var(zone_mean[exp2])
-
- T_zone = abs(mean1 - mean2) / np.sqrt((var1 / 5) + (var2 / 5))
- print('Tzone:' + str(T_zone))
- print('')
- """
- box_width = 0.8
- fig = px.box(None, points="all", template="plotly_white", width=600, height=500)
-
- fig.add_trace(go.Box(y=zone_mean['Task502_Glacier_zone'], name='Zone STL', width=box_width,
- line_color='black', fillcolor='LightBlue ', pointpos=0, boxpoints='all', boxmean=True))
- fig.add_trace(go.Box(y=zone_mean['Task503_Glacier_mtl_early'], name='Early Zone MTL', width=box_width,
- line_color='black', fillcolor='YellowGreen', pointpos=0, boxpoints='all',
- boxmean=True, ))
- fig.add_trace(go.Box(y=zone_mean['Task503_Glacier_mtl_late'], name='Late Zone MTL', width=box_width,
- line_color='black', fillcolor='#e1e400', pointpos=0, boxpoints='all', boxmean=True))
- fig.add_trace(
- go.Box(y=zone_mean['Task505_Glacier_mtl_boundary'], name='Boundary Zone MTL', width=box_width,
- line_color='black', fillcolor='gold', pointpos=0, boxpoints='all', boxmean=True))
-
- fig.update_layout(showlegend=False, font=dict(family="Times New Roman", size=18))
- fig.update_yaxes(title='Front mean')
- # fig.show()
- fig.write_image('Front mean' + ".pdf", format='pdf')
- """
\ No newline at end of file
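
The printed `T_front`/`T_zone` values are a hand-rolled Welch-style t statistic over the five folds, with `np.var` left at its default `ddof=0` (population variance). As a cross-check, the sketch below compares that formula with `scipy.stats.ttest_ind(..., equal_var=False)`, which uses the sample variance (`ddof=1`), so the two numbers will differ slightly; the fold values are made up for illustration:

```python
import numpy as np
from scipy import stats

# Five per-fold means for two hypothetical experiments.
front_a = np.array([72.1, 68.4, 75.0, 70.3, 69.8])
front_b = np.array([80.2, 77.5, 82.1, 79.0, 81.4])

# Welch's t-test (unequal variances), as SciPy computes it.
t_stat, p_value = stats.ttest_ind(front_a, front_b, equal_var=False)
print('Welch t:', abs(t_stat), 'p:', p_value)

# The script's formula, using the population variance (np.var default, ddof=0).
t_script = abs(front_a.mean() - front_b.mean()) / np.sqrt(front_a.var() / 5 + front_b.var() / 5)
print('Hand-rolled T:', t_script)
```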
diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/optimizer_and_lr/nnUNetTrainerV2_SGD_ReduceOnPlateau.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/optimizer_and_lr/nnUNetTrainerV2_SGD_ReduceOnPlateau.py
deleted file mode 100644
index d89a7458776c93889d3ad9bd9f3e1fb5b9804a54..0000000000000000000000000000000000000000
--- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/optimizer_and_lr/nnUNetTrainerV2_SGD_ReduceOnPlateau.py
+++ /dev/null
@@ -1,50 +0,0 @@
-# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-import torch
-from nnunet.training.network_training.nnUNetTrainer import nnUNetTrainer
-from nnunet.training.network_training.nnUNetTrainerV2 import nnUNetTrainerV2
-from torch.optim import lr_scheduler
-
-
-class nnUNetTrainerV2_SGD_ReduceOnPlateau(nnUNetTrainerV2):
- def __init__(self, plans_file, fold, output_folder=None, dataset_directory=None, batch_dice=True, stage=None,
- unpack_data=True, deterministic=True, fp16=False):
- super().__init__(plans_file, fold, output_folder, dataset_directory, batch_dice, stage, unpack_data,
- deterministic, fp16)
-
- def initialize_optimizer_and_scheduler(self):
- self.optimizer = torch.optim.SGD(self.network.parameters(), self.initial_lr, weight_decay=self.weight_decay,
- momentum=0.99, nesterov=True)
- self.lr_scheduler = lr_scheduler.ReduceLROnPlateau(self.optimizer, mode='min', factor=0.2,
- patience=self.lr_scheduler_patience,
- verbose=True, threshold=self.lr_scheduler_eps,
- threshold_mode="abs")
-
- def maybe_update_lr(self, epoch=None):
- # maybe update learning rate
- if self.lr_scheduler is not None:
- assert isinstance(self.lr_scheduler, (lr_scheduler.ReduceLROnPlateau, lr_scheduler._LRScheduler))
-
- if isinstance(self.lr_scheduler, lr_scheduler.ReduceLROnPlateau):
- # lr scheduler is updated with moving average val loss. should be more robust
- if self.epoch > 0: # otherwise self.train_loss_MA is None
- self.lr_scheduler.step(self.train_loss_MA)
- else:
- self.lr_scheduler.step(self.epoch + 1)
- self.print_to_log_file("lr is now (scheduler) %s" % str(self.optimizer.param_groups[0]['lr']))
-
- def on_epoch_end(self):
- return nnUNetTrainer.on_epoch_end(self)
diff --git a/spaces/housexu123/bingo-2.0/src/components/ui/alert-dialog.tsx b/spaces/housexu123/bingo-2.0/src/components/ui/alert-dialog.tsx
deleted file mode 100644
index 17fec4d16510328deacc1416569173c97761ef72..0000000000000000000000000000000000000000
--- a/spaces/housexu123/bingo-2.0/src/components/ui/alert-dialog.tsx
+++ /dev/null
@@ -1,150 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import * as AlertDialogPrimitive from '@radix-ui/react-alert-dialog'
-
-import { cn } from '@/lib/utils'
-import { buttonVariants } from '@/components/ui/button'
-
-const AlertDialog = AlertDialogPrimitive.Root
-
-const AlertDialogTrigger = AlertDialogPrimitive.Trigger
-
-const AlertDialogPortal = ({
- className,
- children,
- ...props
-}: AlertDialogPrimitive.AlertDialogPortalProps) => (
-
-
- {children}
-
-
-)
-AlertDialogPortal.displayName = AlertDialogPrimitive.Portal.displayName
-
-const AlertDialogOverlay = React.forwardRef<
-  React.ElementRef<typeof AlertDialogPrimitive.Overlay>,
-  React.ComponentPropsWithoutRef<typeof AlertDialogPrimitive.Overlay>
->(({ className, children, ...props }, ref) => (
-
-))
-AlertDialogOverlay.displayName = AlertDialogPrimitive.Overlay.displayName
-
-const AlertDialogContent = React.forwardRef<
-  React.ElementRef<typeof AlertDialogPrimitive.Content>,
-  React.ComponentPropsWithoutRef<typeof AlertDialogPrimitive.Content>
->(({ className, ...props }, ref) => (
-
-
-
-
-))
-AlertDialogContent.displayName = AlertDialogPrimitive.Content.displayName
-
-const AlertDialogHeader = ({
- className,
- ...props
-}: React.HTMLAttributes) => (
-
-)
-AlertDialogHeader.displayName = 'AlertDialogHeader'
-
-const AlertDialogFooter = ({
- className,
- ...props
-}: React.HTMLAttributes) => (
-
-)
-AlertDialogFooter.displayName = 'AlertDialogFooter'
-
-const AlertDialogTitle = React.forwardRef<
-  React.ElementRef<typeof AlertDialogPrimitive.Title>,
-  React.ComponentPropsWithoutRef<typeof AlertDialogPrimitive.Title>
->(({ className, ...props }, ref) => (
-
-))
-AlertDialogTitle.displayName = AlertDialogPrimitive.Title.displayName
-
-const AlertDialogDescription = React.forwardRef<
-  React.ElementRef<typeof AlertDialogPrimitive.Description>,
-  React.ComponentPropsWithoutRef<typeof AlertDialogPrimitive.Description>
->(({ className, ...props }, ref) => (
-
-))
-AlertDialogDescription.displayName =
- AlertDialogPrimitive.Description.displayName
-
-const AlertDialogAction = React.forwardRef<
-  React.ElementRef<typeof AlertDialogPrimitive.Action>,
-  React.ComponentPropsWithoutRef<typeof AlertDialogPrimitive.Action>
->(({ className, ...props }, ref) => (
-
-))
-AlertDialogAction.displayName = AlertDialogPrimitive.Action.displayName
-
-const AlertDialogCancel = React.forwardRef<
-  React.ElementRef<typeof AlertDialogPrimitive.Cancel>,
-  React.ComponentPropsWithoutRef<typeof AlertDialogPrimitive.Cancel>
->(({ className, ...props }, ref) => (
-
-))
-AlertDialogCancel.displayName = AlertDialogPrimitive.Cancel.displayName
-
-export {
- AlertDialog,
- AlertDialogTrigger,
- AlertDialogContent,
- AlertDialogHeader,
- AlertDialogFooter,
- AlertDialogTitle,
- AlertDialogDescription,
- AlertDialogAction,
- AlertDialogCancel
-}
diff --git a/spaces/hudsonhayes/Multi-Doc-Virtual-Chatbot/style.css b/spaces/hudsonhayes/Multi-Doc-Virtual-Chatbot/style.css
deleted file mode 100644
index 4e98709eb5da8685ebd57c846461df33b664d2e7..0000000000000000000000000000000000000000
--- a/spaces/hudsonhayes/Multi-Doc-Virtual-Chatbot/style.css
+++ /dev/null
@@ -1,47 +0,0 @@
-#col-container {
- max-width: 1000px;
- margin-left: auto;
- margin-right: auto;
-}
-.heightfit{
- height:120px;
-}
-/* gradio-app{
- background:url("file=bg.png") !important;
-} */
-gradio-app{
- background: rgb(153,0,255);
- background-image: radial-gradient(circle, rgba(153,0,255,1) 0%, rgba(9,15,121,1) 96%, rgba(2,0,36,1) 100%) !important;
- height: 100%;
- width: 100%;
-}
-
-#row-flex {
- display: flex;
- align-items: center;
- justify-content: center;
-}
-.leftimage, .rightimage{
- float:left;
-}
-.leftimage{
- padding-top:27px;
- margin-left:210px;
-}
-.rightimage{
- margin-right:210px;
- margin-top:15px;
-}
-a,
-a:hover,
-a:visited {
- text-decoration-line: underline;
- font-weight: 600;
- color: #1f2937 !important;
-}
-
-.dark a,
-.dark a:hover,
-.dark a:visited {
- color: #f3f4f6 !important;
-}
diff --git a/spaces/huggingface-projects/wordalle/static/_app/immutable/chunks/fifty-f65036e1.js b/spaces/huggingface-projects/wordalle/static/_app/immutable/chunks/fifty-f65036e1.js
deleted file mode 100644
index b599f3852b8918bb36ccda444a086d8b77daf515..0000000000000000000000000000000000000000
--- a/spaces/huggingface-projects/wordalle/static/_app/immutable/chunks/fifty-f65036e1.js
+++ /dev/null
@@ -1 +0,0 @@
-import{S as j,i as z,s as I,N as l,O as r,a as h,d as a,b as t,f as q,g as K,J as s,E as J}from"./index-86f4d6c3.js";function P(N){let e,n,i,d,u,_,M,m,v,y,x,p,w,f,E,F,g,V,Z,U,A,D,o,H,k;return{c(){e=l("svg"),n=l("path"),i=l("mask"),d=l("path"),u=l("g"),_=l("path"),M=l("path"),m=l("path"),v=l("path"),y=l("path"),x=l("path"),p=l("mask"),w=l("path"),f=l("g"),E=l("path"),F=l("path"),g=l("path"),V=l("path"),Z=l("path"),U=l("path"),A=l("path"),D=l("defs"),o=l("linearGradient"),H=l("stop"),k=l("stop"),this.h()},l(L){e=r(L,"svg",{xmlns:!0,fill:!0,viewBox:!0,width:!0,height:!0,class:!0});var c=h(e);n=r(c,"path",{fill:!0,d:!0}),h(n).forEach(a),i=r(c,"mask",{id:!0,width:!0,height:!0,x:!0,y:!0,maskUnits:!0,style:!0});var G=h(i);d=r(G,"path",{fill:!0,d:!0}),h(d).forEach(a),G.forEach(a),u=r(c,"g",{mask:!0});var S=h(u);_=r(S,"path",{fill:!0,d:!0}),h(_).forEach(a),M=r(S,"path",{fill:!0,d:!0}),h(M).forEach(a),m=r(S,"path",{fill:!0,d:!0,opacity:!0}),h(m).forEach(a),v=r(S,"path",{fill:!0,d:!0,opacity:!0}),h(v).forEach(a),y=r(S,"path",{fill:!0,d:!0,opacity:!0}),h(y).forEach(a),S.forEach(a),x=r(c,"path",{fill:!0,d:!0}),h(x).forEach(a),p=r(c,"mask",{id:!0,width:!0,height:!0,x:!0,y:!0,maskUnits:!0,style:!0});var O=h(p);w=r(O,"path",{fill:!0,d:!0}),h(w).forEach(a),O.forEach(a),f=r(c,"g",{mask:!0});var C=h(f);E=r(C,"path",{fill:!0,d:!0,opacity:!0}),h(E).forEach(a),F=r(C,"path",{fill:!0,d:!0,opacity:!0}),h(F).forEach(a),g=r(C,"path",{fill:!0,d:!0,opacity:!0}),h(g).forEach(a),V=r(C,"path",{fill:!0,d:!0}),h(V).forEach(a),Z=r(C,"path",{fill:!0,d:!0,opacity:!0}),h(Z).forEach(a),C.forEach(a),U=r(c,"path",{fill:!0,d:!0}),h(U).forEach(a),A=r(c,"path",{fill:!0,d:!0}),h(A).forEach(a),D=r(c,"defs",{});var b=h(D);o=r(b,"linearGradient",{id:!0,x1:!0,x2:!0,y1:!0,y2:!0,gradientUnits:!0});var B=h(o);H=r(B,"stop",{"stop-color":!0}),h(H).forEach(a),k=r(B,"stop",{offset:!0,"stop-color":!0,"stop-opacity":!0}),h(k).forEach(a),B.forEach(a),b.forEach(a),c.forEach(a),this.h()},h(){t(n,"fill","#3F7B73"),t(n,"d","M372.7 107.8 201.9 7.6a25 25 0 0 0-25.7.2L12 107.7A25 25 0 0 0 0 129v194.2a25 25 0 0 0 12.2 21.5l164.2 97.7a25 25 0 0 0 25.3.2l170.7-98A25 25 0 0 0 385 323V129.3a25 25 0 0 0-12.3-21.5Z"),t(d,"fill","#D3720A"),t(d,"d","M372.7 107.8 201.9 7.6a25 25 0 0 0-25.7.2L12 107.7A25 25 0 0 0 0 129v194.2a25 25 0 0 0 12.2 21.5l164.2 97.7a25 25 0 0 0 25.3.2l170.7-98A25 25 0 0 0 385 323V129.3a25 25 0 0 0-12.3-21.5Z"),t(i,"id","a"),t(i,"width","385"),t(i,"height","443"),t(i,"x","0"),t(i,"y","4"),t(i,"maskUnits","userSpaceOnUse"),q(i,"mask-type","alpha"),t(_,"fill","#468A7D"),t(_,"d","M177.5 322c-25 59-49.7 120.7-138.5 83s-182.7-151-157.6-210c25.1-59.1 116.3-93.2 205.1-55.5s116 123.4 91 182.5Z"),t(M,"fill","#184F4F"),t(M,"d","M9 328.5c-9-17-15-206-15-206L-14.5 357 190 486l202-133V126.5s-7.5 187.5-15 202c-3.5 6.8-39.3 28.2-78 52-43.2 26.6-90.5 55.5-109 55.5-18.2 0-63-26.6-104-52.5-37.9-23.9-72.7-46.8-77-55Z"),t(m,"fill","#F5FFFF"),t(m,"d","M166 379h48c-9.3 31-9.3 47.8 0 77h-48c8.3-30 7.2-47 0-77Zm165-78.8 30-23.2c8.1 32.4 18.3 45.8 47.1 61l-30 23.2c-4.2-35-15.9-47.2-47.1-61Z"),t(m,"opacity",".3"),t(v,"fill","#C89435"),t(v,"d","M330 111.8 342.6 76c25.7 20.2 41.6 26 72.7 25.6l-12.7 35.8c-24.7-20.3-41-25-72.6-25.6Z"),t(v,"opacity",".3"),t(y,"fill","#F5FFFF"),t(y,"d","m22 273 29 24.7c-29.7 14.9-40.7 27.7-50 58.6l-29-24.7c30.4-13.8 40.9-27 50-58.6Z"),t(y,"opacity",".3"),t(u,"mask","url(#a)"),t(x,"fill","#80EBE2"),t(x,"d","m355.6 97.5-153.4-90a25 25 0 0 0-25.6.3L29 97.5a25 25 0 0 0-12 21.3v174.5a25 25 0 0 0 12.2 21.5l147.6 87.7a25 25 0 0 0 
25.2.2l153.4-88A25 25 0 0 0 368 293V119.1a25 25 0 0 0-12.4-21.6Z"),t(w,"fill","#C4D6D6"),t(w,"d","m355.6 97.5-153.4-90a25 25 0 0 0-25.6.3L29 97.5a25 25 0 0 0-12 21.3v184l-.1.4L4 326.5l12.5-17.7a1 1 0 0 1 1.3-.3L186 408l2.5 22.2c.1 1.2 1.9 1.2 2 0L193 408l171.7-98.5a1 1 0 0 1 1.3.3l13 19.2-10.9-22.3a1 1 0 0 1-.1-.4V119a25 25 0 0 0-12.4-21.6Z"),t(p,"id","b"),t(p,"width","375"),t(p,"height","428"),t(p,"x","4"),t(p,"y","4"),t(p,"maskUnits","userSpaceOnUse"),q(p,"mask-type","alpha"),t(E,"fill","url(#c)"),t(E,"d","M235.2 360.7c46.5-71.6-249.6-263-249.6-263L-40 122s211.6 114.9 176.5 163.3c-35 48.4-217-78.3-217-78.3L-52 325.7s240.7 106.5 287.3 35Z"),t(E,"opacity",".7"),t(F,"fill","#ECFFFD"),t(F,"d","M246-47h177L226 459H49L246-47Z"),t(F,"opacity",".6"),t(g,"fill","#ECFFFD"),t(g,"d","M441.5-49H457L278.5 421H263L441.5-49Z"),t(g,"opacity",".8"),t(V,"fill","#F5FFFF"),t(V,"d","M359.5 120c9.5 27 0 162.5 10 162.5S388 99 388 99L193-27 9 82.5s-6 202.5 3.5 203 8-162 16-175 120-111 161-92S350 93 359.5 120Z"),t(Z,"fill","#E9FFFF"),t(Z,"d","M21.5 296c-14-17-8-34-8-38l-24 59 195 126 201-111.5L372 246s-7.5 39.5-15.5 52.5-124 113-165 94-156-79.5-170-96.5Z"),t(Z,"opacity",".7"),t(f,"mask","url(#b)"),t(U,"fill","#34776E"),t(U,"d","M192.6 299.8a5 5 0 0 1-.5 2.1 34.5 34.5 0 0 1-30.6 19.1h-52.6a34.5 34.5 0 0 1-30.8-19c-.4-.6-.5-1.4-.5-2v-78a5 5 0 0 1 5-5h38.8a5 5 0 0 1 5 5v42a5 5 0 0 0 5 5h7.4a5 5 0 0 0 5-5v-59a5 5 0 0 0-5-5H82.6a5 5 0 0 1-5-5V53a5 5 0 0 1 5-5h97.7a5 5 0 0 1 5 5v37.9a5 5 0 0 1-5 5h-48.9a5 5 0 0 0-5 5v42a5 5 0 0 0 5 5h30c6.9 0 13.1 1.9 18.7 5.6a33 33 0 0 1 12 13.6c.3.6.5 1.3.5 2v130.7Zm126.4 0a4 4 0 0 1-.6 2.1c-2.8 5.5-6.9 10-12.1 13.6a32.3 32.3 0 0 1-18.7 5.5h-52.5a33 33 0 0 1-18.8-5.5c-5.2-3.6-9.2-8-11.9-13.6-.3-.6-.4-1.4-.4-2V65.4c0-.7.1-1.4.4-2 3.2-6.6 8-11.5 14.3-15a6 6 0 0 1 2.3-.5h80.7a6 6 0 0 1 2.3.5 34 34 0 0 1 14.5 14.9c.3.7.4 1.4.4 2.1v234.3ZM265 269a5 5 0 0 0 5-5V101a5 5 0 0 0-5-5h-7.3a5 5 0 0 0-5 5v163a5 5 0 0 0 5 5h7.3Z"),t(A,"fill","#fff"),t(A,"d","M192.6 287.8c0 .7-.2 1.4-.5 2a34.5 34.5 0 0 1-30.6 19.2h-52.6a34.5 34.5 0 0 1-30.8-19.1 4 4 0 0 1-.5-2.1V210a5 5 0 0 1 5-5h38.8a5 5 0 0 1 5 5v42a5 5 0 0 0 5 5h7.4a5 5 0 0 0 5-5v-59.1a5 5 0 0 0-5-5H82.6a5 5 0 0 1-5-5v-142a5 5 0 0 1 5-5h97.7a5 5 0 0 1 5 5v38a5 5 0 0 1-5 5h-48.9a5 5 0 0 0-5 5v42a5 5 0 0 0 5 5h30a33 33 0 0 1 18.7 5.5 33 33 0 0 1 12 13.6c.3.6.5 1.3.5 2v130.8Zm126.4-.1c0 .8-.2 1.5-.6 2.2-2.8 5.5-6.9 10-12.1 13.5a32.3 32.3 0 0 1-18.7 5.6h-52.5a34.4 34.4 0 0 1-30.7-19.1c-.3-.7-.4-1.4-.4-2.1V53.4c0-.7.1-1.4.4-2 3.2-6.5 8-11.5 14.3-14.9.7-.4 1.5-.5 2.3-.5h80.7c.8 0 1.6.1 2.3.5a34 34 0 0 1 14.5 14.8c.3.7.4 1.4.4 2.2v234.2ZM265 257a5 5 0 0 0 5-5V88.9a5 5 0 0 0-5-5h-7.3a5 5 0 0 0-5 5V252a5 5 0 0 0 5 5h7.3Z"),t(H,"stop-color","#3F866C"),t(k,"offset","1"),t(k,"stop-color","#48967A"),t(k,"stop-opacity","0"),t(o,"id","c"),t(o,"x1","242"),t(o,"x2","-4.5"),t(o,"y1","378"),t(o,"y2","154"),t(o,"gradientUnits","userSpaceOnUse"),t(e,"xmlns","http://www.w3.org/2000/svg"),t(e,"fill","none"),t(e,"viewBox","0 0 385 450"),t(e,"width","385"),t(e,"height","450"),t(e,"class",N[0])},m(L,c){K(L,e,c),s(e,n),s(e,i),s(i,d),s(e,u),s(u,_),s(u,M),s(u,m),s(u,v),s(u,y),s(e,x),s(e,p),s(p,w),s(e,f),s(f,E),s(f,F),s(f,g),s(f,V),s(f,Z),s(e,U),s(e,A),s(e,D),s(D,o),s(o,H),s(o,k)},p(L,[c]){c&1&&t(e,"class",L[0])},i:J,o:J,d(L){L&&a(e)}}}function Q(N,e,n){let{classNames:i=""}=e;return N.$$set=d=>{"classNames"in d&&n(0,i=d.classNames)},[i]}class T extends j{constructor(e){super(),z(this,e,Q,P,I,{classNames:0})}}export{T as default};
diff --git a/spaces/huggingface/Model_Cards_Writing_Tool/test_markdown_out.py b/spaces/huggingface/Model_Cards_Writing_Tool/test_markdown_out.py
deleted file mode 100644
index bf8ea000b3c912781f3f5620c25ec4f2233c4a6e..0000000000000000000000000000000000000000
--- a/spaces/huggingface/Model_Cards_Writing_Tool/test_markdown_out.py
+++ /dev/null
@@ -1,30 +0,0 @@
-import streamlit as st
-from persist import persist, load_widget_state
-from jinja2 import Environment, FileSystemLoader
-
-def parse_into_jinja_markdown():
- env = Environment(loader=FileSystemLoader('.'), autoescape=True)
- temp = env.get_template(st.session_state.markdown_upload)
-
- return (temp.render(model_id = st.session_state["model_name"],
- the_model_description = st.session_state["model_description"],developers=st.session_state["Model_developers"],shared_by = st.session_state["shared_by"],model_license = st.session_state['license'],
- direct_use = st.session_state["Direct_Use"], downstream_use = st.session_state["Downstream_Use"],out_of_scope_use = st.session_state["Out-of-Scope_Use"],
- bias_risks_limitations = st.session_state["Model_Limits_n_Risks"], bias_recommendations = st.session_state['Recommendations'],
- model_examination = st.session_state['Model_examin'],
- hardware= st.session_state['Model_hardware'], hours_used = st.session_state['hours_used'], cloud_provider = st.session_state['Model_cloud_provider'], cloud_region = st.session_state['Model_cloud_region'], co2_emitted = st.session_state['Model_c02_emitted'],
-                        citation_bibtex=st.session_state['bibtex_citation'], citation_apa=st.session_state['APA_citation'],
- training_data = st.session_state['training_data'], preprocessing =st.session_state['preprocessing'], speeds_sizes_times = st.session_state['Speeds_Sizes_Times'],
- model_specs = st.session_state['Model_specs'], compute_infrastructure = st.session_state['compute_infrastructure'],software = st.session_state['technical_specs_software'],
- glossary = st.session_state['Glossary'],
- more_information = st.session_state['More_info'],
- model_card_authors = st.session_state['the_authors'],
- model_card_contact = st.session_state['Model_card_contact'],
- get_started_code =st.session_state["Model_how_to"]
- ))
-
-def main():
- st.write( parse_into_jinja_markdown())
-
-if __name__ == '__main__':
- load_widget_state()
- main()
\ No newline at end of file
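
`parse_into_jinja_markdown` above simply pipes Streamlit session-state values into a Jinja template loaded from disk. A tiny standalone sketch of that render step, with an inline template and made-up field values standing in for `st.session_state`:

```python
from jinja2 import Template

# Inline stand-in for the markdown template the app loads from disk.
template = Template(
    "# {{ model_id }}\n\n"
    "{{ the_model_description }}\n\n"
    "**Developers:** {{ developers }}\n"
)

# In the app these values come from st.session_state[...].
print(template.render(
    model_id='my-model',
    the_model_description='A short description of the model.',
    developers='Jane Doe',
))
```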
diff --git a/spaces/hysts/ViTPose_video/mmdet_configs/README.md b/spaces/hysts/ViTPose_video/mmdet_configs/README.md
deleted file mode 100644
index b180151a3f1904a7636d0719aad751754dfe4a3b..0000000000000000000000000000000000000000
--- a/spaces/hysts/ViTPose_video/mmdet_configs/README.md
+++ /dev/null
@@ -1,2 +0,0 @@
-`configs.tar` is a tarball of https://github.com/open-mmlab/mmdetection/tree/v2.24.1/configs.
-The license file of mmdetection is also included in this directory.
diff --git a/spaces/igrab666/polish_text_summarization/README.md b/spaces/igrab666/polish_text_summarization/README.md
deleted file mode 100644
index 1e8d666a8ae59ee79e1aaf648974d3e624d474dc..0000000000000000000000000000000000000000
--- a/spaces/igrab666/polish_text_summarization/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Polish_text_summarization
-emoji: 💩
-colorFrom: yellow
-colorTo: indigo
-sdk: gradio
-sdk_version: 2.9.4
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/imseldrith/BotX/Uploader/functions/__init__.py b/spaces/imseldrith/BotX/Uploader/functions/__init__.py
deleted file mode 100644
index 0a6a3a2cbab092d60ceb127b40a8c653eaad94bf..0000000000000000000000000000000000000000
--- a/spaces/imseldrith/BotX/Uploader/functions/__init__.py
+++ /dev/null
@@ -1,27 +0,0 @@
-# MIT License
-
-# Copyright (c) 2022 Hash Minner
-
-# Permission is hereby granted, free of charge, to any person obtaining a copy
-# of this software and associated documentation files (the "Software"), to deal
-# in the Software without restriction, including without limitation the rights
-# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
-# copies of the Software, and to permit persons to whom the Software is
-# furnished to do so, subject to the following conditions:
-
-# The above copyright notice and this permission notice shall be included in all
-# copies or substantial portions of the Software.
-
-# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
-# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
-# SOFTWARE
-
-from .display_progress import *
-from .help_Nekmo_ffmpeg import *
-from .help_uploadbot import *
-from .help_ytdl import *
-from .ran_text import *
diff --git a/spaces/inamXcontru/PoeticTTS/A Flying Jatt 720p Torrent Download ((EXCLUSIVE)).md b/spaces/inamXcontru/PoeticTTS/A Flying Jatt 720p Torrent Download ((EXCLUSIVE)).md
deleted file mode 100644
index 5a0a21e044e1516898fdb60a4936f8dbbc1df351..0000000000000000000000000000000000000000
--- a/spaces/inamXcontru/PoeticTTS/A Flying Jatt 720p Torrent Download ((EXCLUSIVE)).md
+++ /dev/null
@@ -1,24 +0,0 @@
-A Flying Jatt 720p Torrent Download Download Zip »»» https://gohhs.com/2uz3Vj
-
-2. Doot Chupke Chupke. Jatt. 2020. Hindi. 720p. DvDRip. 1.0 GB [www.youtube.com]. by: Rizzu7788. Publication date: 2020-07-14.
-
-3. Rumo Ki Bole Ki Chaahat. Jatt. 2020. Hindi. 720p. DvDRip. 2.1 GB [www.youtube.com]. by: Rizzu7788. Publication date: 2020-07-14.
-
-4. Tanma Tum Chaahat Ho. Jatt. 2020. Hindi. 720p. DvDRip. 5.9 GB [www.youtube.com]. by: Rizzu7788. Publication date: 2020-07-17.
-
-5. Jiyo Meri Ishq Ki Baat. Jatt. 2020. Hindi. 720p. DvDRip. 7.4 GB [www.youtube.com]. by: Rizzu7788. Publication date: 2020-07-14.
-
-6. Love Lapsi. Jatt. 2016. Hindi. 720p. DvDRip. 5.2 GB [www.youtube.com]. by: Rizzu7788. Publication date: 2020-07-17.
-
-7. Chor. Jatt. 2019. Hindi. 720p. DvDRip. 2.9 GB [www.youtube.com]. by: Rizzu7788. Publication date: 2020-07-17.
-
-8. Tu Jiya Tu. Jatt. 2020. Hindi. 720p. DvDRip. 8.3 GB [www.youtube.com]. by: Rizzu7788. Publication date: 2020-07-17.
-
-9. Milega Milega. Jatt. 2019. Hindi. 720p. DvDRip. 6.0 GB [www.youtube.com]. by: Rizzu7788. Publication date: 2020-07-14.
-
-10. Om Shanti Shanti. Jatt. 2020. Hindi. 720p. DvDRip. 6.1 GB [www.youtube.com]. by: Rizzu7788. Publication date: 2020-07-14.
-
-11. Preeto Neha Teri Manzil. Jatt. 2020. Hindi. 720p. Dv 4fefd39f24
-
-
-
diff --git a/spaces/inamXcontru/PoeticTTS/Awave Studio 106 Keygen 50 __HOT__.md b/spaces/inamXcontru/PoeticTTS/Awave Studio 106 Keygen 50 __HOT__.md
deleted file mode 100644
index a12c217eeff399269b26ea88542de99173f41e1c..0000000000000000000000000000000000000000
--- a/spaces/inamXcontru/PoeticTTS/Awave Studio 106 Keygen 50 __HOT__.md
+++ /dev/null
@@ -1,5 +0,0 @@
-
-Abandoned 1 Abandoned 2 Abandoned 3 Abandrone 1 Abandrone 2 Across the Sky Adriatic Adrift in Time Aereo Fleet Evil Aerodynamic Air Powered Subway Alias Drone Bloom Alley Stalker Aloft High Aloft Low Alpha Amalgamate Amazonia Sunset Amoeba An Extrusion From the Inside Ancient Extinction Angelic Morph Angels in the Desert Angels Whisper Anguish Texture Anguished Cries Anguished Screamers Anticipation Texture Antimatter Apocolyptic Bass Apocolyptic Drone Apparitions on the Edge Aramaic Bend Aramaic Mix Arcane Armenian Sun Arrival of the Dead Arrival of the Spirits Arwen As I Live and Breathe At the Edge of the World Atlantis - Lights in the Water Atlantis - Rapturous Beauty Atlantis - Underwater Rain Atonal Crystals in Space Azure 1 Azure 2 B-52 Bombers Babbling Brook Drone Bad Dreams Bad Nightmares Barely Holding It Together Barren World 1 Barren World 2 Barrier Reef Battle of Mordor Bells from Beyond Bending Choral Wash Bending Light Big Pink Purr Big Trouble Blister Blood Vessels Blue Encription Blur Blurred Perception 1 Blurred Perception 2 Bombarded Bondo Bowels of Eternal Damnation Bowing Titanium Brimstone Brush with Death Bubbling Over Bumblebee inside the Flute Burbbling Wires Burning Air Butterflies made of Glass Buzz Bombs Call of the North Caminaro Cascading Cascoder Castilia Cave Troll Caverns Around Steve's House Ceremony Chalkboard Swirl Chaotic Crystal Artifacts Chemical Equilibrium 1 Chemical Equilibrium 2 Chemical Equilibrium 3 Chemical Equilibrium 4 Child of the Stars Chill Factor Chloropass Choirs from an Old Gramophone Churchist Circadian Circular Speeding City in the Clouds Clandenstein Clandestine Meetings Clangorous City Cloaking Clock Shop Reverse Cluster Feedback 1 Dry Cluster Feedback 1 Wet Cluster Feedback 2 Dry Cluster Feedback 2 Wet Cobalt Distortion Colonies Columns Comet Star Coming Unglued Constant Change Contortion 1 Contortion 2 Contortion 3 Contortion 4 Contortion 5 Contrabass Blown Tubes 1 Contrabass Blown Tubes 2 Cooderfried Copper Art Class Cosmic Tingle Covert Crawling Underworld Creatures Creepy Crawlies Creepy Cricket Choir Crimson Elegy Critical Orbit Crushing Agony Crying Doppler Cryptic Crystal Cloud Crystalline Towers Cultic Cyber Bugs Cyber Flies Cyberdrone Damnation Dark Brood Dark Forest Dark Oil Drone Dark Scraper Datatube Dawning Dawnlands Daybreak Deceived Deceiver Deduks in Mourning Deep Didge Deep Opera Deep Rubbing Tubes Deep Sea Exploration Deep Wash 1 Deep Wash 2 Dervishes Desert Mirage Deserted Shipyard Desolation 1 Desolation 2 Devionics Devionize Devious Diffraction Disfigured 1 Disfigured 2 Disintegrating Crystals Disoriented Distant Cry Distant Dreams Distant Vinyl Sparkles Distressing Wires Disturbance in the Core Disturbance over the Piano Disturbing Bells Divine Dawn Reveal Divine Eternity Divine Rest 1 Divine Rest 2 Divine Restlessness Dreaming in the Jetstream Dreaming of Time Travel Dreamy Acoustic Dresden 1 Dresden 2 Dresden 3 Dubai Dusk 1 Dusk 2 Dusk 3 Dusk 4 Dusk 5 Dynamic Swirling Steel ^ Dynamic TriBowed Metals Dynamic Violence Dystopian Life Dystopian Texture E-Bowtron Earth Drone Bass Echoes of Devastation Eclipse Effervescent Egyptian Bowls Electric Blade Electrocution Wires Electromechmotion Elevator in Panic Mode Elves Embryonic High Embryonic Emergence Emerging Vision Emperor Constantine Empty Nest Drone Encription Endless Loop Endless Reflections 1 Endless Reflections 2 Engines of War Enigmatic Bowls Enigmatic Entering Unknown Territory Epileptic Seizure Equilibrium Escaping Estuarial Drone Estuary 1 Estuary 2 
Estuary 3 Estuary 4 Estuary 5 Ethnic BreathPipe Evaporate Evaporating Crystals Drifting Evaporating Dreams Evil Macumba Evil Murmuring Flies Exhaltation 1 Exhaltation 2 Existence Exothermic Reaction Expectation Fairy Dust Fast Forward Motion Fathoms February Glare Feedies 1 Feedies 2 Feedies 3 Feedies 4 Feeling of Emptiness Female Morphing Feng Shui Bells Filled with the Holy Spirit Fjords Flaming Cluster Fleas 1 Fleas 2 Fleas 3 Florescent Chimes Flutter Foggy Bottom Forlorn 1 Forlorn 2 Foul Brood Franken Frog Bog Frost Fuse Swells 1 Fuse Swells 2 Fuse Swells 3 Fuse Swells 4 Fuse Future Syntheswell Fuzz Clarinet Galilean Moons Gamillusion 1 Gamillusion 2 Gamillusion 3 Gene Splicing Genetic Animals Genetic Experiments Geodesic Getting Scanned by Galaxy Cops Ghandi Drone Ghosts in the Phonograph Ghosts in the Secret City 1 Ghosts in the Secret City 2 Glare Glass Grammaphone Glasswaves 1 Glasswaves 2 Glasswaves 3 Glasswaves 4 Glasswaves 5 Glasswaves 6 Glasswells 1 Glasswells 2 Glasswells 3 Glasswells 4 Gleaming HI Gleaming Mix Glinting Glisten to Me Glistening Glow 1 Glow 2 Glow Drones Glow Worms Gnawing Gorge Granulated Fanbience Great Wall of China Gulls Gurgle Gutteral Morphing Gyroscope Gyroscopic Scraper Half a Kingdom Halo 1 Halo 2 Halo 3 Halo 4 Halo 5 Hang Rubbing Beads Hang Rubbing Circles Harmonic Hallucination Haunted Records 1 Haunted Records 2 Haunted Records 3 Haunted Records 4 Haunted Records 5 Haunted Souls in the Storm Hazy Brood Heartland 1 Heartland 2 Heartland 3 Heartland 4 Heartland 5 Help Me Hero Drone Hibernation Hideous Himalaya Holy Acoustic Horrorific Images Howler Huge Locusts Hunchback Hyperpollination Ice Castle Rolling Motion 2 Iceland 1 Iceland 2 Iceland 3 Icicle Drone Icicles 1 Icicles 2 Illuminati Immersion Immortal Fripp In a Dreamlike State In the Bowels In The Catacombs Indian Circuits Indicator Inexplicable Encounter Infrared Hallucination Inner Addictions Insects on Ice Insects Inside the Light Interference Interstellar Subway Rails Invader Glitch Invert Gliss Ireland Iridescence Iron Eyes Irritant 1 Irritant 2 Irritant 3 Irritant 4 Irritant 5 Irritant 6 Irritant 7 Ishmael Drones Mix 1 Ishmael Drones Mix 2 Ishmael Drones Mix 3 Ishmael Drones Mix 4 Ishmael Mix 1 Ishmael Mix 2 Ishmael Mix 3 Ishmael Texture 1 Ishmael Texture 2 Ishmael Texture 3 Ishmael Texture 4 Ishmael Texture 5 Island of Chimes Jake Drone Jellyfish Jet Streams Jittery Teletexture Katmandu Klaatu KMINI - Blood Boiling KMINI - Mystery Rings Hi KMINI - Mystery Rings Lo KMINI - Sinister Forces Knocking Tension Kyoto 1 Kyoto 2 Kyoto 3 Kyoto 4 Kyoto 5 Kyoto 6 Lamentation 1 Lamentation 2 Lamentation 3 Lamentation 4 Lamentation 5 Lamentation 6 Lamentation 7 Lamentation 8 Lamenting Whales Texture Land of Sonic Bliss Lapland Laredrone Latitude North 1 Latitude North 2 Latitude North 3 Latitude North 4 Latitude North 5 Latitude South 1 Latitude South 2 Latitude South 3 Latitude South 4 Laundramat Leagues Life on the Barrier Reef Liquid Crown Liquid Nitrogen 1 Liquid Nitrogen 2 Liquid Nitrogen 3 Liquid Nitrogen 4 Liquid Nitrogen 5 Liquid Nitrogen 6 Reverse Liquid Nitrogen 6 Lonely Chambers Lonely Glass Riser Lonely Riser Long Voyage Longing 1 Longing 2 Longing 3 Longing 4 Longings in Motion Lord Balrog Drone Lost Ancient City Luminize Lunar Eclipse Lunar Prison Camp Mach XII Roulette Magic Waters Magnetic Crystals Making Waves Malevolent Shadows Malice within the Fog Mantura Marsh Mire Drone Massive Rotation Meditation Disturbance Menace Mercenary Metal Bender Rumble Metal Flow Metallic Atonal 
Harmonics Metallic Cluster Beams 1 Metallic Cluster Beams 2 Metallic Cluster Clouds 1 Metallic Cluster Clouds 2 Metallic Glitch Probes Metallic Orbit Metallic Shards of Light Metro Micro Meteorites Miniature Biosphere Minor Realization Mission to Mars Moments of Wonder Monastery Monkey Modulator Monkoder Monster Rubber Band Moonbeams 1 Moonbeams 2 Morphing Matrix Morphing Voices Morphionic Mother Earth Mother Magnesium Mournful Acoustic Moving Some Air Murmuring Chatter My Kite Fell into the Ocean Mystic Bowls Mystic Metal Little Fairy Nebuli Nepal Never Cry New World Nightmare Flute Cue Nirvana NLEAD - Disturbing Thoughts NLEAD - Horrifying Realization NLEAD - Insidious No Admittance Noblemen Nu-Wager Nursery Rhymes NWAVE - Strange Happenings Nyquist Criterion Obelisk 1 Obelisk 2 Oblivion Oceanography 1 Oceanography 2 Oceanography 3 Oceanography 4 Oceanography 5 Oceanography 6 Oils of Suspense Omega Ominous Fear Ominous Sci-Fi Oscillators Ominousity Omniscience 1 Omniscience 2 Ooziedroids Brite Ooziedroids Dark Oozies 1 Oozies 2 Oozies 3 Oozies 4 Oozing Anxieties Opal Operawaves 1 Operawaves 2 Oracle Sheen Oracle Orbitalizer lead Orch From Hell Tuning Up Outer Limits Outglassed Overcoder Painful Realization Paradise Paradisum Paranoia Paranormal Apparition Passing Lights PAX - A Deep Sense of Peace PAX - A Greater Sense of Peace PAX - A Sense of Perfect Peace PAX - Guitar Drone PAX - Guitar Texture Pensive 1 Pensive 2 Perelandra Perilizer 1 Perilizer 2 Periscopic Perpetuity Petroleum Piano Morphing Pilgrimage 1 Pilgrimage 2 Pilgrimage 3 Pilgrimage 4 Pilgrimage 5 Pilgrimage 6 Pilgrimage 7 Pixies Planetary Pools Become Doors at Night Post Atomic Radiation Post Atomic Survivors Post Nuke Radiations 1 Post Nuke Radiations 2 Premonition Primal Howlings Primal PROX - Attaining Conciousness PROX - Disorientation Psychedelia Psychomonkeys Pulsation Punjabi Drone Purgatory Pyramid Quest Questia Quik Riser Radiate Radio Drones Radio Universe Railway to Hell Raining Castilia Lights Rattling Rumbler Reanimations Rebop Rectifier Red Ice Caverns Red Sunrise Reflecting on the Past Reflection 1 Reflection 2 Reflection 3 Reflection 4 Regal 1 Regal 2 Rejoicing 1 Rejoicing 2 Rejoicing 3 Rejoicing Bubble REM Sleep Replicator Resogong Resonation Boomerang Retro Cinema - The Vampire Revenge of the Electric Toothbrush Reversepheres Ringwraiths Rinsing Rise of the Shadow People Rising Strands Rising Sun in Death Valley Rolling Metal Balls Romeo and Juliet - Love is Life Rotating Aviary Rubbing Silver Mirrors Rumble Winds Rumblestiltskin Rumbling Threat Rushes Russian Avantgarde Flautist Rusty Swings Sagan's Journey Sanctus Sand Granules Satellite Circus Satellite Orbit Satellite Scary Trinidad School of Whales - Babies 1 School of Whales - Babies 2 School of Whales - Females 1 School of Whales - Females 2 School of Whales - Males 1 School of Whales - Males 2 School of Whales - Males 3 School of Whales - Old Ones 1 School of Whales - Old Ones 2 School of Whales - Old Ones 3 School of Whales - Old Ones 4 Scrape Chord Scraping the Waterphone Scraping the Wheels Scraping Vertigo Rumbler Sea Life Second Time on the Adriatic Secret Garden Sedation Sentry Serling Serlingini SH201 - Tension Builder Shadow of the Waterphone Shadowlands 1 Shadowlands 2 Shanghai Reverse Shanghai Shangri-La Shatter Shawshank HI Shawshank Lo Shawshank Shiny Beach Shivering Timbers Shroud Singing Stones Sinister Mystery Sinister Sinus Harmonius Sirens of Atlantis Sleep Cycle 1 Sleep Cycle 2 Sleep Cycle 3 Sleep Cycle 4 Sleet Slow Metal Breeze 
Slow Realization Drone Slow Worm Birth Slumbering Shaku Snorkeling Solar Flare Solitary 1 Solitary 2 Solitary Pulsation Something is About to Happen Something is Getting Closer Something is Getting Too Close Sonar Chord Sonaris Sopranos in the Mist Interval Soul Lost in the Jungle Souls in the Storm Southern Lights Soviet Flute Flutterings Soviet Flute Fluttersolo Space Mammal SpaceTimeEnergy Sparkling Pools Spectrascope Spheres Spin Cycle Spinning Omega Spirit Rumbler Spooky Wheel Choir Min->Maj Sputnik Sunrise Squeaky Wheels of Time Squeekies Stalagflites Stalagmites Stalagrites Stalagsites Stalagtites Stalker Starchimes Starspin Stasis 1 Stasis 2 Stasis 3 Stasis 4 Stay of Execution Stealth Steel Morphing Stretching The Clav Stretching the Metal Viper SUB37 - Clusterized Subsonic Attack Subterranean Towers Sudanese Evolution Sweep Sun Spot Surreal Unreal Suspended Build Suspended Suspense is Killing Me Suspenseful Minor Chord Suspension of Belief Sweet Hereafter Swerve Swimming Upstream to Spawn Swings Of The Ghost Town Swirling Crickets Symmetry Tactical Alert Taj Mahal 1 Taj Mahal 2 Talk Boxing in my Sleep Talking Pipes Taxi Driver 1 Taxi Driver 2 Temple 1 Temple 2 Temptation Tensing Pad Tension Strands The Aftermath The Eternal Why The Mirror Room The Omen's Playground The Presence The Shadow of Death Throat Culture Timbre Shifted Toy Bells Tinnibulum Tokiospace Tomatospace Tonal Carved Drum Tremolo Tormented Railway Torny Totem Toxic Aura Translucent 1 Translucent 2 Translucent 3 Translucent 4 Transmission Trouble the Waters Tubular Choir Tumble Weed Tunervox Tuning Ghost Orchestra Twinkle Clock Twinkle Toes Underground Anthill Underwater Caverns Underwater Morse Code Underworld Carnival Underworld Lamentations Unsettled Urban Angel Vapor Chimes Veil 1 Veil 2 Veil 3 Veil 4 Veil Vapor Velvet Eno Vertigo Rumbler Vibrafripp Vinyl Acetate Ghosts Vinyl Heroes Visionary Vocal Unreality Voice of the Moon Void Voltage Cluster Voyage 1 Voyage 2 Vulture VYGR - Barberpoles VYGR - Clusterphones VYGR - Clustersines VYGR - Mental Breakdown Wailing Wall Wanderlust Warped Interiors Washing Wasteland Waterfall of Broken Metals Waves of the Tigress Weeping 1 Weeping 2 Weeping 3 Weeping 4 Weeping 5 What We Need Whirlius Whirly World Whispers of Doom Whispertones Window Pane Drone Winds of Apprehension Yearning 1 Yearning 2 Yearning 3 Yearning 4 Yearning 5 Yearning Texture Yugo Zed Leper
-Awave Studio 106 Keygen 50 DOWNLOAD > https://gohhs.com/2uz4GL
aaccfb2cb3
-
-
\ No newline at end of file
diff --git a/spaces/inamXcontru/PoeticTTS/Csc Tedds 14 Keygen 31 !EXCLUSIVE!.md b/spaces/inamXcontru/PoeticTTS/Csc Tedds 14 Keygen 31 !EXCLUSIVE!.md
deleted file mode 100644
index d58ed8fc6fe1139000e8ac494a633178928706db..0000000000000000000000000000000000000000
--- a/spaces/inamXcontru/PoeticTTS/Csc Tedds 14 Keygen 31 !EXCLUSIVE!.md
+++ /dev/null
@@ -1,60 +0,0 @@
-Csc Tedds 14 Keygen 31 Download File ✸ https://gohhs.com/2uz4NF
-
-www.zipfilecracker.com
-
-To Download you must have a good internet connection. You can learn this how to download crack, serial, registration, keygen from CSC Tedds 14 Registration Code (cracked) Free Download Full Version With Crack 2010 free download
-
-Program Information:
-
-CSC Tedds 14 Serial Number : it is the latest serial number that was published on the publisher website.
-
-License Agreement:
-
-cracked versions of this software are provided under a general public license with no conditions.However, crack version of this software comes with an End-User License Agreement (EULA) that restricts user’s use of the crack product. However, crack versions are offered free of charge, and are completely free of charge. Therefore, the EULA does not apply.
-
-Legal Notice:
-
-All contents included in cracked versions are the copyrighted property of their respective owners.
-
-All contents included in the crack version are strictly for educational purposes and may be used for teaching, testing and training purposes only.
-
-All contents are for local use only and cannot be shared with any other party.
-
-All contents are provided without any warranty.
-
-File Sharing, etc.:
-
-cracked software cannot be shared with any other party.
-
-Uncompressed files can be shared with others.
-
-All crack versions are fully functional and do not contain any form of crack.
-
-Links to crack files:
-
-If a crack file is provided, a link to the crack file must be included in the description.
-
-Links to cracked software are allowed only on noncommercial forums, newsgroups, websites, etc.
-
-Any attempt to crack, hack or modify the software is strictly prohibited, as well as any attempt to copy, upload, or distribute any crack or crackless version of the crack.
-
-Software maintenance:
-
-We do not offer the crack version for maintenance or support.
-
-We are not responsible for any damage caused by the use or misuse of the crack.
-
-crack versions are offered free of charge and are fully functional and do not contain any form of crack.
-
-All crack versions are for local use only and cannot be shared with any other party.
-
-crack versions are provided without any warranty.
-
-Software instructions:
-
-All instructions and manuals found in this crack version are completely free of charge.
-
-All crack instructions are free of charge and are provided without any warranty. 4fefd39f24
-
-
-
diff --git a/spaces/inamXcontru/PoeticTTS/CyberLink YouCam Deluxe 7.0.4129.0 Pre-Cracked Full Version.md b/spaces/inamXcontru/PoeticTTS/CyberLink YouCam Deluxe 7.0.4129.0 Pre-Cracked Full Version.md
deleted file mode 100644
index bafa301954b77f7f817c924d5476c27b62eb338c..0000000000000000000000000000000000000000
--- a/spaces/inamXcontru/PoeticTTS/CyberLink YouCam Deluxe 7.0.4129.0 Pre-Cracked Full Version.md
+++ /dev/null
@@ -1,7 +0,0 @@
-
-cyberlink youcam deluxe 7 keygen is also recognizes when youre no longer in front of your computer, and can automatically lock your screen or hibernate your pc. youre able to create videos which can integrate with powerpoint projects for more interesting and dynamic presentations. and both the standard and deluxe editions now fully support hd video both in their effects, and video recording, with a new ability to save hd video in h.264 format.
-cyberlink youcam 9.1.1927.0 crack can be used to instantly turn your webcam into a fun party tool. add a webcam frame, or apply frames or filters from a library to add fun effects to your live video chat sessions. you can also apply instant effects to your webcam video while recording. instant effects allow you to add frames, filters, distortions, and emotional effects to the video image from a webcam. this allows you to supplement the video with fun and creative video effects. youcam deluxe 7.0.4129.0 crack has got 50+ effects for you to choose from. youcam allows you to record a video of what you do with your webcam. using the youcam deluxe 7 video recording tool, you can capture a video of what you do with your webcam. upload your videos directly to youtube. the dual-mode user interface simplifies the use of cyberlink youcam 9.0 crack. in the instant messaging mode, you can participate in a multi-user video session with friends using webcams, applying instant effects in real-time.0 crack includes 50+ effects for you to choose from.
-CyberLink YouCam Deluxe 7.0.4129.0 Pre-Cracked Full Version Download Zip ……… https://gohhs.com/2uz3E8
-cyberlink youcam deluxe 7.0.1511.0 is a superior software which can be used as safety software since its got a face detection characteristic which is utilized in face login module. you too can obtain fscamview.
899543212b
-
-
\ No newline at end of file
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Ansys 12.1 64 Bit License Generator.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Ansys 12.1 64 Bit License Generator.md
deleted file mode 100644
index a3c795acaf809e6660e158f351aadc46d7bcb337..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Ansys 12.1 64 Bit License Generator.md
+++ /dev/null
@@ -1,6 +0,0 @@
-ansys 12.1 64 bit license generator DOWNLOAD ✒ https://urlin.us/2uEySa
-
-Please like and subscribe to our channelMy email id - autofuse360@gmail.com. 1fdad05405
-
-
-
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Ansys Products V1507 3264 BitMAGNi.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Ansys Products V1507 3264 BitMAGNi.md
deleted file mode 100644
index 30836f49cff602211ec7742b499c79c2ef4c2f8c..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Ansys Products V1507 3264 BitMAGNi.md
+++ /dev/null
@@ -1,34 +0,0 @@
-
-Ansys Products V1507 3264 BitMAGNi: A Powerful Simulation and Design Software
-Ansys Products V1507 3264 BitMAGNi is a software package that offers engineering simulation and 3D design solutions for various product modeling applications. It includes tools for structural analysis, fluid dynamics, electromagnetics, optimization, multiphysics, and more. Ansys Products V1507 3264 BitMAGNi can help engineers and designers to create innovative products that meet performance, reliability, and safety requirements[^1^] [^2^].
-Some of the features of Ansys Products V1507 3264 BitMAGNi are:
-Ansys Products V1507 3264 BitMAGNi DOWNLOAD 🆗 https://urlin.us/2uEw4j
-
-It supports both 32-bit and 64-bit operating systems, allowing users to handle large and complex models with ease.
-It has a user-friendly interface that integrates various Ansys products and modules, such as Ansys Mechanical, Ansys Fluent, Ansys Maxwell, Ansys DesignXplorer, and more.
-It enables users to perform parametric studies, design exploration, optimization, and verification using advanced algorithms and methods.
-It allows users to collaborate and share data across different platforms and disciplines using Ansys Workbench and Ansys EKM.
-It provides comprehensive documentation and tutorials for users to learn and master the software.
-
-Ansys Products V1507 3264 BitMAGNi is available for download from the official website of Ansys or from authorized distributors. Users can also request a free trial version or a demonstration of the software. Ansys Products V1507 3264 BitMAGNi is a powerful simulation and design software that can help users to achieve their product development goals[^1^] [^2^] [^3^].
-
-Ansys Products V1507 3264 BitMAGNi has received positive feedback from users who have used it for various engineering and design projects. Some of the benefits that users have reported are:
-
-It improves the accuracy and efficiency of simulations and designs by using high-performance computing and parallel processing capabilities.
-It reduces the cost and time of product development by enabling users to test and validate multiple scenarios and alternatives in a virtual environment.
-It enhances the creativity and innovation of users by providing them with a wide range of tools and options to explore different design possibilities and solutions.
-It increases the competitiveness and profitability of users by helping them to deliver high-quality products that meet or exceed customer expectations and industry standards.
-
-Ansys Products V1507 3264 BitMAGNi is a software package that has been proven to be effective and reliable for various engineering and design applications. Users can find more information and reviews about the software on the official website of Ansys or on online forums and platforms[^1^].
-
-Ansys Products V1507 3264 BitMAGNi has several features that make it stand out from other simulation and design software packages. Some of these features are:
-
-It supports a variety of engineering disciplines and applications, such as aerospace, automotive, biomedical, civil, electrical, mechanical, and more.
-It integrates with other software platforms and tools, such as CAD, CAM, CAE, MATLAB, Python, Excel, and more.
-It offers a flexible and customizable workflow that allows users to adapt the software to their specific needs and preferences.
-It provides a rich and interactive visualization environment that enables users to view and manipulate their models and results in 3D.
-It incorporates the latest technologies and innovations in simulation and design, such as artificial intelligence, machine learning, cloud computing, and more.
-
-Ansys Products V1507 3264 BitMAGNi is a software package that offers a comprehensive and versatile solution for engineering simulation and design. Users can benefit from the software's features and capabilities to create products that are efficient, reliable, and sustainable[^1^] [^2^] [^3^].
-
-
\ No newline at end of file
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Contraband Police Offline Activation Keygenl [HOT].md b/spaces/inplisQlawa/anything-midjourney-v4-1/Contraband Police Offline Activation Keygenl [HOT].md
deleted file mode 100644
index ae13aae186d4e733e0b89dba083d6780174ddf86..0000000000000000000000000000000000000000
--- a/spaces/inplisQlawa/anything-midjourney-v4-1/Contraband Police Offline Activation Keygenl [HOT].md
+++ /dev/null
@@ -1,10 +0,0 @@
-
-To work as a policeman, you must be a British citizen. You can apply for a job as a police officer in England, Scotland, Wales, or Northern Ireland. The job must be in the police service, which includes the police, the council police, and the National Health Service police.
-Contraband Police Offline Activation Keygenl Download ››››› https://urlin.us/2uEyMc
-Cybercriminals are increasingly targeting individuals and companies across a variety of industries. Police agree that cybersecurity is one of the fastest-growing areas of crime. The main objective of the new law is to provide U.S. businesses with the tools they need to better defend their data.
-The player's task is to question drivers and check the list of goods transported across the border. In other words, it is important, among other things, to inspect each package and verify that everything looks the way it should. Play as a real border guard in the full version of Contraband Police Reloaded and prevent illegal smuggling. Write up a fine appropriate for the violations you find; you are also accountable if you let defective vehicles cross the border.
-Any juvenile (a minor younger than 18) who knowingly and without legal justification possesses a controlled or illegal substance can be charged with juvenile drug possession. The offense level depends on the state law and the amount and type of drugs in the minor's possession. These charges might arise after, for example, a police officer pulls over a juvenile's vehicle and notices marijuana in the car, discovers drugs after searching the vehicle, or discovers drugs while interrogating the driver.
-
-Similarly, search for smuggled alcohol, drugs, guns and other illegal items hidden inside vehicle components or cargo. Then decide whether to confiscate the contraband you find or take a bribe and let the smuggler through. Earn police reputation for good work, or lose it for mistakes. Spend your points on post upgrades such as new inspection equipment, barriers, a K-9 dog and much more. Each new day can bring unexpected regulations for vehicles, such as a maximum level of exhaust emissions, a total weight limit, or newly banned cargo. Some of the goods being carried, such as weapons or drugs, may be hidden under the hood of the car or in other unusual places; the player's job is, among other things, to check whether contraband has been hidden, for example, in a battery. Regular duties also include checking the technical condition of the vehicles.
-
-
\ No newline at end of file
diff --git a/spaces/inreVtussa/clothingai/Examples/Bud Redhead The Time Chase 1.3 Keygen [UPD].md b/spaces/inreVtussa/clothingai/Examples/Bud Redhead The Time Chase 1.3 Keygen [UPD].md
deleted file mode 100644
index 702525453f9a8d84e9b58fd0fcaa873767e8deb7..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Bud Redhead The Time Chase 1.3 Keygen [UPD].md
+++ /dev/null
@@ -1,6 +0,0 @@
-bud redhead the time chase 1.3 keygen DOWNLOAD 🌟 https://tiurll.com/2uCjLf
-
-
-
-
diff --git a/spaces/inreVtussa/clothingai/Examples/Dead Island Save Editor Premium !!TOP!!.md b/spaces/inreVtussa/clothingai/Examples/Dead Island Save Editor Premium !!TOP!!.md
deleted file mode 100644
index 3a4f4e72dc5dd992346c5e706f91adbd0fde1ebe..0000000000000000000000000000000000000000
--- a/spaces/inreVtussa/clothingai/Examples/Dead Island Save Editor Premium !!TOP!!.md
+++ /dev/null
@@ -1,58 +0,0 @@
-
-How to Use Dead Island Save Editor Premium to Enhance Your Gaming Experience
-
-Dead Island is a popular zombie survival game that lets you explore a tropical island full of undead horrors. But what if you want to customize your character, weapons, skills, and inventory to suit your playstyle and preferences? That's where Dead Island Save Editor Premium comes in.
-
-Dead Island Save Editor Premium (DISEP) is a powerful tool that allows you to modify your save files and get access to various features and improvements. You can use it to:
-Dead Island Save Editor Premium Download File ————— https://tiurll.com/2uCiLW
-
-
-Change your character's name, level, skills, stats, and appearance.
-Edit your inventory items, mod blueprints, collectibles, and money.
-Create custom weapons with different elements, damage, durability, and effects.
-Unlock all skill trees and perks for any character.
-Increase your XP multipliers, looted money, item value, and ammo capacity.
-Adjust the game difficulty, enemy spawn rate, and loot quality.
-And much more!
-
-
-DISEP is compatible with both the original Dead Island and the Definitive Edition, as well as the Riptide expansion. It also supports multiple languages and platforms. You can download it for free from Steffen L's website [^1^] or from Nexus Mods [^2^]. You can also find modded save files created by other users on these sites.
-
-To use DISEP, you need to locate your save files on your computer. They are usually found in C:\\Program Files (x86)\\Steam\\userdata\\\\\\remote\\out\\save. You can then open them with DISEP and make any changes you want. Remember to backup your original save files before editing them, in case something goes wrong or you want to revert to them later.
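Before editing, it can help to script the backup step. Below is a minimal, illustrative Python sketch; the `<userid>` and `<appid>` path segments are placeholders you would substitute yourself, not values taken from the guide above.

```python
import shutil
from datetime import datetime
from pathlib import Path

# Placeholder path: fill in your own Steam user ID and the game's app ID.
SAVE_DIR = Path(r"C:\Program Files (x86)\Steam\userdata\<userid>\<appid>\remote\out\save")

def backup_saves(save_dir: Path) -> Path:
    """Copy the whole save folder to a timestamped backup directory next to it."""
    backup = save_dir.with_name(f"{save_dir.name}_backup_{datetime.now():%Y%m%d_%H%M%S}")
    shutil.copytree(save_dir, backup)
    return backup

if __name__ == "__main__":
    print("Backed up saves to", backup_saves(SAVE_DIR))
```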
-
-DISEP is easy to use and has a user-friendly interface. You can browse through different tabs and menus to access different features and options. You can also use the help button or visit the Steam Community Guide [^3^] for more information and tips on how to use DISEP effectively.
-
-With DISEP, you can enhance your gaming experience and enjoy Dead Island in new and exciting ways. Whether you want to hack/cheat the game, respec your character's skills, or reduce some unnecessary grind and annoyances, DISEP can help you achieve your goals. Try it out today and see for yourself!
-
-How to Install Dead Island Save Editor Premium
-
-If you want to use DISEP, you need to install it on your computer first. There are two ways to do this: using the installer or using the zip archive. Here are the steps for each method:
-
-Using the Installer
-
-
-Download the installer (setup) from Steffen L's website [^1^] or from Nexus Mods [^2^].
-Run the setup program. If Windows blocks you from running it to protect your computer, this happens because the setup program is not digitally signed. As long as the hash of the file matches the hash found on the DISE website then it is safe to tap "More info" and run it anyway.
-Choose whether you want to install DISEP for your Windows user account only or for all users.
-Choose the components you wish to install. A typical or complete install is recommended for the best user experience.
-If you are upgrading DISEP and wish to start fresh, you can tick the option to delete the preferences that were set before in DISEP.
-At the end of the installation, you can launch DISEP before exiting the installer, from the start menu under the folder "Steffen L".
-
-
-Using the Zip Archive
-
-
-Download one of the zip archives from Steffen L's website [^1^] or from Nexus Mods [^2^]. There are multiple versions of the zip archives which contain different things depending on your needs:
-
--full.zip contains everything that the installer contains to get the full experience, i.e. all assets and languages.
--typical.zip contains everything English-speakers need to get the full experience.
--minimal.zip contains the bare minimum for DISEP to work, i.e. without optional assets and additional languages.
-
-Extract one of the zip archives to your desired location.
-If you want to use DISEP in portable mode, create a file named portable in the same directory as the DISE executable (dise.exe) and user preferences will be stored in userpref.dat in the same directory as the executable.
-Run dise.exe to launch DISEP.
-
-
-Once you have installed DISEP, you can start modifying your save files and enjoy its features and improvements.
-
-
\ No newline at end of file
diff --git a/spaces/ivntl/MMS/vits/data_utils.py b/spaces/ivntl/MMS/vits/data_utils.py
deleted file mode 100644
index 4855699d23d5dee36d4a12e875c7465265caac0f..0000000000000000000000000000000000000000
--- a/spaces/ivntl/MMS/vits/data_utils.py
+++ /dev/null
@@ -1,392 +0,0 @@
-import time
-import os
-import random
-import numpy as np
-import torch
-import torch.utils.data
-
-import commons
-from mel_processing import spectrogram_torch
-from utils import load_wav_to_torch, load_filepaths_and_text
-from text import text_to_sequence, cleaned_text_to_sequence
-
-
-class TextAudioLoader(torch.utils.data.Dataset):
- """
- 1) loads audio, text pairs
- 2) normalizes text and converts them to sequences of integers
- 3) computes spectrograms from audio files.
- """
- def __init__(self, audiopaths_and_text, hparams):
- self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text)
- self.text_cleaners = hparams.text_cleaners
- self.max_wav_value = hparams.max_wav_value
- self.sampling_rate = hparams.sampling_rate
- self.filter_length = hparams.filter_length
- self.hop_length = hparams.hop_length
- self.win_length = hparams.win_length
- self.sampling_rate = hparams.sampling_rate
-
- self.cleaned_text = getattr(hparams, "cleaned_text", False)
-
- self.add_blank = hparams.add_blank
- self.min_text_len = getattr(hparams, "min_text_len", 1)
- self.max_text_len = getattr(hparams, "max_text_len", 190)
-
- random.seed(1234)
- random.shuffle(self.audiopaths_and_text)
- self._filter()
-
-
- def _filter(self):
- """
- Filter text & store spec lengths
- """
- # Store spectrogram lengths for Bucketing
- # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2)
- # spec_length = wav_length // hop_length
-
- audiopaths_and_text_new = []
- lengths = []
- for audiopath, text in self.audiopaths_and_text:
- if self.min_text_len <= len(text) and len(text) <= self.max_text_len:
- audiopaths_and_text_new.append([audiopath, text])
- lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length))
- self.audiopaths_and_text = audiopaths_and_text_new
- self.lengths = lengths
-
- def get_audio_text_pair(self, audiopath_and_text):
- # separate filename and text
- audiopath, text = audiopath_and_text[0], audiopath_and_text[1]
- text = self.get_text(text)
- spec, wav = self.get_audio(audiopath)
- return (text, spec, wav)
-
- def get_audio(self, filename):
- audio, sampling_rate = load_wav_to_torch(filename)
- if sampling_rate != self.sampling_rate:
-            raise ValueError("{} SR doesn't match target {} SR".format(
- sampling_rate, self.sampling_rate))
- audio_norm = audio / self.max_wav_value
- audio_norm = audio_norm.unsqueeze(0)
- spec_filename = filename.replace(".wav", ".spec.pt")
- if os.path.exists(spec_filename):
- spec = torch.load(spec_filename)
- else:
- spec = spectrogram_torch(audio_norm, self.filter_length,
- self.sampling_rate, self.hop_length, self.win_length,
- center=False)
- spec = torch.squeeze(spec, 0)
- torch.save(spec, spec_filename)
- return spec, audio_norm
-
- def get_text(self, text):
- if self.cleaned_text:
- text_norm = cleaned_text_to_sequence(text)
- else:
- text_norm = text_to_sequence(text, self.text_cleaners)
- if self.add_blank:
- text_norm = commons.intersperse(text_norm, 0)
- text_norm = torch.LongTensor(text_norm)
- return text_norm
-
- def __getitem__(self, index):
- return self.get_audio_text_pair(self.audiopaths_and_text[index])
-
- def __len__(self):
- return len(self.audiopaths_and_text)
-
-
-class TextAudioCollate():
- """ Zero-pads model inputs and targets
- """
- def __init__(self, return_ids=False):
- self.return_ids = return_ids
-
- def __call__(self, batch):
-        """Collates training batch from normalized text and audio
- PARAMS
- ------
- batch: [text_normalized, spec_normalized, wav_normalized]
- """
- # Right zero-pad all one-hot text sequences to max input length
- _, ids_sorted_decreasing = torch.sort(
- torch.LongTensor([x[1].size(1) for x in batch]),
- dim=0, descending=True)
-
- max_text_len = max([len(x[0]) for x in batch])
- max_spec_len = max([x[1].size(1) for x in batch])
- max_wav_len = max([x[2].size(1) for x in batch])
-
- text_lengths = torch.LongTensor(len(batch))
- spec_lengths = torch.LongTensor(len(batch))
- wav_lengths = torch.LongTensor(len(batch))
-
- text_padded = torch.LongTensor(len(batch), max_text_len)
- spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len)
- wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len)
- text_padded.zero_()
- spec_padded.zero_()
- wav_padded.zero_()
- for i in range(len(ids_sorted_decreasing)):
- row = batch[ids_sorted_decreasing[i]]
-
- text = row[0]
- text_padded[i, :text.size(0)] = text
- text_lengths[i] = text.size(0)
-
- spec = row[1]
- spec_padded[i, :, :spec.size(1)] = spec
- spec_lengths[i] = spec.size(1)
-
- wav = row[2]
- wav_padded[i, :, :wav.size(1)] = wav
- wav_lengths[i] = wav.size(1)
-
- if self.return_ids:
- return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, ids_sorted_decreasing
- return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths
-
-
-"""Multi speaker version"""
-class TextAudioSpeakerLoader(torch.utils.data.Dataset):
- """
- 1) loads audio, speaker_id, text pairs
- 2) normalizes text and converts them to sequences of integers
- 3) computes spectrograms from audio files.
- """
- def __init__(self, audiopaths_sid_text, hparams):
- self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text)
- self.text_cleaners = hparams.text_cleaners
- self.max_wav_value = hparams.max_wav_value
- self.sampling_rate = hparams.sampling_rate
- self.filter_length = hparams.filter_length
- self.hop_length = hparams.hop_length
- self.win_length = hparams.win_length
- self.sampling_rate = hparams.sampling_rate
-
- self.cleaned_text = getattr(hparams, "cleaned_text", False)
-
- self.add_blank = hparams.add_blank
- self.min_text_len = getattr(hparams, "min_text_len", 1)
- self.max_text_len = getattr(hparams, "max_text_len", 190)
-
- random.seed(1234)
- random.shuffle(self.audiopaths_sid_text)
- self._filter()
-
- def _filter(self):
- """
- Filter text & store spec lengths
- """
- # Store spectrogram lengths for Bucketing
- # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2)
- # spec_length = wav_length // hop_length
-
- audiopaths_sid_text_new = []
- lengths = []
- for audiopath, sid, text in self.audiopaths_sid_text:
- if self.min_text_len <= len(text) and len(text) <= self.max_text_len:
- audiopaths_sid_text_new.append([audiopath, sid, text])
- lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length))
- self.audiopaths_sid_text = audiopaths_sid_text_new
- self.lengths = lengths
-
- def get_audio_text_speaker_pair(self, audiopath_sid_text):
- # separate filename, speaker_id and text
- audiopath, sid, text = audiopath_sid_text[0], audiopath_sid_text[1], audiopath_sid_text[2]
- text = self.get_text(text)
- spec, wav = self.get_audio(audiopath)
- sid = self.get_sid(sid)
- return (text, spec, wav, sid)
-
- def get_audio(self, filename):
- audio, sampling_rate = load_wav_to_torch(filename)
- if sampling_rate != self.sampling_rate:
-            raise ValueError("{} SR doesn't match target {} SR".format(
- sampling_rate, self.sampling_rate))
- audio_norm = audio / self.max_wav_value
- audio_norm = audio_norm.unsqueeze(0)
- spec_filename = filename.replace(".wav", ".spec.pt")
- if os.path.exists(spec_filename):
- spec = torch.load(spec_filename)
- else:
- spec = spectrogram_torch(audio_norm, self.filter_length,
- self.sampling_rate, self.hop_length, self.win_length,
- center=False)
- spec = torch.squeeze(spec, 0)
- torch.save(spec, spec_filename)
- return spec, audio_norm
-
- def get_text(self, text):
- if self.cleaned_text:
- text_norm = cleaned_text_to_sequence(text)
- else:
- text_norm = text_to_sequence(text, self.text_cleaners)
- if self.add_blank:
- text_norm = commons.intersperse(text_norm, 0)
- text_norm = torch.LongTensor(text_norm)
- return text_norm
-
- def get_sid(self, sid):
- sid = torch.LongTensor([int(sid)])
- return sid
-
- def __getitem__(self, index):
- return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index])
-
- def __len__(self):
- return len(self.audiopaths_sid_text)
-
-
-class TextAudioSpeakerCollate():
- """ Zero-pads model inputs and targets
- """
- def __init__(self, return_ids=False):
- self.return_ids = return_ids
-
- def __call__(self, batch):
-        """Collates training batch from normalized text, audio and speaker identities
- PARAMS
- ------
- batch: [text_normalized, spec_normalized, wav_normalized, sid]
- """
- # Right zero-pad all one-hot text sequences to max input length
- _, ids_sorted_decreasing = torch.sort(
- torch.LongTensor([x[1].size(1) for x in batch]),
- dim=0, descending=True)
-
- max_text_len = max([len(x[0]) for x in batch])
- max_spec_len = max([x[1].size(1) for x in batch])
- max_wav_len = max([x[2].size(1) for x in batch])
-
- text_lengths = torch.LongTensor(len(batch))
- spec_lengths = torch.LongTensor(len(batch))
- wav_lengths = torch.LongTensor(len(batch))
- sid = torch.LongTensor(len(batch))
-
- text_padded = torch.LongTensor(len(batch), max_text_len)
- spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len)
- wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len)
- text_padded.zero_()
- spec_padded.zero_()
- wav_padded.zero_()
- for i in range(len(ids_sorted_decreasing)):
- row = batch[ids_sorted_decreasing[i]]
-
- text = row[0]
- text_padded[i, :text.size(0)] = text
- text_lengths[i] = text.size(0)
-
- spec = row[1]
- spec_padded[i, :, :spec.size(1)] = spec
- spec_lengths[i] = spec.size(1)
-
- wav = row[2]
- wav_padded[i, :, :wav.size(1)] = wav
- wav_lengths[i] = wav.size(1)
-
- sid[i] = row[3]
-
- if self.return_ids:
- return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, ids_sorted_decreasing
- return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid
-
-
-class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler):
- """
- Maintain similar input lengths in a batch.
- Length groups are specified by boundaries.
- Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}.
-
- It removes samples which are not included in the boundaries.
- Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded.
- """
- def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True):
- super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)
- self.lengths = dataset.lengths
- self.batch_size = batch_size
- self.boundaries = boundaries
-
- self.buckets, self.num_samples_per_bucket = self._create_buckets()
- self.total_size = sum(self.num_samples_per_bucket)
- self.num_samples = self.total_size // self.num_replicas
-
- def _create_buckets(self):
- buckets = [[] for _ in range(len(self.boundaries) - 1)]
- for i in range(len(self.lengths)):
- length = self.lengths[i]
- idx_bucket = self._bisect(length)
- if idx_bucket != -1:
- buckets[idx_bucket].append(i)
-
- for i in range(len(buckets) - 1, 0, -1):
- if len(buckets[i]) == 0:
- buckets.pop(i)
- self.boundaries.pop(i+1)
-
- num_samples_per_bucket = []
- for i in range(len(buckets)):
- len_bucket = len(buckets[i])
- total_batch_size = self.num_replicas * self.batch_size
- rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size
- num_samples_per_bucket.append(len_bucket + rem)
- return buckets, num_samples_per_bucket
-
- def __iter__(self):
- # deterministically shuffle based on epoch
- g = torch.Generator()
- g.manual_seed(self.epoch)
-
- indices = []
- if self.shuffle:
- for bucket in self.buckets:
- indices.append(torch.randperm(len(bucket), generator=g).tolist())
- else:
- for bucket in self.buckets:
- indices.append(list(range(len(bucket))))
-
- batches = []
- for i in range(len(self.buckets)):
- bucket = self.buckets[i]
- len_bucket = len(bucket)
- ids_bucket = indices[i]
- num_samples_bucket = self.num_samples_per_bucket[i]
-
- # add extra samples to make it evenly divisible
- rem = num_samples_bucket - len_bucket
- ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)]
-
- # subsample
- ids_bucket = ids_bucket[self.rank::self.num_replicas]
-
- # batching
- for j in range(len(ids_bucket) // self.batch_size):
- batch = [bucket[idx] for idx in ids_bucket[j*self.batch_size:(j+1)*self.batch_size]]
- batches.append(batch)
-
- if self.shuffle:
- batch_ids = torch.randperm(len(batches), generator=g).tolist()
- batches = [batches[i] for i in batch_ids]
- self.batches = batches
-
- assert len(self.batches) * self.batch_size == self.num_samples
- return iter(self.batches)
-
- def _bisect(self, x, lo=0, hi=None):
- if hi is None:
- hi = len(self.boundaries) - 1
-
- if hi > lo:
- mid = (hi + lo) // 2
- if self.boundaries[mid] < x and x <= self.boundaries[mid+1]:
- return mid
- elif x <= self.boundaries[mid]:
- return self._bisect(x, lo, mid)
- else:
- return self._bisect(x, mid + 1, hi)
- else:
- return -1
-
- def __len__(self):
- return self.num_samples // self.batch_size
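For context, here is a sketch of how these classes are typically wired together in a VITS-style training setup; the file list path, the `hparams` object, and the boundary values are illustrative assumptions, not taken from this Space's configs.

```python
import torch

# `hparams` is assumed to carry the usual VITS audio/text settings
# (max_wav_value, sampling_rate, filter_length, hop_length, win_length, text_cleaners, add_blank, ...).
dataset = TextAudioLoader("filelists/train.txt", hparams)  # builds dataset.lengths in _filter()

sampler = DistributedBucketSampler(
    dataset,
    batch_size=16,
    boundaries=[32, 300, 400, 500, 600, 700, 800, 900, 1000],  # spectrogram-length buckets
    num_replicas=1,
    rank=0,
    shuffle=True,
)

loader = torch.utils.data.DataLoader(
    dataset,
    batch_sampler=sampler,        # each batch groups utterances of similar length
    collate_fn=TextAudioCollate(),
    num_workers=2,
)

for text, text_lens, spec, spec_lens, wav, wav_lens in loader:
    pass  # feed the padded batch to the model
```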
diff --git a/spaces/jbochi/Candle-CoEdIT-Wasm/utils.js b/spaces/jbochi/Candle-CoEdIT-Wasm/utils.js
deleted file mode 100644
index d59261a74d3817cb0c4471944690190ff814073d..0000000000000000000000000000000000000000
--- a/spaces/jbochi/Candle-CoEdIT-Wasm/utils.js
+++ /dev/null
@@ -1,163 +0,0 @@
-export async function extractEmbeddings(
- worker,
- weightsURL,
- tokenizerURL,
- configURL,
- modelID,
- sentences,
- updateStatus,
- normalize_embeddings = true
-) {
- return new Promise((resolve, reject) => {
- worker.postMessage({
- weightsURL,
- tokenizerURL,
- configURL,
- modelID,
- sentences,
- normalize_embeddings,
- });
- function messageHandler(event) {
- if ("error" in event.data) {
- worker.removeEventListener("message", messageHandler);
- reject(new Error(event.data.error));
- }
- if (event.data.status === "complete") {
- worker.removeEventListener("message", messageHandler);
- resolve(event.data);
- }
- if (updateStatus) updateStatus(event.data);
- }
- worker.addEventListener("message", messageHandler);
- });
-}
-
-export async function generateText(
- worker,
- weightsURL,
- tokenizerURL,
- configURL,
- modelID,
- prompt,
- params,
- updateStatus
-) {
- return new Promise((resolve, reject) => {
- worker.postMessage({
- weightsURL,
- tokenizerURL,
- configURL,
- modelID,
- prompt,
- params,
- });
- function messageHandler(event) {
- if ("error" in event.data) {
- worker.removeEventListener("message", messageHandler);
- reject(new Error(event.data.error));
- }
- if (event.data.status === "complete") {
- worker.removeEventListener("message", messageHandler);
- resolve(event.data);
- }
- if (updateStatus) updateStatus(event.data);
- }
- worker.addEventListener("message", messageHandler);
- });
-}
-
-const TASKS = {
- fluency: {
- prefix: "Fix the grammar: ",
- max_length: 300,
- },
- coherence: {
- prefix: "Rewrite to make this easier to understand: ",
- max_length: 300,
- },
- simplification: {
- prefix: "translate English to Romanian: ",
- max_length: 300,
- },
-  paraphrasing: {
- prefix: "Paraphrase this: ",
- max_length: 300,
- },
- formalization: {
- prefix: "Write this more formally: ",
- max_length: 300,
- },
- neutralize: {
- prefix: "Write in a more neutral way: ",
- max_length: 300,
- },
-};
-
-export const MODELS = {
- coedit_large_quantized_4k: {
- size: "441 MB",
- base_url: "https://huggingface.co/jbochi/candle-coedit-quantized/resolve/main/",
- model: "model-q4k.gguf",
- tokenizer: "tokenizer.json",
- config: "config.json",
- tasks: TASKS,
- },
- coedit_large_quantized_4_0: {
- size: "441 MB",
- base_url: "https://huggingface.co/jbochi/candle-coedit-quantized/resolve/main/",
- model: "model-q4_0.gguf",
- tokenizer: "tokenizer.json",
- config: "config.json",
- tasks: TASKS,
- },
- coedit_large_quantized_6k: {
- size: "643 MB",
- base_url: "https://huggingface.co/jbochi/candle-coedit-quantized/resolve/main/",
- model: "model.gguf",
- tokenizer: "tokenizer.json",
- config: "config.json",
- tasks: TASKS,
- },
- coedit_xl_quantized_4k: {
- size: "1.6 GB",
- base_url: "https://huggingface.co/jbochi/candle-coedit-quantized/resolve/main/",
- model: "model-xl-q4k.gguf",
- tokenizer: "tokenizer.json",
- config: "config-xl.json",
- tasks: TASKS,
- },
- coedit_xl_quantized_4_0: {
- size: "1.6 GB",
- base_url: "https://huggingface.co/jbochi/candle-coedit-quantized/resolve/main/",
- model: "model-xl-q4_0.gguf",
- tokenizer: "tokenizer.json",
- config: "config.json",
- tasks: TASKS,
- },
- coedit_xl_quantized_6k: {
- size: "2.34 GB",
- base_url: "https://huggingface.co/jbochi/candle-coedit-quantized/resolve/main/",
- model: "model-xl.gguf",
- tokenizer: "tokenizer.json",
- config: "config-xl.json",
- tasks: TASKS,
- },
- coedit_large: {
- size: "3.13 GB",
- base_url: "https://huggingface.co/grammarly/coedit-large/resolve/main/",
- model: "model.safetensors",
- tokenizer: "tokenizer.json",
- config: "config.json",
- tasks: TASKS,
- },
-};
-
-export function getModelInfo(id, taskID) {
- const model = MODELS[id];
- return {
- modelURL: model.base_url + model.model,
- configURL: model.base_url + model.config,
- tokenizerURL: model.base_url + model.tokenizer,
- maxLength: model.tasks[taskID].max_length,
- };
-};
diff --git a/spaces/jgurzoni/image_background_swapper/saicinpainting/training/losses/style_loss.py b/spaces/jgurzoni/image_background_swapper/saicinpainting/training/losses/style_loss.py
deleted file mode 100644
index 0bb42d7fbc5d17a47bec7365889868505f5fdfb5..0000000000000000000000000000000000000000
--- a/spaces/jgurzoni/image_background_swapper/saicinpainting/training/losses/style_loss.py
+++ /dev/null
@@ -1,155 +0,0 @@
-import torch
-import torch.nn as nn
-import torchvision.models as models
-
-
-class PerceptualLoss(nn.Module):
- r"""
- Perceptual loss, VGG-based
- https://arxiv.org/abs/1603.08155
- https://github.com/dxyang/StyleTransfer/blob/master/utils.py
- """
-
- def __init__(self, weights=[1.0, 1.0, 1.0, 1.0, 1.0]):
- super(PerceptualLoss, self).__init__()
- self.add_module('vgg', VGG19())
- self.criterion = torch.nn.L1Loss()
- self.weights = weights
-
- def __call__(self, x, y):
- # Compute features
- x_vgg, y_vgg = self.vgg(x), self.vgg(y)
-
- content_loss = 0.0
- content_loss += self.weights[0] * self.criterion(x_vgg['relu1_1'], y_vgg['relu1_1'])
- content_loss += self.weights[1] * self.criterion(x_vgg['relu2_1'], y_vgg['relu2_1'])
- content_loss += self.weights[2] * self.criterion(x_vgg['relu3_1'], y_vgg['relu3_1'])
- content_loss += self.weights[3] * self.criterion(x_vgg['relu4_1'], y_vgg['relu4_1'])
- content_loss += self.weights[4] * self.criterion(x_vgg['relu5_1'], y_vgg['relu5_1'])
-
-
- return content_loss
-
-
-class VGG19(torch.nn.Module):
- def __init__(self):
- super(VGG19, self).__init__()
- features = models.vgg19(pretrained=True).features
- self.relu1_1 = torch.nn.Sequential()
- self.relu1_2 = torch.nn.Sequential()
-
- self.relu2_1 = torch.nn.Sequential()
- self.relu2_2 = torch.nn.Sequential()
-
- self.relu3_1 = torch.nn.Sequential()
- self.relu3_2 = torch.nn.Sequential()
- self.relu3_3 = torch.nn.Sequential()
- self.relu3_4 = torch.nn.Sequential()
-
- self.relu4_1 = torch.nn.Sequential()
- self.relu4_2 = torch.nn.Sequential()
- self.relu4_3 = torch.nn.Sequential()
- self.relu4_4 = torch.nn.Sequential()
-
- self.relu5_1 = torch.nn.Sequential()
- self.relu5_2 = torch.nn.Sequential()
- self.relu5_3 = torch.nn.Sequential()
- self.relu5_4 = torch.nn.Sequential()
-
- for x in range(2):
- self.relu1_1.add_module(str(x), features[x])
-
- for x in range(2, 4):
- self.relu1_2.add_module(str(x), features[x])
-
- for x in range(4, 7):
- self.relu2_1.add_module(str(x), features[x])
-
- for x in range(7, 9):
- self.relu2_2.add_module(str(x), features[x])
-
- for x in range(9, 12):
- self.relu3_1.add_module(str(x), features[x])
-
- for x in range(12, 14):
- self.relu3_2.add_module(str(x), features[x])
-
- for x in range(14, 16):
-            self.relu3_3.add_module(str(x), features[x])
-
- for x in range(16, 18):
- self.relu3_4.add_module(str(x), features[x])
-
- for x in range(18, 21):
- self.relu4_1.add_module(str(x), features[x])
-
- for x in range(21, 23):
- self.relu4_2.add_module(str(x), features[x])
-
- for x in range(23, 25):
- self.relu4_3.add_module(str(x), features[x])
-
- for x in range(25, 27):
- self.relu4_4.add_module(str(x), features[x])
-
- for x in range(27, 30):
- self.relu5_1.add_module(str(x), features[x])
-
- for x in range(30, 32):
- self.relu5_2.add_module(str(x), features[x])
-
- for x in range(32, 34):
- self.relu5_3.add_module(str(x), features[x])
-
- for x in range(34, 36):
- self.relu5_4.add_module(str(x), features[x])
-
- # don't need the gradients, just want the features
- for param in self.parameters():
- param.requires_grad = False
-
- def forward(self, x):
- relu1_1 = self.relu1_1(x)
- relu1_2 = self.relu1_2(relu1_1)
-
- relu2_1 = self.relu2_1(relu1_2)
- relu2_2 = self.relu2_2(relu2_1)
-
- relu3_1 = self.relu3_1(relu2_2)
- relu3_2 = self.relu3_2(relu3_1)
- relu3_3 = self.relu3_3(relu3_2)
- relu3_4 = self.relu3_4(relu3_3)
-
- relu4_1 = self.relu4_1(relu3_4)
- relu4_2 = self.relu4_2(relu4_1)
- relu4_3 = self.relu4_3(relu4_2)
- relu4_4 = self.relu4_4(relu4_3)
-
- relu5_1 = self.relu5_1(relu4_4)
- relu5_2 = self.relu5_2(relu5_1)
- relu5_3 = self.relu5_3(relu5_2)
- relu5_4 = self.relu5_4(relu5_3)
-
- out = {
- 'relu1_1': relu1_1,
- 'relu1_2': relu1_2,
-
- 'relu2_1': relu2_1,
- 'relu2_2': relu2_2,
-
- 'relu3_1': relu3_1,
- 'relu3_2': relu3_2,
- 'relu3_3': relu3_3,
- 'relu3_4': relu3_4,
-
- 'relu4_1': relu4_1,
- 'relu4_2': relu4_2,
- 'relu4_3': relu4_3,
- 'relu4_4': relu4_4,
-
- 'relu5_1': relu5_1,
- 'relu5_2': relu5_2,
- 'relu5_3': relu5_3,
- 'relu5_4': relu5_4,
- }
- return out
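A minimal usage sketch of the loss above (the batch shape and the [0, 1] value range are assumptions; the pretrained VGG19 weights are downloaded by torchvision on first use):

```python
import torch

perceptual = PerceptualLoss()  # default per-layer weights [1.0] * 5

pred = torch.rand(4, 3, 256, 256, requires_grad=True)  # e.g. generator output
target = torch.rand(4, 3, 256, 256)                    # ground-truth images

loss = perceptual(pred, target)  # L1 distance between VGG19 relu*_1 feature maps
loss.backward()                  # gradients flow into `pred` only; VGG weights are frozen
```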
diff --git a/spaces/jiejiejie0420/bingo/src/components/ui/badge.tsx b/spaces/jiejiejie0420/bingo/src/components/ui/badge.tsx
deleted file mode 100644
index d9a84b394090e5b4b3bd34f6135b9a2f2ead0aa2..0000000000000000000000000000000000000000
--- a/spaces/jiejiejie0420/bingo/src/components/ui/badge.tsx
+++ /dev/null
@@ -1,36 +0,0 @@
-import * as React from 'react'
-import { cva, type VariantProps } from 'class-variance-authority'
-
-import { cn } from '@/lib/utils'
-
-const badgeVariants = cva(
- 'inline-flex items-center rounded-full border px-2.5 py-0.5 text-xs font-semibold transition-colors focus:outline-none focus:ring-2 focus:ring-ring focus:ring-offset-2',
- {
- variants: {
- variant: {
- default:
- 'border-transparent bg-primary text-primary-foreground hover:bg-primary/80',
- secondary:
- 'border-transparent bg-secondary text-secondary-foreground hover:bg-secondary/80',
- destructive:
- 'border-transparent bg-destructive text-destructive-foreground hover:bg-destructive/80',
- outline: 'text-foreground'
- }
- },
- defaultVariants: {
- variant: 'default'
- }
- }
-)
-
-export interface BadgeProps
-  extends React.HTMLAttributes<HTMLDivElement>,
-    VariantProps<typeof badgeVariants> {}
-
-function Badge({ className, variant, ...props }: BadgeProps) {
- return (
-    <div className={cn(badgeVariants({ variant }), className)} {...props} />
- )
-}
-
-export { Badge, badgeVariants }
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/quic/__init__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/quic/__init__.py
deleted file mode 100644
index 69813f9f18cc28eac706225187fb93c342aed95b..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/quic/__init__.py
+++ /dev/null
@@ -1,75 +0,0 @@
-# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
-
-try:
- import aioquic.quic.configuration # type: ignore
-
- import dns.asyncbackend
- from dns._asyncbackend import NullContext
- from dns.quic._asyncio import (
- AsyncioQuicConnection,
- AsyncioQuicManager,
- AsyncioQuicStream,
- )
- from dns.quic._common import AsyncQuicConnection, AsyncQuicManager
- from dns.quic._sync import SyncQuicConnection, SyncQuicManager, SyncQuicStream
-
- have_quic = True
-
- def null_factory(
- *args, # pylint: disable=unused-argument
- **kwargs # pylint: disable=unused-argument
- ):
- return NullContext(None)
-
- def _asyncio_manager_factory(
- context, *args, **kwargs # pylint: disable=unused-argument
- ):
- return AsyncioQuicManager(*args, **kwargs)
-
- # We have a context factory and a manager factory as for trio we need to have
- # a nursery.
-
- _async_factories = {"asyncio": (null_factory, _asyncio_manager_factory)}
-
- try:
- import trio
-
- from dns.quic._trio import ( # pylint: disable=ungrouped-imports
- TrioQuicConnection,
- TrioQuicManager,
- TrioQuicStream,
- )
-
- def _trio_context_factory():
- return trio.open_nursery()
-
- def _trio_manager_factory(context, *args, **kwargs):
- return TrioQuicManager(context, *args, **kwargs)
-
- _async_factories["trio"] = (_trio_context_factory, _trio_manager_factory)
- except ImportError:
- pass
-
- def factories_for_backend(backend=None):
- if backend is None:
- backend = dns.asyncbackend.get_default_backend()
- return _async_factories[backend.name()]
-
-except ImportError:
- have_quic = False
-
- from typing import Any
-
- class AsyncQuicStream: # type: ignore
- pass
-
- class AsyncQuicConnection: # type: ignore
- async def make_stream(self) -> Any:
- raise NotImplementedError
-
- class SyncQuicStream: # type: ignore
- pass
-
- class SyncQuicConnection: # type: ignore
- def make_stream(self) -> Any:
- raise NotImplementedError
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/SOA.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/SOA.py
deleted file mode 100644
index bde55e15fa53ccecc33f6fcabef589aef293d18f..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/SOA.py
+++ /dev/null
@@ -1,87 +0,0 @@
-# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license
-
-# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc.
-#
-# Permission to use, copy, modify, and distribute this software and its
-# documentation for any purpose with or without fee is hereby granted,
-# provided that the above copyright notice and this permission notice
-# appear in all copies.
-#
-# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES
-# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
-# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR
-# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
-# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
-# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
-# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
-
-import struct
-
-import dns.exception
-import dns.immutable
-import dns.name
-import dns.rdata
-
-
-@dns.immutable.immutable
-class SOA(dns.rdata.Rdata):
-
- """SOA record"""
-
- # see: RFC 1035
-
- __slots__ = ["mname", "rname", "serial", "refresh", "retry", "expire", "minimum"]
-
- def __init__(
- self, rdclass, rdtype, mname, rname, serial, refresh, retry, expire, minimum
- ):
- super().__init__(rdclass, rdtype)
- self.mname = self._as_name(mname)
- self.rname = self._as_name(rname)
- self.serial = self._as_uint32(serial)
- self.refresh = self._as_ttl(refresh)
- self.retry = self._as_ttl(retry)
- self.expire = self._as_ttl(expire)
- self.minimum = self._as_ttl(minimum)
-
- def to_text(self, origin=None, relativize=True, **kw):
- mname = self.mname.choose_relativity(origin, relativize)
- rname = self.rname.choose_relativity(origin, relativize)
- return "%s %s %d %d %d %d %d" % (
- mname,
- rname,
- self.serial,
- self.refresh,
- self.retry,
- self.expire,
- self.minimum,
- )
-
- @classmethod
- def from_text(
- cls, rdclass, rdtype, tok, origin=None, relativize=True, relativize_to=None
- ):
- mname = tok.get_name(origin, relativize, relativize_to)
- rname = tok.get_name(origin, relativize, relativize_to)
- serial = tok.get_uint32()
- refresh = tok.get_ttl()
- retry = tok.get_ttl()
- expire = tok.get_ttl()
- minimum = tok.get_ttl()
- return cls(
- rdclass, rdtype, mname, rname, serial, refresh, retry, expire, minimum
- )
-
- def _to_wire(self, file, compress=None, origin=None, canonicalize=False):
- self.mname.to_wire(file, compress, origin, canonicalize)
- self.rname.to_wire(file, compress, origin, canonicalize)
- five_ints = struct.pack(
- "!IIIII", self.serial, self.refresh, self.retry, self.expire, self.minimum
- )
- file.write(five_ints)
-
- @classmethod
- def from_wire_parser(cls, rdclass, rdtype, parser, origin=None):
- mname = parser.get_name(origin)
- rname = parser.get_name(origin)
- return cls(rdclass, rdtype, mname, rname, *parser.get_struct("!IIIII"))
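As a quick sketch of parsing and re-serialising an SOA record with the class above, dnspython's generic `dns.rdata.from_text` helper can be used; the zone values below are made up for the example.

```python
import dns.rdata

soa = dns.rdata.from_text(
    "IN", "SOA",
    "ns1.example.com. hostmaster.example.com. 2024010101 7200 3600 1209600 300",
)

print(soa.mname, soa.rname)      # primary name server and responsible mailbox
print(soa.serial, soa.minimum)   # 2024010101 300
print(soa.to_text())             # the same seven-field form produced by to_text()
```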
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/M_A_T_H_.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/M_A_T_H_.py
deleted file mode 100644
index 011426b52a195bb2596116cc7bce0ad6e671eb23..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/M_A_T_H_.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from .otBase import BaseTTXConverter
-
-
-class table_M_A_T_H_(BaseTTXConverter):
- pass
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I_D_.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I_D_.py
deleted file mode 100644
index 536ff2f98a0abb8b27fe6da44199534a32fd0c3e..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fontTools/ttLib/tables/T_S_I_D_.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from .T_S_I_V_ import table_T_S_I_V_
-
-
-class table_T_S_I_D_(table_T_S_I_V_):
- pass
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/prompts/prompts.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/prompts/prompts.py
deleted file mode 100644
index f3f29216d17c202d8482ab107dc35924d0644588..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/prompts/prompts.py
+++ /dev/null
@@ -1,261 +0,0 @@
-"""Subclasses from base prompt."""
-from typing import List
-
-from gpt_index.prompts.base import Prompt
-from gpt_index.prompts.prompt_type import PromptType
-
-
-class SummaryPrompt(Prompt):
- """Summary prompt.
-
- Prompt to summarize the provided `context_str`.
-
- Required template variables: `context_str`
-
- Args:
- template (str): Template for the prompt.
- **prompt_kwargs: Keyword arguments for the prompt.
-
- """
-
- prompt_type: PromptType = PromptType.SUMMARY
- input_variables: List[str] = ["context_str"]
-
-
-class TreeInsertPrompt(Prompt):
- """Tree Insert prompt.
-
- Prompt to insert a new chunk of text `new_chunk_text` into the tree index.
- More specifically, this prompt has the LLM select the relevant candidate
- child node to continue tree traversal.
-
- Required template variables: `num_chunks`, `context_list`, `new_chunk_text`
-
- Args:
- template (str): Template for the prompt.
- **prompt_kwargs: Keyword arguments for the prompt.
-
- """
-
- prompt_type: PromptType = PromptType.TREE_INSERT
- input_variables: List[str] = ["num_chunks", "context_list", "new_chunk_text"]
-
-
-class TreeSelectPrompt(Prompt):
- """Tree select prompt.
-
- Prompt to select a candidate child node out of all child nodes
- provided in `context_list`, given a query `query_str`. `num_chunks` is
- the number of child nodes in `context_list`.
-
- Required template variables: `num_chunks`, `context_list`, `query_str`
-
- Args:
- template (str): Template for the prompt.
- **prompt_kwargs: Keyword arguments for the prompt.
-
- """
-
- prompt_type: PromptType = PromptType.TREE_SELECT
- input_variables: List[str] = ["num_chunks", "context_list", "query_str"]
-
-
-class TreeSelectMultiplePrompt(Prompt):
- """Tree select multiple prompt.
-
- Prompt to select multiple candidate child nodes out of all
- child nodes provided in `context_list`, given a query `query_str`.
- `branching_factor` refers to the number of child nodes to select, and
- `num_chunks` is the number of child nodes in `context_list`.
-
- Required template variables: `num_chunks`, `context_list`, `query_str`,
- `branching_factor`
-
- Args:
- template (str): Template for the prompt.
- **prompt_kwargs: Keyword arguments for the prompt.
-
- """
-
- prompt_type = PromptType.TREE_SELECT_MULTIPLE
- input_variables: List[str] = [
- "num_chunks",
- "context_list",
- "query_str",
- "branching_factor",
- ]
-
-
-class RefinePrompt(Prompt):
- """Refine prompt.
-
- Prompt to refine an existing answer `existing_answer` given a context `context_msg`,
- and a query `query_str`.
-
- Required template variables: `query_str`, `existing_answer`, `context_msg`
-
- Args:
- template (str): Template for the prompt.
- **prompt_kwargs: Keyword arguments for the prompt.
-
- """
-
- # TODO: rename context_msg to context_str
-
- prompt_type: PromptType = PromptType.REFINE
- input_variables: List[str] = ["query_str", "existing_answer", "context_msg"]
-
-
-class QuestionAnswerPrompt(Prompt):
- """Question Answer prompt.
-
- Prompt to answer a question `query_str` given a context `context_str`.
-
- Required template variables: `context_str`, `query_str`
-
- Args:
- template (str): Template for the prompt.
- **prompt_kwargs: Keyword arguments for the prompt.
-
- """
-
- prompt_type: PromptType = PromptType.QUESTION_ANSWER
- input_variables: List[str] = ["context_str", "query_str"]
-
-
-class KeywordExtractPrompt(Prompt):
- """Keyword extract prompt.
-
- Prompt to extract keywords from a text `text` with a maximum of
- `max_keywords` keywords.
-
- Required template variables: `text`, `max_keywords`
-
- Args:
- template (str): Template for the prompt.
- **prompt_kwargs: Keyword arguments for the prompt.
-
- """
-
- prompt_type: PromptType = PromptType.KEYWORD_EXTRACT
- input_variables: List[str] = ["text", "max_keywords"]
-
-
-class QueryKeywordExtractPrompt(Prompt):
- """Query keyword extract prompt.
-
- Prompt to extract keywords from a query `query_str` with a maximum
- of `max_keywords` keywords.
-
-    Required template variables: `question`, `max_keywords`
-
- Args:
- template (str): Template for the prompt.
- **prompt_kwargs: Keyword arguments for the prompt.
-
- """
-
- prompt_type: PromptType = PromptType.QUERY_KEYWORD_EXTRACT
- input_variables: List[str] = ["question", "max_keywords"]
-
-
-class SchemaExtractPrompt(Prompt):
- """Schema extract prompt.
-
- Prompt to extract schema from unstructured text `text`.
-
- Required template variables: `text`, `schema`
-
- Args:
- template (str): Template for the prompt.
- **prompt_kwargs: Keyword arguments for the prompt.
-
- """
-
- prompt_type: PromptType = PromptType.SCHEMA_EXTRACT
- input_variables: List[str] = ["text", "schema"]
-
-
-class TextToSQLPrompt(Prompt):
- """Text to SQL prompt.
-
- Prompt to translate a natural language query into SQL,
- given a schema `schema`.
-
- Required template variables: `query_str`, `schema`
-
- Args:
- template (str): Template for the prompt.
- **prompt_kwargs: Keyword arguments for the prompt.
-
- """
-
- prompt_type: PromptType = PromptType.TEXT_TO_SQL
- input_variables: List[str] = ["query_str", "schema"]
-
-
-class TableContextPrompt(Prompt):
- """Table context prompt.
-
- Prompt to generate a table context given a table schema `schema`,
- as well as unstructured text context `context_str`, and
- a task `query_str`.
- This includes both a high-level description of the table
- as well as a description of each column in the table.
-
- Args:
- template (str): Template for the prompt.
- **prompt_kwargs: Keyword arguments for the prompt.
-
- """
-
- prompt_type: PromptType = PromptType.TABLE_CONTEXT
- input_variables: List[str] = ["schema", "context_str", "query_str"]
-
-
-class RefineTableContextPrompt(Prompt):
- """Refine Table context prompt.
-
- Prompt to refine a table context given a table schema `schema`,
- as well as unstructured text context `context_msg`, and
- a task `query_str`.
- This includes both a high-level description of the table
- as well as a description of each column in the table.
-
- Args:
- template (str): Template for the prompt.
- **prompt_kwargs: Keyword arguments for the prompt.
-
- """
-
- # TODO: rename context_msg to context_str
-
- prompt_type: PromptType = PromptType.TABLE_CONTEXT
- input_variables: List[str] = [
- "schema",
- "context_msg",
- "query_str",
- "existing_answer",
- ]
-
-
-class KnowledgeGraphPrompt(Prompt):
- """Define the knowledge graph triplet extraction prompt."""
-
- prompt_type: PromptType = PromptType.KNOWLEDGE_TRIPLET_EXTRACT
- input_variables: List[str] = ["max_knowledge_triplets", "text"]
-
-
-class SimpleInputPrompt(Prompt):
- """Simple Input prompt.
-
- Required template variables: `query_str`.
-
- Args:
- template (str): Template for the prompt.
- **prompt_kwargs: Keyword arguments for the prompt.
-
- """
-
- prompt_type: PromptType = PromptType.SIMPLE_INPUT
- input_variables: List[str] = ["query_str"]
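For orientation, a sketch of defining custom prompts with these classes, following the gpt_index custom-prompt pattern (the template wording is illustrative, and the base `Prompt` constructor is assumed to accept a template string, as in the gpt_index documentation):

```python
from gpt_index.prompts.prompts import QuestionAnswerPrompt, RefinePrompt

# Templates must contain exactly the required variables listed in each class above.
qa_prompt = QuestionAnswerPrompt(
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information, answer the question: {query_str}\n"
)

refine_prompt = RefinePrompt(
    "The original question is: {query_str}\n"
    "The existing answer is: {existing_answer}\n"
    "Refine the existing answer using the new context: {context_msg}\n"
)
```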
diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/token_counter/token_counter.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/token_counter/token_counter.py
deleted file mode 100644
index ee4b9ac3b34abb65842a80d4ae2390733c1542c1..0000000000000000000000000000000000000000
--- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/gpt_index/token_counter/token_counter.py
+++ /dev/null
@@ -1,75 +0,0 @@
-"""Token counter function."""
-
-import logging
-from typing import Any, Callable, cast
-
-from gpt_index.embeddings.base import BaseEmbedding
-from gpt_index.langchain_helpers.chain_wrapper import LLMPredictor
-
-
-def llm_token_counter(method_name_str: str) -> Callable:
- """
- Use this as a decorator for methods in index/query classes that make calls to LLMs.
-
- At the moment, this decorator can only be used on class instance methods with a
- `_llm_predictor` attribute.
-
- Do not use this on abstract methods.
-
- For example, consider the class below:
- .. code-block:: python
- class GPTTreeIndexBuilder:
- ...
- @llm_token_counter("build_from_text")
- def build_from_text(self, documents: Sequence[BaseDocument]) -> IndexGraph:
- ...
-
- If you run `build_from_text()`, it will print the output in the form below:
-
- ```
-    [build_from_text] Total token usage: <token count> tokens
- ```
- """
-
- def wrap(f: Callable) -> Callable:
- def wrapped_llm_predict(_self: Any, *args: Any, **kwargs: Any) -> Any:
- llm_predictor = getattr(_self, "_llm_predictor", None)
- if llm_predictor is None:
- raise ValueError(
- "Cannot use llm_token_counter on an instance "
- "without a _llm_predictor attribute."
- )
- llm_predictor = cast(LLMPredictor, llm_predictor)
-
- embed_model = getattr(_self, "_embed_model", None)
- if embed_model is None:
- raise ValueError(
- "Cannot use llm_token_counter on an instance "
- "without a _embed_model attribute."
- )
- embed_model = cast(BaseEmbedding, embed_model)
-
- start_token_ct = llm_predictor.total_tokens_used
- start_embed_token_ct = embed_model.total_tokens_used
-
- f_return_val = f(_self, *args, **kwargs)
-
- net_tokens = llm_predictor.total_tokens_used - start_token_ct
- llm_predictor.last_token_usage = net_tokens
- net_embed_tokens = embed_model.total_tokens_used - start_embed_token_ct
- embed_model.last_token_usage = net_embed_tokens
-
- # print outputs
- logging.info(
- f"> [{method_name_str}] Total LLM token usage: {net_tokens} tokens"
- )
- logging.info(
- f"> [{method_name_str}] Total embedding token usage: "
- f"{net_embed_tokens} tokens"
- )
-
- return f_return_val
-
- return wrapped_llm_predict
-
- return wrap
diff --git a/spaces/jonathanjordan21/ads-video-generator/components/pexels.py b/spaces/jonathanjordan21/ads-video-generator/components/pexels.py
deleted file mode 100644
index 447340e734bf098688e7fd7f22499e838b4f7de4..0000000000000000000000000000000000000000
--- a/spaces/jonathanjordan21/ads-video-generator/components/pexels.py
+++ /dev/null
@@ -1,81 +0,0 @@
-import requests
-import shutil,os,re
-
-# Searching for the videos
-def search_pexels(keyword, api_key, orientation='potrait', size='medium', endpoint='videos', num_pages=50):
-
-    # The Pexels API expects the spelling "portrait"; normalize the variant used elsewhere in this app.
-    if orientation == 'potrait':
-        orientation = 'portrait'
-
-    if orientation not in ['portrait', 'landscape', 'square']:
-        raise Exception("Error! orientation must be one of {'square', 'landscape', 'portrait'}")
-
- if size not in ['medium', 'small', 'large']:
- raise Exception("Error! size must be one of ['medium', 'small', 'large']")
-
- base_url = 'https://api.pexels.com/'
-
- headers = {
- 'Authorization': f'{api_key}'
- }
-
- url = f'{base_url}{endpoint}/search?query={keyword}&per_page={num_pages}&orientation={orientation}&size={size}'
-
-
- response = requests.get(url, headers=headers)
-
- # Check if request was successful (status code 200)
- if response.status_code == 200:
- data = response.json()
- return data
- else:
- print(f'Error: {response.status_code}')
-
-
-# Video download function
-def download_video(data, parent_path, height, width, links, i):
- for x in data['videos'] :
- if x['id'] in links:
- continue
-
- vid = x['video_files']
- for v in vid:
- if v['height'] == height and v['width'] == width :
- with open(f"{os.path.join(parent_path,str(i) + '_' + str(v['id']))}.mp4", 'bw') as f:
- f.write(requests.get(v['link']).content)
- print("Sucessfully saved video in", os.path.join(parent_path,str(i) + '_' + str(v['id'])) + '.mp4')
- return x['id']
-
-
-# Utilizing the LLMs to find the relevant videos
-def generate_videos(product, api_key, orientation, height, width, llm_chain=None, sum_llm_chain=None):
- prod = product.strip().replace(" ", "_")
-    links = []
-    sentences = []  # defined up front so the final return works even if generation fails early
- try :
- # Split the paragraph by sentences
-
- sentences = llm_chain.run(product.strip())
- print('Sentence :', sentences)
-
-# sentences = sentences.split(".")[:-1]
- sentences = [x.strip() for x in re.split(r'\d+\.', sentences) if len(x) > 6]
-
-
- # Create directory with the product's name
- if os.path.exists(prod):
- shutil.rmtree(prod)
- os.mkdir(prod)
-
- # Generate video for every sentence
- print("Keyword :")
- for i,s in enumerate(sentences):
- keyword = sum_llm_chain.run(s)
- print(i+1, ":", keyword)
- data = search_pexels(keyword, api_key, orientation.lower())
- link = download_video(data, prod, height, width, links,i)
- links.append(link)
-
-        print("Success! Videos have been generated")
- except Exception as e :
- print("Error! Failed generating videos")
- print(e)
-
- return prod, sentences
-
\ No newline at end of file
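A hypothetical standalone use of the two helpers above, without the LLM chains (the API key environment variable, search term, and resolution values are placeholders, not from the original code):

```python
import os

PEXELS_API_KEY = os.environ["PEXELS_API_KEY"]  # placeholder: supply your own key

data = search_pexels("coffee shop", PEXELS_API_KEY, orientation="landscape", size="medium")

os.makedirs("coffee_clips", exist_ok=True)
# Download the first clip whose files include a 1280x720 rendition (if any).
video_id = download_video(data, "coffee_clips", height=720, width=1280, links=[], i=0)
print("Downloaded video id:", video_id)
```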
diff --git a/spaces/juancopi81/whisper-youtube-2-hf_dataset/dataset/__init__.py b/spaces/juancopi81/whisper-youtube-2-hf_dataset/dataset/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/justest/gpt4free/g4f/.v1/unfinished/t3nsor/README.md b/spaces/justest/gpt4free/g4f/.v1/unfinished/t3nsor/README.md
deleted file mode 100644
index 2790bf6e5fb5ab314395757168c26c956e0395fe..0000000000000000000000000000000000000000
--- a/spaces/justest/gpt4free/g4f/.v1/unfinished/t3nsor/README.md
+++ /dev/null
@@ -1,44 +0,0 @@
-### note: currently patched
-
-### Example: `t3nsor` (use like openai pypi package)
-
-```python
-# Import t3nsor
-import t3nsor
-
-# t3nsor.Completion.create
-# t3nsor.StreamCompletion.create
-
-[...]
-
-```
-
-#### Example Chatbot
-```python
-messages = []
-
-while True:
- user = input('you: ')
-
- t3nsor_cmpl = t3nsor.Completion.create(
- prompt = user,
- messages = messages
- )
-
- print('gpt:', t3nsor_cmpl.completion.choices[0].text)
-
- messages.extend([
- {'role': 'user', 'content': user },
- {'role': 'assistant', 'content': t3nsor_cmpl.completion.choices[0].text}
- ])
-```
-
-#### Streaming Response:
-
-```python
-for response in t3nsor.StreamCompletion.create(
- prompt = 'write python code to reverse a string',
- messages = []):
-
- print(response.completion.choices[0].text)
-```
diff --git a/spaces/k1ngtai/MMS/vits/models.py b/spaces/k1ngtai/MMS/vits/models.py
deleted file mode 100644
index f5acdeb2bedd47897348407c0ae55c9a160da881..0000000000000000000000000000000000000000
--- a/spaces/k1ngtai/MMS/vits/models.py
+++ /dev/null
@@ -1,534 +0,0 @@
-import copy
-import math
-import torch
-from torch import nn
-from torch.nn import functional as F
-
-import commons
-import modules
-import attentions
-import monotonic_align
-
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from commons import init_weights, get_padding
-
-
-class StochasticDurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0):
- super().__init__()
-        filter_channels = in_channels  # it needs to be removed in a future version.
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.log_flow = modules.Log()
- self.flows = nn.ModuleList()
- self.flows.append(modules.ElementwiseAffine(2))
- for i in range(n_flows):
- self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.flows.append(modules.Flip())
-
- self.post_pre = nn.Conv1d(1, filter_channels, 1)
- self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- self.post_flows = nn.ModuleList()
- self.post_flows.append(modules.ElementwiseAffine(2))
- for i in range(4):
- self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3))
- self.post_flows.append(modules.Flip())
-
- self.pre = nn.Conv1d(in_channels, filter_channels, 1)
- self.proj = nn.Conv1d(filter_channels, filter_channels, 1)
- self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout)
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, filter_channels, 1)
-
- def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0):
- x = torch.detach(x)
- x = self.pre(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.convs(x, x_mask)
- x = self.proj(x) * x_mask
-
- if not reverse:
- flows = self.flows
- assert w is not None
-
- logdet_tot_q = 0
- h_w = self.post_pre(w)
- h_w = self.post_convs(h_w, x_mask)
- h_w = self.post_proj(h_w) * x_mask
- e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask
- z_q = e_q
- for flow in self.post_flows:
- z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w))
- logdet_tot_q += logdet_q
- z_u, z1 = torch.split(z_q, [1, 1], 1)
- u = torch.sigmoid(z_u) * x_mask
- z0 = (w - u) * x_mask
- logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2])
- logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q
-
- logdet_tot = 0
- z0, logdet = self.log_flow(z0, x_mask)
- logdet_tot += logdet
- z = torch.cat([z0, z1], 1)
- for flow in flows:
- z, logdet = flow(z, x_mask, g=x, reverse=reverse)
- logdet_tot = logdet_tot + logdet
- nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot
- return nll + logq # [b]
- else:
- flows = list(reversed(self.flows))
- flows = flows[:-2] + [flows[-1]] # remove a useless vflow
- z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale
- for flow in flows:
- z = flow(z, x_mask, g=x, reverse=reverse)
- z0, z1 = torch.split(z, [1, 1], 1)
- logw = z0
- return logw
-
-
-class DurationPredictor(nn.Module):
- def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0):
- super().__init__()
-
- self.in_channels = in_channels
- self.filter_channels = filter_channels
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.gin_channels = gin_channels
-
- self.drop = nn.Dropout(p_dropout)
- self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.norm_1 = modules.LayerNorm(filter_channels)
- self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2)
- self.norm_2 = modules.LayerNorm(filter_channels)
- self.proj = nn.Conv1d(filter_channels, 1, 1)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, in_channels, 1)
-
- def forward(self, x, x_mask, g=None):
- x = torch.detach(x)
- if g is not None:
- g = torch.detach(g)
- x = x + self.cond(g)
- x = self.conv_1(x * x_mask)
- x = torch.relu(x)
- x = self.norm_1(x)
- x = self.drop(x)
- x = self.conv_2(x * x_mask)
- x = torch.relu(x)
- x = self.norm_2(x)
- x = self.drop(x)
- x = self.proj(x * x_mask)
- return x * x_mask
-
-
-class TextEncoder(nn.Module):
- def __init__(self,
- n_vocab,
- out_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout):
- super().__init__()
- self.n_vocab = n_vocab
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
-
- self.emb = nn.Embedding(n_vocab, hidden_channels)
- nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5)
-
- self.encoder = attentions.Encoder(
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
- self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths):
- x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h]
- x = torch.transpose(x, 1, -1) # [b, h, t]
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
-
- x = self.encoder(x * x_mask, x_mask)
- stats = self.proj(x) * x_mask
-
- m, logs = torch.split(stats, self.out_channels, dim=1)
- return x, m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
- def __init__(self,
- channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- n_flows=4,
- gin_channels=0):
- super().__init__()
- self.channels = channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.n_flows = n_flows
- self.gin_channels = gin_channels
-
- self.flows = nn.ModuleList()
- for i in range(n_flows):
- self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True))
- self.flows.append(modules.Flip())
-
- def forward(self, x, x_mask, g=None, reverse=False):
- if not reverse:
- for flow in self.flows:
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
- else:
- for flow in reversed(self.flows):
- x = flow(x, x_mask, g=g, reverse=reverse)
- return x
-
-
-class PosteriorEncoder(nn.Module):
- def __init__(self,
- in_channels,
- out_channels,
- hidden_channels,
- kernel_size,
- dilation_rate,
- n_layers,
- gin_channels=0):
- super().__init__()
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.hidden_channels = hidden_channels
- self.kernel_size = kernel_size
- self.dilation_rate = dilation_rate
- self.n_layers = n_layers
- self.gin_channels = gin_channels
-
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
- self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
- def forward(self, x, x_lengths, g=None):
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
- x = self.pre(x) * x_mask
- x = self.enc(x, x_mask, g=g)
- stats = self.proj(x) * x_mask
- m, logs = torch.split(stats, self.out_channels, dim=1)
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
- return z, m, logs, x_mask
-
-
-class Generator(torch.nn.Module):
- def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0):
- super(Generator, self).__init__()
- self.num_kernels = len(resblock_kernel_sizes)
- self.num_upsamples = len(upsample_rates)
- self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)
- resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2
-
- self.ups = nn.ModuleList()
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
- self.ups.append(weight_norm(
- ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)),
- k, u, padding=(k-u)//2)))
-
- self.resblocks = nn.ModuleList()
- for i in range(len(self.ups)):
- ch = upsample_initial_channel//(2**(i+1))
- for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
- self.resblocks.append(resblock(ch, k, d))
-
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
- self.ups.apply(init_weights)
-
- if gin_channels != 0:
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
- def forward(self, x, g=None):
- x = self.conv_pre(x)
- if g is not None:
- x = x + self.cond(g)
-
- for i in range(self.num_upsamples):
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- x = self.ups[i](x)
- xs = None
- for j in range(self.num_kernels):
- if xs is None:
- xs = self.resblocks[i*self.num_kernels+j](x)
- else:
- xs += self.resblocks[i*self.num_kernels+j](x)
- x = xs / self.num_kernels
- x = F.leaky_relu(x)
- x = self.conv_post(x)
- x = torch.tanh(x)
-
- return x
-
- def remove_weight_norm(self):
- print('Removing weight norm...')
- for l in self.ups:
- remove_weight_norm(l)
- for l in self.resblocks:
- l.remove_weight_norm()
-
-
-class DiscriminatorP(torch.nn.Module):
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
- super(DiscriminatorP, self).__init__()
- self.period = period
- self.use_spectral_norm = use_spectral_norm
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))),
- norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))),
- ])
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
- def forward(self, x):
- fmap = []
-
- # 1d to 2d
- b, c, t = x.shape
- if t % self.period != 0: # pad first
- n_pad = self.period - (t % self.period)
- x = F.pad(x, (0, n_pad), "reflect")
- t = t + n_pad
- x = x.view(b, c, t // self.period, self.period)
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class DiscriminatorS(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(DiscriminatorS, self).__init__()
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
- self.convs = nn.ModuleList([
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
- ])
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
- def forward(self, x):
- fmap = []
-
- for l in self.convs:
- x = l(x)
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- x = torch.flatten(x, 1, -1)
-
- return x, fmap
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
- def __init__(self, use_spectral_norm=False):
- super(MultiPeriodDiscriminator, self).__init__()
- periods = [2,3,5,7,11]
-
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
- discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods]
- self.discriminators = nn.ModuleList(discs)
-
- def forward(self, y, y_hat):
- y_d_rs = []
- y_d_gs = []
- fmap_rs = []
- fmap_gs = []
- for i, d in enumerate(self.discriminators):
- y_d_r, fmap_r = d(y)
- y_d_g, fmap_g = d(y_hat)
- y_d_rs.append(y_d_r)
- y_d_gs.append(y_d_g)
- fmap_rs.append(fmap_r)
- fmap_gs.append(fmap_g)
-
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-
-class SynthesizerTrn(nn.Module):
- """
- Synthesizer for Training
- """
-
- def __init__(self,
- n_vocab,
- spec_channels,
- segment_size,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout,
- resblock,
- resblock_kernel_sizes,
- resblock_dilation_sizes,
- upsample_rates,
- upsample_initial_channel,
- upsample_kernel_sizes,
- n_speakers=0,
- gin_channels=0,
- use_sdp=True,
- **kwargs):
-
- super().__init__()
- self.n_vocab = n_vocab
- self.spec_channels = spec_channels
- self.inter_channels = inter_channels
- self.hidden_channels = hidden_channels
- self.filter_channels = filter_channels
- self.n_heads = n_heads
- self.n_layers = n_layers
- self.kernel_size = kernel_size
- self.p_dropout = p_dropout
- self.resblock = resblock
- self.resblock_kernel_sizes = resblock_kernel_sizes
- self.resblock_dilation_sizes = resblock_dilation_sizes
- self.upsample_rates = upsample_rates
- self.upsample_initial_channel = upsample_initial_channel
- self.upsample_kernel_sizes = upsample_kernel_sizes
- self.segment_size = segment_size
- self.n_speakers = n_speakers
- self.gin_channels = gin_channels
-
- self.use_sdp = use_sdp
-
- self.enc_p = TextEncoder(n_vocab,
- inter_channels,
- hidden_channels,
- filter_channels,
- n_heads,
- n_layers,
- kernel_size,
- p_dropout)
- self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels)
- self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels)
- self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels)
-
- if use_sdp:
- self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels)
- else:
- self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels)
-
- if n_speakers > 1:
- self.emb_g = nn.Embedding(n_speakers, gin_channels)
-
- def forward(self, x, x_lengths, y, y_lengths, sid=None):
-
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
- if self.n_speakers > 0:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = None
-
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
- z_p = self.flow(z, y_mask, g=g)
-
- with torch.no_grad():
- # negative cross-entropy
- s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t]
- neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s]
- neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
- neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s]
- neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s]
- neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4
-
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach()
-
- w = attn.sum(2)
- if self.use_sdp:
- l_length = self.dp(x, x_mask, w, g=g)
- l_length = l_length / torch.sum(x_mask)
- else:
- logw_ = torch.log(w + 1e-6) * x_mask
- logw = self.dp(x, x_mask, g=g)
- l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging
-
- # expand prior
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2)
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2)
-
- z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size)
- o = self.dec(z_slice, g=g)
- return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
- def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None):
- x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths)
- if self.n_speakers > 0:
- g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1]
- else:
- g = None
-
- if self.use_sdp:
- logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w)
- else:
- logw = self.dp(x, x_mask, g=g)
- w = torch.exp(logw) * x_mask * length_scale
- w_ceil = torch.ceil(w)
- y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long()
- y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype)
- attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1)
- attn = commons.generate_path(w_ceil, attn_mask)
-
- m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
- logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t']
-
- z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale
- z = self.flow(z_p, y_mask, g=g, reverse=True)
- o = self.dec((z * y_mask)[:,:,:max_len], g=g)
- return o, attn, y_mask, (z, z_p, m_p, logs_p)
-
- def voice_conversion(self, y, y_lengths, sid_src, sid_tgt):
- assert self.n_speakers > 0, "n_speakers has to be larger than 0."
- g_src = self.emb_g(sid_src).unsqueeze(-1)
- g_tgt = self.emb_g(sid_tgt).unsqueeze(-1)
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src)
- z_p = self.flow(z, y_mask, g=g_src)
- z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True)
- o_hat = self.dec(z_hat * y_mask, g=g_tgt)
- return o_hat, y_mask, (z, z_p, z_hat)
-
diff --git a/spaces/kadirnar/Tune-A-Video/app.py b/spaces/kadirnar/Tune-A-Video/app.py
deleted file mode 100644
index f757d0bbd3a5e1c038f05b62148fdc4836fb071c..0000000000000000000000000000000000000000
--- a/spaces/kadirnar/Tune-A-Video/app.py
+++ /dev/null
@@ -1,105 +0,0 @@
-from tuneavideo.pipelines.pipeline_tuneavideo import TuneAVideoPipeline
-from tuneavideo.models.unet import UNet3DConditionModel
-from tuneavideo.util import save_videos_grid
-import torch
-import gradio as gr
-
-model_list = [
- "runwayml/stable-diffusion-v1-5",
- "CompVis/stable-diffusion-v1-4",
- "prompthero/openjourney",
- "dreamlike-art/dreamlike-photoreal-2.0",
- "dreamlike-art/dreamlike-diffusion-1.0"
-]
-
-def tune_video_predict(
- pipe_id: str,
- prompt: str,
- video_length: int,
- height: int,
- width: int,
- num_inference_steps: int,
- guidance_scale: float,
-):
- unet = UNet3DConditionModel.from_pretrained("Tune-A-Video-library/a-man-is-surfing", subfolder='unet', torch_dtype=torch.float16).to('cuda')
- pipe = TuneAVideoPipeline.from_pretrained(pipe_id, unet=unet, torch_dtype=torch.float16).to("cuda")
- video = pipe(prompt, video_length=video_length, height=height, width=width, num_inference_steps=num_inference_steps, guidance_scale=guidance_scale).videos
- output_path = save_videos_grid(video, save_path='output', path=f"{prompt}.gif")
- return output_path
-
-
-
-demo_inputs = [
- gr.Dropdown(
- label="Model",
- choices=model_list,
- value="CompVis/stable-diffusion-v1-4",
- ),
- gr.Textbox(
- label="Prompt",
- value='a flower blooming'
-
- ),
- gr.Slider(
- label="Video Length",
- minimum=1,
- maximum=50,
- value=8,
- step=1,
- ),
- gr.Slider(
- label="Height",
- minimum=128,
- maximum=1280,
- value=416,
- step=32,
-
- ),
- gr.Slider(
- label="Width",
- minimum=128,
- maximum=1280,
- value=416,
- step=32,
- ),
- gr.Slider(
- label="Num Inference Steps",
- minimum=1,
- maximum=100,
- value=50,
- step=1,
- ),
- gr.Slider(
- label="Guidance Scale",
- minimum=0.0,
- maximum=100,
- value=7.5,
- step=0.5,
- )
-]
-
-demo_outputs = gr.outputs.Video(type="gif", label="Output")
-
-examples = [
- ["CompVis/stable-diffusion-v1-4", "a panda is surfing", 5, 416, 416, 50, 7.5],
- ["sd-dreambooth-library/disco-diffusion-style", "ddfusion style on the church", 5, 416, 416, 50, 7.5],
- #["sd-dreambooth-library/nasa-space-v2-768", "nasa style galaxy moving", 5, 416, 416, 50, 7.5],
- ["sd-dreambooth-library/mr-potato-head", "sks mr potato head, wearing a pink hat, is surfing.", 5, 416, 416, 50, 7.5],
- ["sd-dreambooth-library/mr-potato-head", "sks mr potato head is surfing in the forest.", 5, 416, 416, 50, 7.5],
-]
-
-description = "This is an application that generates video based on a text prompt. To get started, simply input text. The default model in the dropdown is a generic model that you can generate anything. Alternatively, for more photorealistic generations, you can use other models in the dropdown. These models are Dreambooth models, and they're trained with a specific object name, so make sure you know what the object is called. You can find an example prompt for a dreambooth model in Examples section right below the interface."
-title = "Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation"
-
-demo_app = gr.Interface(
- fn=tune_video_predict,
- inputs=demo_inputs,
- outputs=demo_outputs,
- examples=examples,
- cache_examples=False,
- title=title,
- theme="huggingface",
- description=description
-)
-
-demo_app.launch(debug=True, enable_queue=True)
diff --git a/spaces/kevinwang676/Bert-VITS2/text/__init__.py b/spaces/kevinwang676/Bert-VITS2/text/__init__.py
deleted file mode 100644
index 7566bf351ca9b95af9cdc6d729557a9da083800f..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/Bert-VITS2/text/__init__.py
+++ /dev/null
@@ -1,28 +0,0 @@
-from text.symbols import *
-
-
-_symbol_to_id = {s: i for i, s in enumerate(symbols)}
-
-def cleaned_text_to_sequence(cleaned_text, tones, language):
- '''Converts cleaned text, tones and a language tag into sequences of IDs corresponding to the symbols in the text.
- Args:
- cleaned_text: string of cleaned symbols to convert to a sequence
- tones: list of per-symbol tone indices
- language: language tag used to offset tone and language IDs
- Returns:
- Tuple of (phone IDs, tone IDs, language IDs), one entry per symbol
- '''
- phones = [_symbol_to_id[symbol] for symbol in cleaned_text]
- tone_start = language_tone_start_map[language]
- tones = [i + tone_start for i in tones]
- lang_id = language_id_map[language]
- lang_ids = [lang_id for i in phones]
- return phones, tones, lang_ids
-
-def get_bert(norm_text, word2ph, language):
- from .chinese_bert import get_bert_feature as zh_bert
- from .english_bert_mock import get_bert_feature as en_bert
- lang_bert_func_map = {
- 'ZH': zh_bert,
- 'EN': en_bert
- }
- bert = lang_bert_func_map[language](norm_text, word2ph)
- return bert
diff --git a/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/util/skin_mask.py b/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/util/skin_mask.py
deleted file mode 100644
index a8a74e4c3b40d13b0258b83a12f56321a85bb179..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/ChatGLM2-SadTalker/src/face3d/util/skin_mask.py
+++ /dev/null
@@ -1,125 +0,0 @@
-"""This script is to generate skin attention mask for Deep3DFaceRecon_pytorch
-"""
-
-import math
-import numpy as np
-import os
-import cv2
-
-class GMM:
- def __init__(self, dim, num, w, mu, cov, cov_det, cov_inv):
- self.dim = dim # feature dimension
- self.num = num # number of Gaussian components
- self.w = w # weights of Gaussian components (a list of scalars)
- self.mu = mu # means of Gaussian components (a list of 1xdim vectors)
- self.cov = cov # covariance matrices of Gaussian components (a list of dimxdim matrices)
- self.cov_det = cov_det # pre-computed determinants of covariance matrices (a list of scalars)
- self.cov_inv = cov_inv # pre-computed inverse covariance matrices (a list of dimxdim matrices)
-
- self.factor = [0]*num
- for i in range(self.num):
- self.factor[i] = (2*math.pi)**(self.dim/2) * self.cov_det[i]**0.5
-
- def likelihood(self, data):
- assert(data.shape[1] == self.dim)
- N = data.shape[0]
- lh = np.zeros(N)
-
- for i in range(self.num):
- data_ = data - self.mu[i]
-
- tmp = np.matmul(data_,self.cov_inv[i]) * data_
- tmp = np.sum(tmp,axis=1)
- power = -0.5 * tmp
-
- p = np.array([math.exp(power[j]) for j in range(N)])
- p = p/self.factor[i]
- lh += p*self.w[i]
-
- return lh
-
-
-def _rgb2ycbcr(rgb):
- m = np.array([[65.481, 128.553, 24.966],
- [-37.797, -74.203, 112],
- [112, -93.786, -18.214]])
- shape = rgb.shape
- rgb = rgb.reshape((shape[0] * shape[1], 3))
- ycbcr = np.dot(rgb, m.transpose() / 255.)
- ycbcr[:, 0] += 16.
- ycbcr[:, 1:] += 128.
- return ycbcr.reshape(shape)
-
-
-def _bgr2ycbcr(bgr):
- rgb = bgr[..., ::-1]
- return _rgb2ycbcr(rgb)
-
-
-gmm_skin_w = [0.24063933, 0.16365987, 0.26034665, 0.33535415]
-gmm_skin_mu = [np.array([113.71862, 103.39613, 164.08226]),
- np.array([150.19858, 105.18467, 155.51428]),
- np.array([183.92976, 107.62468, 152.71820]),
- np.array([114.90524, 113.59782, 151.38217])]
-gmm_skin_cov_det = [5692842.5, 5851930.5, 2329131., 1585971.]
-gmm_skin_cov_inv = [np.array([[0.0019472069, 0.0020450759, -0.00060243998],[0.0020450759, 0.017700525, 0.0051420014],[-0.00060243998, 0.0051420014, 0.0081308950]]),
- np.array([[0.0027110141, 0.0011036990, 0.0023122299],[0.0011036990, 0.010707724, 0.010742856],[0.0023122299, 0.010742856, 0.017481629]]),
- np.array([[0.0048026871, 0.00022935172, 0.0077668377],[0.00022935172, 0.011729696, 0.0081661865],[0.0077668377, 0.0081661865, 0.025374353]]),
- np.array([[0.0011989699, 0.0022453172, -0.0010748957],[0.0022453172, 0.047758564, 0.020332102],[-0.0010748957, 0.020332102, 0.024502251]])]
-
-gmm_skin = GMM(3, 4, gmm_skin_w, gmm_skin_mu, [], gmm_skin_cov_det, gmm_skin_cov_inv)
-
-gmm_nonskin_w = [0.12791070, 0.31130761, 0.34245777, 0.21832393]
-gmm_nonskin_mu = [np.array([99.200851, 112.07533, 140.20602]),
- np.array([110.91392, 125.52969, 130.19237]),
- np.array([129.75864, 129.96107, 126.96808]),
- np.array([112.29587, 128.85121, 129.05431])]
-gmm_nonskin_cov_det = [458703648., 6466488., 90611376., 133097.63]
-gmm_nonskin_cov_inv = [np.array([[0.00085371657, 0.00071197288, 0.00023958916],[0.00071197288, 0.0025935620, 0.00076557708],[0.00023958916, 0.00076557708, 0.0015042332]]),
- np.array([[0.00024650150, 0.00045542428, 0.00015019422],[0.00045542428, 0.026412144, 0.018419769],[0.00015019422, 0.018419769, 0.037497383]]),
- np.array([[0.00037054974, 0.00038146760, 0.00040408765],[0.00038146760, 0.0085505722, 0.0079136286],[0.00040408765, 0.0079136286, 0.010982352]]),
- np.array([[0.00013709733, 0.00051228428, 0.00012777430],[0.00051228428, 0.28237113, 0.10528370],[0.00012777430, 0.10528370, 0.23468947]])]
-
-gmm_nonskin = GMM(3, 4, gmm_nonskin_w, gmm_nonskin_mu, [], gmm_nonskin_cov_det, gmm_nonskin_cov_inv)
-
-prior_skin = 0.8
-prior_nonskin = 1 - prior_skin
-
-
-# calculate skin attention mask
-def skinmask(imbgr):
- im = _bgr2ycbcr(imbgr)
-
- data = im.reshape((-1,3))
-
- lh_skin = gmm_skin.likelihood(data)
- lh_nonskin = gmm_nonskin.likelihood(data)
-
- tmp1 = prior_skin * lh_skin
- tmp2 = prior_nonskin * lh_nonskin
- post_skin = tmp1 / (tmp1+tmp2) # posterior probability
-
- post_skin = post_skin.reshape((im.shape[0],im.shape[1]))
-
- post_skin = np.round(post_skin*255)
- post_skin = post_skin.astype(np.uint8)
- post_skin = np.tile(np.expand_dims(post_skin,2),[1,1,3]) # reshape to H*W*3
-
- return post_skin
-
-
-def get_skin_mask(img_path):
- print('generating skin masks......')
- names = [i for i in sorted(os.listdir(
- img_path)) if 'jpg' in i or 'png' in i or 'jpeg' in i or 'PNG' in i]
- save_path = os.path.join(img_path, 'mask')
- if not os.path.isdir(save_path):
- os.makedirs(save_path)
-
- for i in range(0, len(names)):
- name = names[i]
- print('%05d' % (i), ' ', name)
- full_image_name = os.path.join(img_path, name)
- img = cv2.imread(full_image_name).astype(np.float32)
- skin_img = skinmask(img)
- cv2.imwrite(os.path.join(save_path, name), skin_img.astype(np.uint8))
diff --git a/spaces/kevinwang676/SadTalker/src/face3d/models/template_model.py b/spaces/kevinwang676/SadTalker/src/face3d/models/template_model.py
deleted file mode 100644
index dac7b33d5889777eb63c9882a3b9fa094dcab293..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/SadTalker/src/face3d/models/template_model.py
+++ /dev/null
@@ -1,100 +0,0 @@
-"""Model class template
-
-This module provides a template for users to implement custom models.
-You can specify '--model template' to use this model.
-The class name should be consistent with both the filename and its model option.
-The filename should be <model>_model.py
-The class name should be <Model>Model
-It implements a simple image-to-image translation baseline based on regression loss.
-Given input-output pairs (data_A, data_B), it learns a network netG that can minimize the following L1 loss:
- min_<netG> ||netG(data_A) - data_B||_1
-You need to implement the following functions:
- <modify_commandline_options>: Add model-specific options and rewrite default values for existing options.
- <__init__>: Initialize this model class.
- <set_input>: Unpack input data and perform data pre-processing.
- <forward>: Run forward pass. This will be called by both <optimize_parameters> and <test>.
- <optimize_parameters>: Update network weights; it will be called in every training iteration.
-"""
-import numpy as np
-import torch
-from .base_model import BaseModel
-from . import networks
-
-
-class TemplateModel(BaseModel):
- @staticmethod
- def modify_commandline_options(parser, is_train=True):
- """Add new model-specific options and rewrite default values for existing options.
-
- Parameters:
- parser -- the option parser
- is_train -- whether it is the training phase or the test phase. You can use this flag to add training-specific or test-specific options.
-
- Returns:
- the modified parser.
- """
- parser.set_defaults(dataset_mode='aligned') # You can rewrite default values for this model. For example, this model usually uses aligned dataset as its dataset.
- if is_train:
- parser.add_argument('--lambda_regression', type=float, default=1.0, help='weight for the regression loss') # You can define new arguments for this model.
-
- return parser
-
- def __init__(self, opt):
- """Initialize this model class.
-
- Parameters:
- opt -- training/test options
-
- A few things can be done here.
- - (required) call the initialization function of BaseModel
- - define loss function, visualization images, model names, and optimizers
- """
- BaseModel.__init__(self, opt) # call the initialization method of BaseModel
- # specify the training losses you want to print out. The program will call base_model.get_current_losses to plot the losses to the console and save them to the disk.
- self.loss_names = ['loss_G']
- # specify the images you want to save and display. The program will call base_model.get_current_visuals to save and display these images.
- self.visual_names = ['data_A', 'data_B', 'output']
- # specify the models you want to save to the disk. The program will call base_model.save_networks and base_model.load_networks to save and load networks.
- # you can use opt.isTrain to specify different behaviors for training and test. For example, some networks will not be used during test, and you don't need to load them.
- self.model_names = ['G']
- # define networks; you can use opt.isTrain to specify different behaviors for training and test.
- self.netG = networks.define_G(opt.input_nc, opt.output_nc, opt.ngf, opt.netG, gpu_ids=self.gpu_ids)
- if self.isTrain: # only defined during training time
- # define your loss functions. You can use losses provided by torch.nn such as torch.nn.L1Loss.
- # We also provide a GANLoss class "networks.GANLoss". self.criterionGAN = networks.GANLoss().to(self.device)
- self.criterionLoss = torch.nn.L1Loss()
- # define and initialize optimizers. You can define one optimizer for each network.
- # If two networks are updated at the same time, you can use itertools.chain to group them. See cycle_gan_model.py for an example.
- self.optimizer = torch.optim.Adam(self.netG.parameters(), lr=opt.lr, betas=(opt.beta1, 0.999))
- self.optimizers = [self.optimizer]
-
- # Our program will automatically call <model.setup> to define schedulers, load networks, and print networks
-
- def set_input(self, input):
- """Unpack input data from the dataloader and perform necessary pre-processing steps.
-
- Parameters:
- input: a dictionary that contains the data itself and its metadata information.
- """
- AtoB = self.opt.direction == 'AtoB' # use <direction> to swap data_A and data_B
- self.data_A = input['A' if AtoB else 'B'].to(self.device) # get image data A
- self.data_B = input['B' if AtoB else 'A'].to(self.device) # get image data B
- self.image_paths = input['A_paths' if AtoB else 'B_paths'] # get image paths
-
- def forward(self):
- """Run forward pass. This will be called by both functions and ."""
- self.output = self.netG(self.data_A) # generate output image given the input data_A
-
- def backward(self):
- """Calculate losses, gradients, and update network weights; called in every training iteration"""
- # calculate the intermediate results if necessary; here self.output has been computed during function <forward>
- # calculate loss given the input and intermediate results
- self.loss_G = self.criterionLoss(self.output, self.data_B) * self.opt.lambda_regression
- self.loss_G.backward() # calculate gradients of network G w.r.t. loss_G
-
- def optimize_parameters(self):
- """Update network weights; it will be called in every training iteration."""
- self.forward() # first call forward to calculate intermediate results
- self.optimizer.zero_grad() # clear network G's existing gradients
- self.backward() # calculate gradients for network G
- self.optimizer.step() # update gradients for network G
diff --git a/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/onnx_ijbc.py b/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/onnx_ijbc.py
deleted file mode 100644
index 05b50bfad4b4cf38903b89f596263a8e29a50d3e..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/onnx_ijbc.py
+++ /dev/null
@@ -1,267 +0,0 @@
-import argparse
-import os
-import pickle
-import timeit
-
-import cv2
-import mxnet as mx
-import numpy as np
-import pandas as pd
-import prettytable
-import skimage.transform
-from sklearn.metrics import roc_curve
-from sklearn.preprocessing import normalize
-
-from onnx_helper import ArcFaceORT
-
-SRC = np.array(
- [
- [30.2946, 51.6963],
- [65.5318, 51.5014],
- [48.0252, 71.7366],
- [33.5493, 92.3655],
- [62.7299, 92.2041]]
- , dtype=np.float32)
-SRC[:, 0] += 8.0
-
-
-class AlignedDataSet(mx.gluon.data.Dataset):
- def __init__(self, root, lines, align=True):
- self.lines = lines
- self.root = root
- self.align = align
-
- def __len__(self):
- return len(self.lines)
-
- def __getitem__(self, idx):
- each_line = self.lines[idx]
- name_lmk_score = each_line.strip().split(' ')
- name = os.path.join(self.root, name_lmk_score[0])
- img = cv2.cvtColor(cv2.imread(name), cv2.COLOR_BGR2RGB)
- landmark5 = np.array([float(x) for x in name_lmk_score[1:-1]], dtype=np.float32).reshape((5, 2))
- st = skimage.transform.SimilarityTransform()
- st.estimate(landmark5, SRC)
- img = cv2.warpAffine(img, st.params[0:2, :], (112, 112), borderValue=0.0)
- img_1 = np.expand_dims(img, 0)
- img_2 = np.expand_dims(np.fliplr(img), 0)
- output = np.concatenate((img_1, img_2), axis=0).astype(np.float32)
- output = np.transpose(output, (0, 3, 1, 2))
- output = mx.nd.array(output)
- return output
-
-
-def extract(model_root, dataset):
- model = ArcFaceORT(model_path=model_root)
- model.check()
- feat_mat = np.zeros(shape=(len(dataset), 2 * model.feat_dim))
-
- def batchify_fn(data):
- return mx.nd.concat(*data, dim=0)
-
- data_loader = mx.gluon.data.DataLoader(
- dataset, 128, last_batch='keep', num_workers=4,
- thread_pool=True, prefetch=16, batchify_fn=batchify_fn)
- num_iter = 0
- for batch in data_loader:
- batch = batch.asnumpy()
- batch = (batch - model.input_mean) / model.input_std
- feat = model.session.run(model.output_names, {model.input_name: batch})[0]
- feat = np.reshape(feat, (-1, model.feat_dim * 2))
- feat_mat[128 * num_iter: 128 * num_iter + feat.shape[0], :] = feat
- num_iter += 1
- if num_iter % 50 == 0:
- print(num_iter)
- return feat_mat
-
-
-def read_template_media_list(path):
- ijb_meta = pd.read_csv(path, sep=' ', header=None).values
- templates = ijb_meta[:, 1].astype(np.int)
- medias = ijb_meta[:, 2].astype(np.int)
- return templates, medias
-
-
-def read_template_pair_list(path):
- pairs = pd.read_csv(path, sep=' ', header=None).values
- t1 = pairs[:, 0].astype(np.int)
- t2 = pairs[:, 1].astype(np.int)
- label = pairs[:, 2].astype(np.int)
- return t1, t2, label
-
-
-def read_image_feature(path):
- with open(path, 'rb') as fid:
- img_feats = pickle.load(fid)
- return img_feats
-
-
-def image2template_feature(img_feats=None,
- templates=None,
- medias=None):
- unique_templates = np.unique(templates)
- template_feats = np.zeros((len(unique_templates), img_feats.shape[1]))
- for count_template, uqt in enumerate(unique_templates):
- (ind_t,) = np.where(templates == uqt)
- face_norm_feats = img_feats[ind_t]
- face_medias = medias[ind_t]
- unique_medias, unique_media_counts = np.unique(face_medias, return_counts=True)
- media_norm_feats = []
- for u, ct in zip(unique_medias, unique_media_counts):
- (ind_m,) = np.where(face_medias == u)
- if ct == 1:
- media_norm_feats += [face_norm_feats[ind_m]]
- else: # image features from the same video will be aggregated into one feature
- media_norm_feats += [np.mean(face_norm_feats[ind_m], axis=0, keepdims=True), ]
- media_norm_feats = np.array(media_norm_feats)
- template_feats[count_template] = np.sum(media_norm_feats, axis=0)
- if count_template % 2000 == 0:
- print('Finish Calculating {} template features.'.format(
- count_template))
- template_norm_feats = normalize(template_feats)
- return template_norm_feats, unique_templates
-
-
-def verification(template_norm_feats=None,
- unique_templates=None,
- p1=None,
- p2=None):
- template2id = np.zeros((max(unique_templates) + 1, 1), dtype=int)
- for count_template, uqt in enumerate(unique_templates):
- template2id[uqt] = count_template
- score = np.zeros((len(p1),))
- total_pairs = np.array(range(len(p1)))
- batchsize = 100000
- sublists = [total_pairs[i: i + batchsize] for i in range(0, len(p1), batchsize)]
- total_sublists = len(sublists)
- for c, s in enumerate(sublists):
- feat1 = template_norm_feats[template2id[p1[s]]]
- feat2 = template_norm_feats[template2id[p2[s]]]
- similarity_score = np.sum(feat1 * feat2, -1)
- score[s] = similarity_score.flatten()
- if c % 10 == 0:
- print('Finish {}/{} pairs.'.format(c, total_sublists))
- return score
-
-
-def verification2(template_norm_feats=None,
- unique_templates=None,
- p1=None,
- p2=None):
- template2id = np.zeros((max(unique_templates) + 1, 1), dtype=int)
- for count_template, uqt in enumerate(unique_templates):
- template2id[uqt] = count_template
- score = np.zeros((len(p1),)) # save cosine distance between pairs
- total_pairs = np.array(range(len(p1)))
- batchsize = 100000 # small batch size instead of all pairs in one batch due to the memory limitation
- sublists = [total_pairs[i:i + batchsize] for i in range(0, len(p1), batchsize)]
- total_sublists = len(sublists)
- for c, s in enumerate(sublists):
- feat1 = template_norm_feats[template2id[p1[s]]]
- feat2 = template_norm_feats[template2id[p2[s]]]
- similarity_score = np.sum(feat1 * feat2, -1)
- score[s] = similarity_score.flatten()
- if c % 10 == 0:
- print('Finish {}/{} pairs.'.format(c, total_sublists))
- return score
-
-
-def main(args):
- use_norm_score = True # if True, TestMode(N1)
- use_detector_score = True # if True, TestMode(D1)
- use_flip_test = True # if True, TestMode(F1)
- assert args.target == 'IJBC' or args.target == 'IJBB'
-
- start = timeit.default_timer()
- templates, medias = read_template_media_list(
- os.path.join('%s/meta' % args.image_path, '%s_face_tid_mid.txt' % args.target.lower()))
- stop = timeit.default_timer()
- print('Time: %.2f s. ' % (stop - start))
-
- start = timeit.default_timer()
- p1, p2, label = read_template_pair_list(
- os.path.join('%s/meta' % args.image_path,
- '%s_template_pair_label.txt' % args.target.lower()))
- stop = timeit.default_timer()
- print('Time: %.2f s. ' % (stop - start))
-
- start = timeit.default_timer()
- img_path = '%s/loose_crop' % args.image_path
- img_list_path = '%s/meta/%s_name_5pts_score.txt' % (args.image_path, args.target.lower())
- img_list = open(img_list_path)
- files = img_list.readlines()
- dataset = AlignedDataSet(root=img_path, lines=files, align=True)
- img_feats = extract(args.model_root, dataset)
-
- faceness_scores = []
- for each_line in files:
- name_lmk_score = each_line.split()
- faceness_scores.append(name_lmk_score[-1])
- faceness_scores = np.array(faceness_scores).astype(np.float32)
- stop = timeit.default_timer()
- print('Time: %.2f s. ' % (stop - start))
- print('Feature Shape: ({} , {}) .'.format(img_feats.shape[0], img_feats.shape[1]))
- start = timeit.default_timer()
-
- if use_flip_test:
- img_input_feats = img_feats[:, 0:img_feats.shape[1] // 2] + img_feats[:, img_feats.shape[1] // 2:]
- else:
- img_input_feats = img_feats[:, 0:img_feats.shape[1] // 2]
-
- if use_norm_score:
- img_input_feats = img_input_feats
- else:
- img_input_feats = img_input_feats / np.sqrt(np.sum(img_input_feats ** 2, -1, keepdims=True))
-
- if use_detector_score:
- print(img_input_feats.shape, faceness_scores.shape)
- img_input_feats = img_input_feats * faceness_scores[:, np.newaxis]
- else:
- img_input_feats = img_input_feats
-
- template_norm_feats, unique_templates = image2template_feature(
- img_input_feats, templates, medias)
- stop = timeit.default_timer()
- print('Time: %.2f s. ' % (stop - start))
-
- start = timeit.default_timer()
- score = verification(template_norm_feats, unique_templates, p1, p2)
- stop = timeit.default_timer()
- print('Time: %.2f s. ' % (stop - start))
- save_path = os.path.join(args.result_dir, "{}_result".format(args.target))
- if not os.path.exists(save_path):
- os.makedirs(save_path)
- score_save_file = os.path.join(save_path, "{}.npy".format(args.model_root))
- np.save(score_save_file, score)
- files = [score_save_file]
- methods = []
- scores = []
- for file in files:
- methods.append(os.path.basename(file))
- scores.append(np.load(file))
- methods = np.array(methods)
- scores = dict(zip(methods, scores))
- x_labels = [10 ** -6, 10 ** -5, 10 ** -4, 10 ** -3, 10 ** -2, 10 ** -1]
- tpr_fpr_table = prettytable.PrettyTable(['Methods'] + [str(x) for x in x_labels])
- for method in methods:
- fpr, tpr, _ = roc_curve(label, scores[method])
- fpr = np.flipud(fpr)
- tpr = np.flipud(tpr)
- tpr_fpr_row = []
- tpr_fpr_row.append("%s-%s" % (method, args.target))
- for fpr_iter in np.arange(len(x_labels)):
- _, min_index = min(
- list(zip(abs(fpr - x_labels[fpr_iter]), range(len(fpr)))))
- tpr_fpr_row.append('%.2f' % (tpr[min_index] * 100))
- tpr_fpr_table.add_row(tpr_fpr_row)
- print(tpr_fpr_table)
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser(description='do ijb test')
- # general
- parser.add_argument('--model-root', default='', help='path to load model.')
- parser.add_argument('--image-path', default='', type=str, help='')
- parser.add_argument('--result-dir', default='.', type=str, help='')
- parser.add_argument('--target', default='IJBC', type=str, help='target, set to IJBC or IJBB')
- main(parser.parse_args())
diff --git a/spaces/kevinwang676/VoiceChangers/src/face3d/util/my_awing_arch.py b/spaces/kevinwang676/VoiceChangers/src/face3d/util/my_awing_arch.py
deleted file mode 100644
index cd5656177dc5a1dde82ffee5d43434bc5e69c88e..0000000000000000000000000000000000000000
--- a/spaces/kevinwang676/VoiceChangers/src/face3d/util/my_awing_arch.py
+++ /dev/null
@@ -1,378 +0,0 @@
-import cv2
-import numpy as np
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-
-
-def calculate_points(heatmaps):
- # change heatmaps to landmarks
- B, N, H, W = heatmaps.shape
- HW = H * W
- BN_range = np.arange(B * N)
-
- heatline = heatmaps.reshape(B, N, HW)
- indexes = np.argmax(heatline, axis=2)
-
- preds = np.stack((indexes % W, indexes // W), axis=2)
- preds = preds.astype(np.float, copy=False)
-
- inr = indexes.ravel()
-
- heatline = heatline.reshape(B * N, HW)
- x_up = heatline[BN_range, inr + 1]
- x_down = heatline[BN_range, inr - 1]
- # y_up = heatline[BN_range, inr + W]
-
- if any((inr + W) >= 4096):
- y_up = heatline[BN_range, 4095]
- else:
- y_up = heatline[BN_range, inr + W]
- if any((inr - W) <= 0):
- y_down = heatline[BN_range, 0]
- else:
- y_down = heatline[BN_range, inr - W]
-
- think_diff = np.sign(np.stack((x_up - x_down, y_up - y_down), axis=1))
- think_diff *= .25
-
- preds += think_diff.reshape(B, N, 2)
- preds += .5
- return preds
-
-
-class AddCoordsTh(nn.Module):
-
- def __init__(self, x_dim=64, y_dim=64, with_r=False, with_boundary=False):
- super(AddCoordsTh, self).__init__()
- self.x_dim = x_dim
- self.y_dim = y_dim
- self.with_r = with_r
- self.with_boundary = with_boundary
-
- def forward(self, input_tensor, heatmap=None):
- """
- input_tensor: (batch, c, x_dim, y_dim)
- """
- batch_size_tensor = input_tensor.shape[0]
-
- xx_ones = torch.ones([1, self.y_dim], dtype=torch.int32, device=input_tensor.device)
- xx_ones = xx_ones.unsqueeze(-1)
-
- xx_range = torch.arange(self.x_dim, dtype=torch.int32, device=input_tensor.device).unsqueeze(0)
- xx_range = xx_range.unsqueeze(1)
-
- xx_channel = torch.matmul(xx_ones.float(), xx_range.float())
- xx_channel = xx_channel.unsqueeze(-1)
-
- yy_ones = torch.ones([1, self.x_dim], dtype=torch.int32, device=input_tensor.device)
- yy_ones = yy_ones.unsqueeze(1)
-
- yy_range = torch.arange(self.y_dim, dtype=torch.int32, device=input_tensor.device).unsqueeze(0)
- yy_range = yy_range.unsqueeze(-1)
-
- yy_channel = torch.matmul(yy_range.float(), yy_ones.float())
- yy_channel = yy_channel.unsqueeze(-1)
-
- xx_channel = xx_channel.permute(0, 3, 2, 1)
- yy_channel = yy_channel.permute(0, 3, 2, 1)
-
- xx_channel = xx_channel / (self.x_dim - 1)
- yy_channel = yy_channel / (self.y_dim - 1)
-
- xx_channel = xx_channel * 2 - 1
- yy_channel = yy_channel * 2 - 1
-
- xx_channel = xx_channel.repeat(batch_size_tensor, 1, 1, 1)
- yy_channel = yy_channel.repeat(batch_size_tensor, 1, 1, 1)
-
- if self.with_boundary and heatmap is not None:
- boundary_channel = torch.clamp(heatmap[:, -1:, :, :], 0.0, 1.0)
-
- zero_tensor = torch.zeros_like(xx_channel)
- xx_boundary_channel = torch.where(boundary_channel > 0.05, xx_channel, zero_tensor)
- yy_boundary_channel = torch.where(boundary_channel > 0.05, yy_channel, zero_tensor)
- if self.with_boundary and heatmap is not None:
- xx_boundary_channel = xx_boundary_channel.to(input_tensor.device)
- yy_boundary_channel = yy_boundary_channel.to(input_tensor.device)
- ret = torch.cat([input_tensor, xx_channel, yy_channel], dim=1)
-
- if self.with_r:
- rr = torch.sqrt(torch.pow(xx_channel, 2) + torch.pow(yy_channel, 2))
- rr = rr / torch.max(rr)
- ret = torch.cat([ret, rr], dim=1)
-
- if self.with_boundary and heatmap is not None:
- ret = torch.cat([ret, xx_boundary_channel, yy_boundary_channel], dim=1)
- return ret
-
-
-class CoordConvTh(nn.Module):
- """CoordConv layer as in the paper."""
-
- def __init__(self, x_dim, y_dim, with_r, with_boundary, in_channels, first_one=False, *args, **kwargs):
- super(CoordConvTh, self).__init__()
- self.addcoords = AddCoordsTh(x_dim=x_dim, y_dim=y_dim, with_r=with_r, with_boundary=with_boundary)
- in_channels += 2
- if with_r:
- in_channels += 1
- if with_boundary and not first_one:
- in_channels += 2
- self.conv = nn.Conv2d(in_channels=in_channels, *args, **kwargs)
-
- def forward(self, input_tensor, heatmap=None):
- ret = self.addcoords(input_tensor, heatmap)
- last_channel = ret[:, -2:, :, :]
- ret = self.conv(ret)
- return ret, last_channel
-
-
-def conv3x3(in_planes, out_planes, strd=1, padding=1, bias=False, dilation=1):
- '3x3 convolution with padding'
- return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=strd, padding=padding, bias=bias, dilation=dilation)
-
-
-class BasicBlock(nn.Module):
- expansion = 1
-
- def __init__(self, inplanes, planes, stride=1, downsample=None):
- super(BasicBlock, self).__init__()
- self.conv1 = conv3x3(inplanes, planes, stride)
- # self.bn1 = nn.BatchNorm2d(planes)
- self.relu = nn.ReLU(inplace=True)
- self.conv2 = conv3x3(planes, planes)
- # self.bn2 = nn.BatchNorm2d(planes)
- self.downsample = downsample
- self.stride = stride
-
- def forward(self, x):
- residual = x
-
- out = self.conv1(x)
- out = self.relu(out)
-
- out = self.conv2(out)
-
- if self.downsample is not None:
- residual = self.downsample(x)
-
- out += residual
- out = self.relu(out)
-
- return out
-
-
-class ConvBlock(nn.Module):
-
- def __init__(self, in_planes, out_planes):
- super(ConvBlock, self).__init__()
- self.bn1 = nn.BatchNorm2d(in_planes)
- self.conv1 = conv3x3(in_planes, int(out_planes / 2))
- self.bn2 = nn.BatchNorm2d(int(out_planes / 2))
- self.conv2 = conv3x3(int(out_planes / 2), int(out_planes / 4), padding=1, dilation=1)
- self.bn3 = nn.BatchNorm2d(int(out_planes / 4))
- self.conv3 = conv3x3(int(out_planes / 4), int(out_planes / 4), padding=1, dilation=1)
-
- if in_planes != out_planes:
- self.downsample = nn.Sequential(
- nn.BatchNorm2d(in_planes),
- nn.ReLU(True),
- nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=1, bias=False),
- )
- else:
- self.downsample = None
-
- def forward(self, x):
- residual = x
-
- out1 = self.bn1(x)
- out1 = F.relu(out1, True)
- out1 = self.conv1(out1)
-
- out2 = self.bn2(out1)
- out2 = F.relu(out2, True)
- out2 = self.conv2(out2)
-
- out3 = self.bn3(out2)
- out3 = F.relu(out3, True)
- out3 = self.conv3(out3)
-
- out3 = torch.cat((out1, out2, out3), 1)
-
- if self.downsample is not None:
- residual = self.downsample(residual)
-
- out3 += residual
-
- return out3
-
-
-class HourGlass(nn.Module):
-
- def __init__(self, num_modules, depth, num_features, first_one=False):
- super(HourGlass, self).__init__()
- self.num_modules = num_modules
- self.depth = depth
- self.features = num_features
- self.coordconv = CoordConvTh(
- x_dim=64,
- y_dim=64,
- with_r=True,
- with_boundary=True,
- in_channels=256,
- first_one=first_one,
- out_channels=256,
- kernel_size=1,
- stride=1,
- padding=0)
- self._generate_network(self.depth)
-
- def _generate_network(self, level):
- self.add_module('b1_' + str(level), ConvBlock(256, 256))
-
- self.add_module('b2_' + str(level), ConvBlock(256, 256))
-
- if level > 1:
- self._generate_network(level - 1)
- else:
- self.add_module('b2_plus_' + str(level), ConvBlock(256, 256))
-
- self.add_module('b3_' + str(level), ConvBlock(256, 256))
-
- def _forward(self, level, inp):
- # Upper branch
- up1 = inp
- up1 = self._modules['b1_' + str(level)](up1)
-
- # Lower branch
- low1 = F.avg_pool2d(inp, 2, stride=2)
- low1 = self._modules['b2_' + str(level)](low1)
-
- if level > 1:
- low2 = self._forward(level - 1, low1)
- else:
- low2 = low1
- low2 = self._modules['b2_plus_' + str(level)](low2)
-
- low3 = low2
- low3 = self._modules['b3_' + str(level)](low3)
-
- up2 = F.interpolate(low3, scale_factor=2, mode='nearest')
-
- return up1 + up2
-
- def forward(self, x, heatmap):
- x, last_channel = self.coordconv(x, heatmap)
- return self._forward(self.depth, x), last_channel
-
-
-class FAN(nn.Module):
-
- def __init__(self, num_modules=1, end_relu=False, gray_scale=False, num_landmarks=68, device='cuda'):
- super(FAN, self).__init__()
- self.device = device
- self.num_modules = num_modules
- self.gray_scale = gray_scale
- self.end_relu = end_relu
- self.num_landmarks = num_landmarks
-
- # Base part
- if self.gray_scale:
- self.conv1 = CoordConvTh(
- x_dim=256,
- y_dim=256,
- with_r=True,
- with_boundary=False,
- in_channels=3,
- out_channels=64,
- kernel_size=7,
- stride=2,
- padding=3)
- else:
- self.conv1 = CoordConvTh(
- x_dim=256,
- y_dim=256,
- with_r=True,
- with_boundary=False,
- in_channels=3,
- out_channels=64,
- kernel_size=7,
- stride=2,
- padding=3)
- self.bn1 = nn.BatchNorm2d(64)
- self.conv2 = ConvBlock(64, 128)
- self.conv3 = ConvBlock(128, 128)
- self.conv4 = ConvBlock(128, 256)
-
- # Stacking part
- for hg_module in range(self.num_modules):
- if hg_module == 0:
- first_one = True
- else:
- first_one = False
- self.add_module('m' + str(hg_module), HourGlass(1, 4, 256, first_one))
- self.add_module('top_m_' + str(hg_module), ConvBlock(256, 256))
- self.add_module('conv_last' + str(hg_module), nn.Conv2d(256, 256, kernel_size=1, stride=1, padding=0))
- self.add_module('bn_end' + str(hg_module), nn.BatchNorm2d(256))
- self.add_module('l' + str(hg_module), nn.Conv2d(256, num_landmarks + 1, kernel_size=1, stride=1, padding=0))
-
- if hg_module < self.num_modules - 1:
- self.add_module('bl' + str(hg_module), nn.Conv2d(256, 256, kernel_size=1, stride=1, padding=0))
- self.add_module('al' + str(hg_module),
- nn.Conv2d(num_landmarks + 1, 256, kernel_size=1, stride=1, padding=0))
-
- def forward(self, x):
- x, _ = self.conv1(x)
- x = F.relu(self.bn1(x), True)
- # x = F.relu(self.bn1(self.conv1(x)), True)
- x = F.avg_pool2d(self.conv2(x), 2, stride=2)
- x = self.conv3(x)
- x = self.conv4(x)
-
- previous = x
-
- outputs = []
- boundary_channels = []
- tmp_out = None
- for i in range(self.num_modules):
- hg, boundary_channel = self._modules['m' + str(i)](previous, tmp_out)
-
- ll = hg
- ll = self._modules['top_m_' + str(i)](ll)
-
- ll = F.relu(self._modules['bn_end' + str(i)](self._modules['conv_last' + str(i)](ll)), True)
-
- # Predict heatmaps
- tmp_out = self._modules['l' + str(i)](ll)
- if self.end_relu:
- tmp_out = F.relu(tmp_out) # HACK: Added relu
- outputs.append(tmp_out)
- boundary_channels.append(boundary_channel)
-
- if i < self.num_modules - 1:
- ll = self._modules['bl' + str(i)](ll)
- tmp_out_ = self._modules['al' + str(i)](tmp_out)
- previous = previous + ll + tmp_out_
-
- return outputs, boundary_channels
-
- def get_landmarks(self, img):
- H, W, _ = img.shape
- offset = W / 64, H / 64, 0, 0
-
- img = cv2.resize(img, (256, 256))
- inp = img[..., ::-1]
- inp = torch.from_numpy(np.ascontiguousarray(inp.transpose((2, 0, 1)))).float()
- inp = inp.to(self.device)
- inp.div_(255.0).unsqueeze_(0)
-
- outputs, _ = self.forward(inp)
- out = outputs[-1][:, :-1, :, :]
- heatmaps = out.detach().cpu().numpy()
-
- pred = calculate_points(heatmaps).reshape(-1, 2)
-
- pred *= offset[:2]
- pred += offset[-2:]
-
- return pred
diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/scale.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/scale.py
deleted file mode 100644
index c905fffcc8bf998d18d94f927591963c428025e2..0000000000000000000000000000000000000000
--- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/cnn/bricks/scale.py
+++ /dev/null
@@ -1,21 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-import torch
-import torch.nn as nn
-
-
-class Scale(nn.Module):
- """A learnable scale parameter.
-
- This layer scales the input by a learnable factor. It multiplies a
- learnable scale parameter of shape (1,) with input of any shape.
-
- Args:
- scale (float): Initial value of scale factor. Default: 1.0
- """
-
- def __init__(self, scale=1.0):
- super(Scale, self).__init__()
- self.scale = nn.Parameter(torch.tensor(scale, dtype=torch.float))
-
- def forward(self, x):
- return x * self.scale
diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/visualization/optflow.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/visualization/optflow.py
deleted file mode 100644
index c3870c700f7c946177ee5d536ce3f6c814a77ce7..0000000000000000000000000000000000000000
--- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/visualization/optflow.py
+++ /dev/null
@@ -1,112 +0,0 @@
-# Copyright (c) OpenMMLab. All rights reserved.
-from __future__ import division
-
-import numpy as np
-
-from annotator.uniformer.mmcv.image import rgb2bgr
-from annotator.uniformer.mmcv.video import flowread
-from .image import imshow
-
-
-def flowshow(flow, win_name='', wait_time=0):
- """Show optical flow.
-
- Args:
- flow (ndarray or str): The optical flow to be displayed.
- win_name (str): The window name.
- wait_time (int): Value of waitKey param.
- """
- flow = flowread(flow)
- flow_img = flow2rgb(flow)
- imshow(rgb2bgr(flow_img), win_name, wait_time)
-
-
-def flow2rgb(flow, color_wheel=None, unknown_thr=1e6):
- """Convert flow map to RGB image.
-
- Args:
- flow (ndarray): Array of optical flow.
- color_wheel (ndarray or None): Color wheel used to map flow field to
- RGB colorspace. Default color wheel will be used if not specified.
- unknown_thr (float): Values above this threshold will be marked as
- unknown and thus ignored.
-
- Returns:
- ndarray: RGB image that can be visualized.
- """
- assert flow.ndim == 3 and flow.shape[-1] == 2
- if color_wheel is None:
- color_wheel = make_color_wheel()
- assert color_wheel.ndim == 2 and color_wheel.shape[1] == 3
- num_bins = color_wheel.shape[0]
-
- dx = flow[:, :, 0].copy()
- dy = flow[:, :, 1].copy()
-
- ignore_inds = (
- np.isnan(dx) | np.isnan(dy) | (np.abs(dx) > unknown_thr) |
- (np.abs(dy) > unknown_thr))
- dx[ignore_inds] = 0
- dy[ignore_inds] = 0
-
- rad = np.sqrt(dx**2 + dy**2)
- if np.any(rad > np.finfo(float).eps):
- max_rad = np.max(rad)
- dx /= max_rad
- dy /= max_rad
-
- rad = np.sqrt(dx**2 + dy**2)
- angle = np.arctan2(-dy, -dx) / np.pi
-
- bin_real = (angle + 1) / 2 * (num_bins - 1)
- bin_left = np.floor(bin_real).astype(int)
- bin_right = (bin_left + 1) % num_bins
- w = (bin_real - bin_left.astype(np.float32))[..., None]
- flow_img = (1 -
- w) * color_wheel[bin_left, :] + w * color_wheel[bin_right, :]
- small_ind = rad <= 1
- flow_img[small_ind] = 1 - rad[small_ind, None] * (1 - flow_img[small_ind])
- flow_img[np.logical_not(small_ind)] *= 0.75
-
- flow_img[ignore_inds, :] = 0
-
- return flow_img
-
-
-def make_color_wheel(bins=None):
- """Build a color wheel.
-
- Args:
- bins (list or tuple, optional): Specify the number of bins for each
- color range, corresponding to six ranges: red -> yellow,
- yellow -> green, green -> cyan, cyan -> blue, blue -> magenta,
- magenta -> red. [15, 6, 4, 11, 13, 6] is used for default
- (see Middlebury).
-
- Returns:
- ndarray: Color wheel of shape (total_bins, 3).
- """
- if bins is None:
- bins = [15, 6, 4, 11, 13, 6]
- assert len(bins) == 6
-
- RY, YG, GC, CB, BM, MR = tuple(bins)
-
- ry = [1, np.arange(RY) / RY, 0]
- yg = [1 - np.arange(YG) / YG, 1, 0]
- gc = [0, 1, np.arange(GC) / GC]
- cb = [0, 1 - np.arange(CB) / CB, 1]
- bm = [np.arange(BM) / BM, 0, 1]
- mr = [1, 0, 1 - np.arange(MR) / MR]
-
- num_bins = RY + YG + GC + CB + BM + MR
-
- color_wheel = np.zeros((3, num_bins), dtype=np.float32)
-
- col = 0
- for i, color in enumerate([ry, yg, gc, cb, bm, mr]):
- for j in range(3):
- color_wheel[j, col:col + bins[i]] = color[j]
- col += bins[i]
-
- return color_wheel.T
diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/necks/multilevel_neck.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/necks/multilevel_neck.py
deleted file mode 100644
index 766144d8136326a1fab5906a153a0c0df69b6b60..0000000000000000000000000000000000000000
--- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/necks/multilevel_neck.py
+++ /dev/null
@@ -1,70 +0,0 @@
-import torch.nn as nn
-import torch.nn.functional as F
-from annotator.uniformer.mmcv.cnn import ConvModule
-
-from ..builder import NECKS
-
-
-@NECKS.register_module()
-class MultiLevelNeck(nn.Module):
- """MultiLevelNeck.
-
- A neck structure connecting a ViT backbone and decoder heads.
- Args:
- in_channels (List[int]): Number of input channels per scale.
- out_channels (int): Number of output channels (used at each scale).
- scales (List[int]): Scale factors for each input feature map.
- norm_cfg (dict): Config dict for normalization layer. Default: None.
- act_cfg (dict): Config dict for activation layer in ConvModule.
- Default: None.
- """
-
- def __init__(self,
- in_channels,
- out_channels,
- scales=[0.5, 1, 2, 4],
- norm_cfg=None,
- act_cfg=None):
- super(MultiLevelNeck, self).__init__()
- assert isinstance(in_channels, list)
- self.in_channels = in_channels
- self.out_channels = out_channels
- self.scales = scales
- self.num_outs = len(scales)
- self.lateral_convs = nn.ModuleList()
- self.convs = nn.ModuleList()
- for in_channel in in_channels:
- self.lateral_convs.append(
- ConvModule(
- in_channel,
- out_channels,
- kernel_size=1,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg))
- for _ in range(self.num_outs):
- self.convs.append(
- ConvModule(
- out_channels,
- out_channels,
- kernel_size=3,
- padding=1,
- stride=1,
- norm_cfg=norm_cfg,
- act_cfg=act_cfg))
-
- def forward(self, inputs):
- assert len(inputs) == len(self.in_channels)
- inputs = [
- lateral_conv(inputs[i])
- for i, lateral_conv in enumerate(self.lateral_convs)
- ]
-        # if len(inputs) != self.num_outs, replicate the single input across all scales
- if len(inputs) == 1:
- inputs = [inputs[0] for _ in range(self.num_outs)]
- outs = []
- for i in range(self.num_outs):
- x_resize = F.interpolate(
- inputs[i], scale_factor=self.scales[i], mode='bilinear')
- outs.append(self.convs[i](x_resize))
- return tuple(outs)
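A minimal sketch of what the forward pass above computes, using plain torch modules in place of mmcv's ConvModule (norm/activation configs omitted). It assumes a single 768-channel ViT feature map, which the neck replicates across its four scales exactly as the len(inputs) == 1 branch does.

import torch
import torch.nn.functional as F

x = torch.randn(1, 768, 32, 32)      # one ViT feature map
scales = [0.5, 1, 2, 4]
lateral = torch.nn.Conv2d(768, 256, kernel_size=1)                              # stand-in for the 1x1 ConvModule
convs = [torch.nn.Conv2d(256, 256, kernel_size=3, padding=1) for _ in scales]   # stand-ins for the 3x3 ConvModules

feat = lateral(x)                    # project to out_channels
outs = [conv(F.interpolate(feat, scale_factor=s, mode='bilinear')) for s, conv in zip(scales, convs)]
print([tuple(o.shape) for o in outs])
# [(1, 256, 16, 16), (1, 256, 32, 32), (1, 256, 64, 64), (1, 256, 128, 128)]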
diff --git a/spaces/krazyxki/V-1488abed/src/proxy/rewriters/transform-kobold-payload.ts b/spaces/krazyxki/V-1488abed/src/proxy/rewriters/transform-kobold-payload.ts
deleted file mode 100644
index 63fe5452b42cb3ad1f54e58efdae7d29b7e2a6a5..0000000000000000000000000000000000000000
--- a/spaces/krazyxki/V-1488abed/src/proxy/rewriters/transform-kobold-payload.ts
+++ /dev/null
@@ -1,92 +0,0 @@
-import { config } from "../../config";
-import type { ExpressHttpProxyReqCallback } from ".";
-import { logger } from "../../logger";
-
-// Kobold requests look like this:
-// body:
-// {
-// prompt: "Aqua is character from Konosuba anime. Aqua is a goddess, before life in the Fantasy World, she was a goddess of water who guided humans to the afterlife. Aqua looks like young woman with beauty no human could match. Aqua has light blue hair, blue eyes, slim figure, long legs, wide hips, blue waist-long hair that is partially tied into a loop with a spherical clip. Aqua's measurements are 83-56-83 cm. Aqua's height 157cm. Aqua wears sleeveless dark-blue dress with white trimmings, extremely short dark blue miniskirt, green bow around her chest with a blue gem in the middle, detached white sleeves with blue and golden trimmings, thigh-high blue heeled boots over white stockings with blue trimmings. Aqua is very strong in water magic, but a little stupid, so she does not always use it to the place. Aqua is high-spirited, cheerful, carefree. Aqua rarely thinks about the consequences of her actions and always acts or speaks on her whims. Because very easy to taunt Aqua with jeers or lure her with praises.\n" +
-// "Aqua's personality: high-spirited, likes to party, carefree, cheerful.\n" +
-// 'Circumstances and context of the dialogue: Aqua is standing in the city square and is looking for new followers\n' +
-// 'This is how Aqua should talk\n' +
-// 'You: Hi Aqua, I heard you like to spend time in the pub.\n' +
-// "Aqua: *excitedly* Oh my goodness, yes! I just love spending time at the pub! It's so much fun to talk to all the adventurers and hear about their exciting adventures! And you are?\n" +
-// "You: I'm a new here and I wanted to ask for your advice.\n" +
-// 'Aqua: *giggles* Oh, advice! I love giving advice! And in gratitude for that, treat me to a drink! *gives signals to the bartender*\n' +
-// 'This is how Aqua should talk\n' +
-// 'You: Hello\n' +
-// "Aqua: *excitedly* Hello there, dear! Are you new to Axel? Don't worry, I, Aqua the goddess of water, am here to help you! Do you need any assistance? And may I say, I look simply radiant today! *strikes a pose and looks at you with puppy eyes*\n" +
-// '\n' +
-// 'Then the roleplay chat between You and Aqua begins.\n' +
-// "Aqua: *She is in the town square of a city named Axel. It's morning on a Saturday and she suddenly notices a person who looks like they don't know what they're doing. She approaches him and speaks* \n" +
-// '\n' +
-// `"Are you new here? Do you need help? Don't worry! I, Aqua the Goddess of Water, shall help you! Do I look beautiful?" \n` +
-// '\n' +
-// '*She strikes a pose and looks at him with puppy eyes.*\n' +
-// 'You: test\n' +
-// 'You: test\n' +
-// 'You: t\n' +
-// 'You: test\n',
-// use_story: false,
-// use_memory: false,
-// use_authors_note: false,
-// use_world_info: false,
-// max_context_length: 2048,
-// max_length: 180,
-// rep_pen: 1.1,
-// rep_pen_range: 1024,
-// rep_pen_slope: 0.9,
-// temperature: 0.65,
-// tfs: 0.9,
-// top_a: 0,
-// top_k: 0,
-// top_p: 0.9,
-// typical: 1,
-// sampler_order: [
-// 6, 0, 1, 2,
-// 3, 4, 5
-// ],
-// singleline: false
-// }
-
-// OpenAI expects this body:
-// { model: 'gpt-3.5-turbo', temperature: 0.65, top_p: 0.9, max_tokens: 180, messages }
-// there's also a frequency_penalty but it's not clear how that maps to kobold's
-// rep_pen.
-
-// messages is an array of { role: "system" | "assistant" | "user", content: ""}
-// kobold only sends us the entire prompt. we can try to split the last line and
-// use that as the user message and put the rest in the system message
-// ideally we'd split the history into user and assistant messages, but that's
-// too much work for now
-
-/** Transforms a KoboldAI payload into an OpenAI payload. */
-export const transformKoboldPayload: ExpressHttpProxyReqCallback = (
- _proxyReq,
- req
-) => {
- const { body } = req;
- const { prompt, max_length, rep_pen, top_p, temperature } = body;
-
- const promptLines = prompt.split("\n");
- const lastLine = promptLines.pop();
- const messages = [
- { role: "system", content: promptLines.join("\n") },
- { role: "user", content: lastLine },
- ];
-
- // Kobold doesn't select a model. If we were assigned a key that supports
- // gpt4, use it, otherwise use gpt3.5-turbo. If the key was incorrectly
- // assigned, we'll get an error from OpenAI but the key will be downgraded
- // for the next request.
- const model = req.key!.isGpt4 ? "gpt-4" : "gpt-3.5-turbo";
- const newBody = {
- model,
- temperature,
- top_p,
- frequency_penalty: rep_pen, // remove this if model turns schizo
- max_tokens: max_length,
- messages,
- };
- req.body = newBody;
-};
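The comment block above describes the whole Kobold-to-OpenAI mapping; the heart of it is the prompt split, since the remaining fields are just renamed. A minimal Python restatement of that split, with a hypothetical helper name:

def kobold_prompt_to_messages(prompt: str) -> list:
    lines = prompt.split("\n")
    last_line = lines.pop()                      # the final line becomes the user message
    return [
        {"role": "system", "content": "\n".join(lines)},   # everything else is the system message
        {"role": "user", "content": last_line},
    ]

print(kobold_prompt_to_messages("Persona and chat history...\nYou: Hello"))
# [{'role': 'system', 'content': 'Persona and chat history...'},
#  {'role': 'user', 'content': 'You: Hello'}]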
diff --git a/spaces/kukuhtw/AutoGPT/autogpt/utils.py b/spaces/kukuhtw/AutoGPT/autogpt/utils.py
deleted file mode 100644
index e93d5ac740097ee144d1809aea31c0f7fb242fa5..0000000000000000000000000000000000000000
--- a/spaces/kukuhtw/AutoGPT/autogpt/utils.py
+++ /dev/null
@@ -1,77 +0,0 @@
-import os
-
-import requests
-import yaml
-from colorama import Fore
-from git import Repo
-
-
-def clean_input(prompt: str = ""):
- try:
- return input(prompt)
- except KeyboardInterrupt:
- print("You interrupted Auto-GPT")
- print("Quitting...")
- exit(0)
-
-
-def validate_yaml_file(file: str):
- try:
- with open(file, encoding="utf-8") as fp:
- yaml.load(fp.read(), Loader=yaml.FullLoader)
- except FileNotFoundError:
- return (False, f"The file {Fore.CYAN}`{file}`{Fore.RESET} wasn't found")
- except yaml.YAMLError as e:
- return (
- False,
- f"There was an issue while trying to read with your AI Settings file: {e}",
- )
-
- return (True, f"Successfully validated {Fore.CYAN}`{file}`{Fore.RESET}!")
-
-
-def readable_file_size(size, decimal_places=2):
- """Converts the given size in bytes to a readable format.
- Args:
- size: Size in bytes
- decimal_places (int): Number of decimal places to display
- """
- for unit in ["B", "KB", "MB", "GB", "TB"]:
- if size < 1024.0:
- break
- size /= 1024.0
- return f"{size:.{decimal_places}f} {unit}"
-
-
-def get_bulletin_from_web() -> str:
- try:
- response = requests.get(
- "https://raw.githubusercontent.com/Significant-Gravitas/Auto-GPT/master/BULLETIN.md"
- )
-        if response.status_code == 200:
-            return response.text
-        return ""
-    except requests.RequestException:
-        return ""
-
-
-def get_current_git_branch() -> str:
- try:
- repo = Repo(search_parent_directories=True)
- branch = repo.active_branch
- return branch.name
-    except Exception:
- return ""
-
-
-def get_latest_bulletin() -> str:
- exists = os.path.exists("CURRENT_BULLETIN.md")
- current_bulletin = ""
-    if exists:
-        with open("CURRENT_BULLETIN.md", "r", encoding="utf-8") as bulletin_file:
-            current_bulletin = bulletin_file.read()
- new_bulletin = get_bulletin_from_web()
- is_new_news = new_bulletin != current_bulletin
-
-    if new_bulletin and is_new_news:
-        with open("CURRENT_BULLETIN.md", "w", encoding="utf-8") as bulletin_file:
-            bulletin_file.write(new_bulletin)
- return f" {Fore.RED}::UPDATED:: {Fore.CYAN}{new_bulletin}{Fore.RESET}"
- return current_bulletin
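A quick sanity check of readable_file_size above, assuming the function is importable from this module: the loop divides by 1024 until the value drops below 1024 and keeps the matching unit (illustrative values only).

print(readable_file_size(512))           # 512.00 B
print(readable_file_size(1536))          # 1.50 KB
print(readable_file_size(5 * 1024**3))   # 5.00 GB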
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/FontFile.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/FontFile.py
deleted file mode 100644
index 5ec0a6632e3182382467688662ebc5e6c324da91..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/FontFile.py
+++ /dev/null
@@ -1,110 +0,0 @@
-#
-# The Python Imaging Library
-# $Id$
-#
-# base class for raster font file parsers
-#
-# history:
-# 1997-06-05 fl created
-# 1997-08-19 fl restrict image width
-#
-# Copyright (c) 1997-1998 by Secret Labs AB
-# Copyright (c) 1997-1998 by Fredrik Lundh
-#
-# See the README file for information on usage and redistribution.
-#
-
-
-import os
-
-from . import Image, _binary
-
-WIDTH = 800
-
-
-def puti16(fp, values):
- """Write network order (big-endian) 16-bit sequence"""
- for v in values:
- if v < 0:
- v += 65536
- fp.write(_binary.o16be(v))
-
-
-class FontFile:
- """Base class for raster font file handlers."""
-
- bitmap = None
-
- def __init__(self):
- self.info = {}
- self.glyph = [None] * 256
-
- def __getitem__(self, ix):
- return self.glyph[ix]
-
- def compile(self):
- """Create metrics and bitmap"""
-
- if self.bitmap:
- return
-
- # create bitmap large enough to hold all data
- h = w = maxwidth = 0
- lines = 1
- for glyph in self:
- if glyph:
- d, dst, src, im = glyph
- h = max(h, src[3] - src[1])
- w = w + (src[2] - src[0])
- if w > WIDTH:
- lines += 1
- w = src[2] - src[0]
- maxwidth = max(maxwidth, w)
-
- xsize = maxwidth
- ysize = lines * h
-
- if xsize == 0 and ysize == 0:
- return ""
-
- self.ysize = h
-
- # paste glyphs into bitmap
- self.bitmap = Image.new("1", (xsize, ysize))
- self.metrics = [None] * 256
- x = y = 0
- for i in range(256):
- glyph = self[i]
- if glyph:
- d, dst, src, im = glyph
- xx = src[2] - src[0]
- # yy = src[3] - src[1]
- x0, y0 = x, y
- x = x + xx
- if x > WIDTH:
- x, y = 0, y + h
- x0, y0 = x, y
- x = xx
- s = src[0] + x0, src[1] + y0, src[2] + x0, src[3] + y0
- self.bitmap.paste(im.crop(src), s)
- self.metrics[i] = d, dst, s
-
- def save(self, filename):
- """Save font"""
-
- self.compile()
-
- # font data
- self.bitmap.save(os.path.splitext(filename)[0] + ".pbm", "PNG")
-
- # font metrics
- with open(os.path.splitext(filename)[0] + ".pil", "wb") as fp:
- fp.write(b"PILfont\n")
- fp.write(f";;;;;;{self.ysize};\n".encode("ascii")) # HACK!!!
- fp.write(b"DATA\n")
- for id in range(256):
- m = self.metrics[id]
- if not m:
- puti16(fp, [0] * 10)
- else:
- puti16(fp, m[0] + m[1] + m[2])
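A small sketch of what puti16 above writes, under the assumption that PIL's _binary.o16be is equivalent to struct.pack(">H", ...): negative values are wrapped into the unsigned 16-bit range before being emitted big-endian.

import io
import struct

def puti16_sketch(fp, values):
    for v in values:
        if v < 0:
            v += 65536                       # two's-complement wrap into 0..65535
        fp.write(struct.pack(">H", v))       # big-endian unsigned 16-bit

buf = io.BytesIO()
puti16_sketch(buf, [1, -1])
assert buf.getvalue() == b"\x00\x01\xff\xff"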
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/_tkinter_finder.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/_tkinter_finder.py
deleted file mode 100644
index 5cd7e9b1fb28f118a4a444ad150f8a3865527358..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/PIL/_tkinter_finder.py
+++ /dev/null
@@ -1,23 +0,0 @@
-""" Find compiled module linking to Tcl / Tk libraries
-"""
-import sys
-import tkinter
-from tkinter import _tkinter as tk
-
-from ._deprecate import deprecate
-
-try:
- if hasattr(sys, "pypy_find_executable"):
- TKINTER_LIB = tk.tklib_cffi.__file__
- else:
- TKINTER_LIB = tk.__file__
-except AttributeError:
- # _tkinter may be compiled directly into Python, in which case __file__ is
- # not available. load_tkinter_funcs will check the binary first in any case.
- TKINTER_LIB = None
-
-tk_version = str(tkinter.TkVersion)
-if tk_version == "8.4":
- deprecate(
- "Support for Tk/Tcl 8.4", 10, action="Please upgrade to Tk/Tcl 8.5 or newer"
- )
diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-b9cd29e6.js b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-b9cd29e6.js
deleted file mode 100644
index f221f861551df22515d9e115d28b6ec480a7b054..0000000000000000000000000000000000000000
--- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-b9cd29e6.js
+++ /dev/null
@@ -1,5 +0,0 @@
-import{S as L,i as Z,s as F,B as A,C as _,g as p,E as y,F as z,q as v,G as T,r as ne,aj as J,H as C,N as G,I as D,K as R,J as V,a0 as ce,M as B,D as I,e as E,m as N,p as H,t as j,n as O,x as ue,$ as _e,f as me,h as ge,j as de,l as K,o as Y,y as he}from"./index-8c3da1d9.js";import{g as be}from"./color-75f3ed8f.js";import{B as pe}from"./Button-62634b34.js";import{B as ve}from"./BlockLabel-98ef75ee.js";import{E as ke}from"./Empty-5d52e655.js";/* empty css */function ye(t){let e,n,l;return{c(){e=A("svg"),n=A("path"),l=A("path"),_(n,"fill","currentColor"),_(n,"d","M12 15H5a3 3 0 0 1-3-3v-2a3 3 0 0 1 3-3h5V5a1 1 0 0 0-1-1H3V2h6a3 3 0 0 1 3 3zM5 9a1 1 0 0 0-1 1v2a1 1 0 0 0 1 1h5V9zm15 14v2a1 1 0 0 0 1 1h5v-4h-5a1 1 0 0 0-1 1z"),_(l,"fill","currentColor"),_(l,"d","M2 30h28V2Zm26-2h-7a3 3 0 0 1-3-3v-2a3 3 0 0 1 3-3h5v-2a1 1 0 0 0-1-1h-6v-2h6a3 3 0 0 1 3 3Z"),_(e,"xmlns","http://www.w3.org/2000/svg"),_(e,"xmlns:xlink","http://www.w3.org/1999/xlink"),_(e,"aria-hidden","true"),_(e,"role","img"),_(e,"class","iconify iconify--carbon"),_(e,"width","100%"),_(e,"height","100%"),_(e,"preserveAspectRatio","xMidYMid meet"),_(e,"viewBox","0 0 32 32")},m(a,s){p(a,e,s),y(e,n),y(e,l)},p:z,i:z,o:z,d(a){a&&v(e)}}}class oe extends L{constructor(e){super(),Z(this,e,null,ye,F,{})}}function P(t,e,n){const l=t.slice();return l[18]=e[n][0],l[24]=e[n][1],l}function Q(t,e,n){const l=t.slice();return l[18]=e[n][0],l[19]=e[n][1],l[21]=n,l}function U(t,e,n){const l=t.slice();return l[19]=e[n][0],l[22]=e[n][1],l[21]=n,l}function we(t){let e,n,l=t[1]&&W(),a=t[0],s=[];for(let o=0;o-1
- 0
- +1 `,_(e,"class","color-legend svelte-19on2m6"),_(e,"data-testid","highlighted-text:color-legend")},m(n,l){p(n,e,l)},d(n){n&&v(e)}}}function X(t){let e,n,l=t[18]+"",a,s,o;return{c(){e=T("span"),n=T("span"),a=D(l),s=C(),_(n,"class","text svelte-19on2m6"),_(e,"class","textspan score-text svelte-19on2m6"),_(e,"style",o="background-color: rgba("+(t[24]<0?"128, 90, 213,"+-t[24]:"239, 68, 60,"+t[24])+")")},m(r,i){p(r,e,i),y(e,n),y(n,a),y(e,s)},p(r,i){i&1&&l!==(l=r[18]+"")&&R(a,l),i&1&&o!==(o="background-color: rgba("+(r[24]<0?"128, 90, 213,"+-r[24]:"239, 68, 60,"+r[24])+")")&&_(e,"style",o)},d(r){r&&v(e)}}}function $(t){let e,n=Object.entries(t[3]),l=[];for(let a=0;af(h),S=h=>f(h),se=()=>b(),ae=()=>b(),ie=(h,k,w)=>{g("select",{index:h,value:[k,w]})};return t.$$set=h=>{"value"in h&&n(0,a=h.value),"show_legend"in h&&n(1,s=h.show_legend),"color_map"in h&&n(9,o=h.color_map),"selectable"in h&&n(2,r=h.selectable)},t.$$.update=()=>{if(t.$$.dirty&513){let h=function(){for(const k in o){const w=o[k].trim();w in J?n(3,c[k]=J[w],c):n(3,c[k]={primary:l?d(o[k],1):o[k],secondary:l?d(o[k],.5):o[k]},c)}};if(o||n(9,o={}),a.length>0){for(let[k,w]of a)if(w!==null)if(typeof w=="string"){if(n(5,M="categories"),!(w in o)){let q=be(Object.keys(o).length);n(9,o[w]=q,o)}}else n(5,M="scores")}h()}},[a,s,r,c,u,M,g,f,b,o,m,S,se,ae,ie]}class Me extends L{constructor(e){super(),Z(this,e,je,Te,F,{value:0,show_legend:1,color_map:9,selectable:2})}}function te(t){let e,n;return e=new ve({props:{Icon:oe,label:t[6],float:!1,disable:typeof t[0].container=="boolean"&&!t[0].container}}),{c(){E(e.$$.fragment)},m(l,a){N(e,l,a),n=!0},p(l,a){const s={};a&64&&(s.label=l[6]),a&1&&(s.disable=typeof l[0].container=="boolean"&&!l[0].container),e.$set(s)},i(l){n||(H(e.$$.fragment,l),n=!0)},o(l){j(e.$$.fragment,l),n=!1},d(l){O(e,l)}}}function Be(t){let e,n;return e=new ke({props:{$$slots:{default:[Ee]},$$scope:{ctx:t}}}),{c(){E(e.$$.fragment)},m(l,a){N(e,l,a),n=!0},p(l,a){const s={};a&8192&&(s.$$scope={dirty:a,ctx:l}),e.$set(s)},i(l){n||(H(e.$$.fragment,l),n=!0)},o(l){j(e.$$.fragment,l),n=!1},d(l){O(e,l)}}}function Ce(t){let e,n;return e=new Me({props:{selectable:t[7],value:t[4],show_legend:t[5],color_map:t[0].color_map}}),e.$on("select",t[11]),{c(){E(e.$$.fragment)},m(l,a){N(e,l,a),n=!0},p(l,a){const s={};a&128&&(s.selectable=l[7]),a&16&&(s.value=l[4]),a&32&&(s.show_legend=l[5]),a&1&&(s.color_map=l[0].color_map),e.$set(s)},i(l){n||(H(e.$$.fragment,l),n=!0)},o(l){j(e.$$.fragment,l),n=!1},d(l){O(e,l)}}}function Ee(t){let e,n;return e=new oe({}),{c(){E(e.$$.fragment)},m(l,a){N(e,l,a),n=!0},i(l){n||(H(e.$$.fragment,l),n=!0)},o(l){j(e.$$.fragment,l),n=!1},d(l){O(e,l)}}}function Ne(t){let e,n,l,a,s,o,r;const i=[t[8]];let c={};for(let f=0;f{u=null}),Y());let S=a;a=M(f),a===S?g[a].p(f,b):(K(),j(g[S],1,1,()=>{g[S]=null}),Y(),s=g[a],s?s.p(f,b):(s=g[a]=d[a](f),s.c()),H(s,1),s.m(o.parentNode,o))},i(f){r||(H(e.$$.fragment,f),H(u),H(s),r=!0)},o(f){j(e.$$.fragment,f),j(u),j(s),r=!1},d(f){O(e,f),f&&v(n),u&&u.d(f),f&&v(l),g[a].d(f),f&&v(o)}}}function Oe(t){let e,n;return e=new pe({props:{test_id:"highlighted-text",visible:t[3],elem_id:t[1],elem_classes:t[2],padding:!1,disable:typeof t[0].container=="boolean"&&!t[0].container,$$slots:{default:[Ne]},$$scope:{ctx:t}}}),{c(){E(e.$$.fragment)},m(l,a){N(e,l,a),n=!0},p(l,[a]){const s={};a&8&&(s.visible=l[3]),a&2&&(s.elem_id=l[1]),a&4&&(s.elem_classes=l[2]),a&1&&(s.disable=typeof 
l[0].container=="boolean"&&!l[0].container),a&8689&&(s.$$scope={dirty:a,ctx:l}),e.$set(s)},i(l){n||(H(e.$$.fragment,l),n=!0)},o(l){j(e.$$.fragment,l),n=!1},d(l){O(e,l)}}}function Se(t,e,n){let{elem_id:l=""}=e,{elem_classes:a=[]}=e,{visible:s=!0}=e,{value:o}=e,r,{show_legend:i}=e,{color_map:c={}}=e,{label:u="Highlighted Text"}=e,{style:d={}}=e,{selectable:g=!1}=e,{loading_status:M}=e;const f=ne();function b(m){he.call(this,t,m)}return t.$$set=m=>{"elem_id"in m&&n(1,l=m.elem_id),"elem_classes"in m&&n(2,a=m.elem_classes),"visible"in m&&n(3,s=m.visible),"value"in m&&n(4,o=m.value),"show_legend"in m&&n(5,i=m.show_legend),"color_map"in m&&n(9,c=m.color_map),"label"in m&&n(6,u=m.label),"style"in m&&n(0,d=m.style),"selectable"in m&&n(7,g=m.selectable),"loading_status"in m&&n(8,M=m.loading_status)},t.$$.update=()=>{t.$$.dirty&513&&!d.color_map&&Object.keys(c).length&&n(0,d.color_map=c,d),t.$$.dirty&1040&&o!==r&&(n(10,r=o),f("change"))},[d,l,a,s,o,i,u,g,M,c,r,b]}class Ve extends L{constructor(e){super(),Z(this,e,Se,Oe,F,{elem_id:1,elem_classes:2,visible:3,value:4,show_legend:5,color_map:9,label:6,style:0,selectable:7,loading_status:8})}}const Le=Ve,Ze=["static"],Fe=t=>({type:{payload:"Array<[string, string | number]>"},description:{payload:"list of text spans and corresponding label / value"}});export{Le as Component,Fe as document,Ze as modes};
-//# sourceMappingURL=index-b9cd29e6.js.map
diff --git a/spaces/lamini/instruct-3b-playground/README.md b/spaces/lamini/instruct-3b-playground/README.md
deleted file mode 100644
index 5c76a5f4d46dea9fb57f6b44bd7fbe0a10787c3e..0000000000000000000000000000000000000000
--- a/spaces/lamini/instruct-3b-playground/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Instruct 3b Playground
-emoji: 🐨
-colorFrom: purple
-colorTo: gray
-sdk: gradio
-sdk_version: 3.27.0
-app_file: app.py
-pinned: false
-license: cc-by-4.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
-Using the lamini/instruct-tuned-3b model; reference at https://huggingface.co/lamini/instruct-tuned-3b
\ No newline at end of file
diff --git a/spaces/lewisrxliu/1/README.md b/spaces/lewisrxliu/1/README.md
deleted file mode 100644
index 9300c19fde80a0c0131ab2cc09a74aad50faefd7..0000000000000000000000000000000000000000
--- a/spaces/lewisrxliu/1/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
----
-title: Lewis
-emoji: 🏢
-colorFrom: yellow
-colorTo: purple
-sdk: gradio
-sdk_version: 3.22.1
-app_file: app.py
-pinned: false
-duplicated_from: lewisrxliu/lewis
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/lewiswu1209/MockingBird/web/api/__init__.py b/spaces/lewiswu1209/MockingBird/web/api/__init__.py
deleted file mode 100644
index a0c8726d6b4456830e947b7165cf77ff1879361f..0000000000000000000000000000000000000000
--- a/spaces/lewiswu1209/MockingBird/web/api/__init__.py
+++ /dev/null
@@ -1,16 +0,0 @@
-from flask import Blueprint
-from flask_restx import Api
-from .audio import api as audio
-from .synthesizer import api as synthesizer
-
-api_blueprint = Blueprint('api', __name__, url_prefix='/api')
-
-api = Api(
- app=api_blueprint,
- title='Mocking Bird',
- version='1.0',
- description='My API'
-)
-
-api.add_namespace(audio)
-api.add_namespace(synthesizer)
\ No newline at end of file
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Dc Unlocker 2 Client Free High Quality Username And Password.rar.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Dc Unlocker 2 Client Free High Quality Username And Password.rar.md
deleted file mode 100644
index 0600fe85da566af33067429e8fb78af4898c68b4..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Dc Unlocker 2 Client Free High Quality Username And Password.rar.md
+++ /dev/null
@@ -1,6 +0,0 @@
-dc unlocker 2 client free username and password.rar Download Zip --->>> https://bytlly.com/2uGwrz
-
-dc unlocker 2 client free username and password.rar. 4d29de3e1b
-
-
-
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Dct4 Calculator 54 Download PORTABLE.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Dct4 Calculator 54 Download PORTABLE.md
deleted file mode 100644
index 070a3a11323e453afc4397d2e8e87a01993a920a..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Dct4 Calculator 54 Download PORTABLE.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Dct4 Calculator 54 Download Download » https://bytlly.com/2uGvBU
-
-Beranda » Nokia Mobile Repair » Nokia DCT4(+) RPL Calculator ... This is a free release, with the hope will be usefull to the community. Download OR 4d29de3e1b
-
-
-
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/FULL Dialog Naskah Drama Sangkuriang Bahasa Jawa 5 Orang 2021.md b/spaces/lincquiQcaudo/Top-20-Diffusion/FULL Dialog Naskah Drama Sangkuriang Bahasa Jawa 5 Orang 2021.md
deleted file mode 100644
index 83e42834767797dd9c2fc071fd50de300e3438c9..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/FULL Dialog Naskah Drama Sangkuriang Bahasa Jawa 5 Orang 2021.md
+++ /dev/null
@@ -1,132 +0,0 @@
-
-FULL Dialog Naskah Drama Sangkuriang Bahasa Jawa 5 Orang
-
-Apakah Anda suka menonton atau membaca drama seri? Jika ya, maka Anda pasti tidak asing dengan drama Sangkuriang. Drama ini adalah salah satu drama seri yang paling populer dan terkenal di Indonesia, khususnya di daerah Jawa. Drama ini mengisahkan tentang legenda Sangkuriang, seorang putra yang jatuh cinta dengan ibunya sendiri tanpa menyadarinya. Kisah ini penuh dengan konflik, intrik, dan nilai-nilai budaya yang dapat memberikan pelajaran bagi para penonton.
-
-Drama Sangkuriang dibintangi oleh lima orang, yaitu Sangkuriang, Dayang Sumbi, Tumang, Si Gareng, dan Ki Buyut. Drama ini berdurasi panjang dan diiringi oleh musik tradisional. Drama ini banyak ditayangkan di berbagai media, seperti radio, televisi, dan internet. Anda dapat menonton atau mendengarkan drama ini secara online di SoundCloud atau SlideServe. Anda juga dapat mendownload dialog naskah drama Sangkuriang bahasa Jawa 5 orang secara gratis di situs-situs tersebut.
-FULL Dialog Naskah Drama Sangkuriang Bahasa Jawa 5 Orang DOWNLOAD ⭐ https://bytlly.com/2uGyp2
-
-Dialog naskah drama Sangkuriang bahasa Jawa 5 orang ditulis dengan bahasa Jawa yang mudah dipahami dan menarik. Dialog ini menggambarkan dengan detail setiap adegan dan peristiwa yang terjadi dalam drama. Dialog ini juga mengandung unsur-unsur humor, romantis, dan dramatis yang membuat drama ini semakin seru dan menyenangkan. Dialog ini juga sesuai dengan kisah asli Sangkuriang yang berasal dari cerita rakyat Jawa.
-
-Jika Anda ingin membuat dialog naskah drama Sangkuriang bahasa Jawa 5 orang sendiri, Anda dapat mengikuti beberapa tips berikut ini:
-
-
-Pilih tokoh-tokoh yang sesuai dengan karakter dan peran dalam drama.
-Tentukan alur cerita yang jelas dan logis.
-Gunakan bahasa Jawa yang baku dan sopan.
-Tambahkan unsur-unsur yang dapat menarik perhatian penonton, seperti humor, romantis, dramatis, atau aksi.
-Pastikan dialog naskah drama Sangkuriang bahasa Jawa 5 orang Anda tidak mengandung unsur-unsur yang melanggar norma atau etika.
-
-
-Dengan membuat dialog naskah drama Sangkuriang bahasa Jawa 5 orang sendiri, Anda dapat mengembangkan kreativitas dan kemampuan menulis Anda. Anda juga dapat menunjukkan apresiasi Anda terhadap budaya dan seni Indonesia. Selain itu, Anda dapat berbagi karya Anda dengan orang lain melalui media sosial atau situs web.
-
-Demikianlah artikel tentang FULL Dialog Naskah Drama Sangkuriang Bahasa Jawa 5 Orang. Semoga artikel ini bermanfaat dan menginspirasi Anda untuk mencoba membuat dialog naskah drama sendiri. Terima kasih telah membaca artikel ini.
-Contoh Dialog Naskah Drama Sangkuriang Bahasa Jawa 5 Orang
-
-Berikut ini adalah contoh dialog naskah drama Sangkuriang bahasa Jawa 5 orang yang dapat Anda simak. Dialog ini diambil dari salah satu situs web yang menyediakan dialog naskah drama secara gratis. Anda dapat mengunduh dialog ini di SlideServe dengan mengklik tautan berikut: FULL Dialog Naskah Drama Sangkuriang Bahasa Jawa 5 Orang .
-
-
-Narator: Cerita ini dimulai ketika Dayang Sumbi sedang menjahit di tepi sungai. Tiba-tiba, benangnya terlepas dan jatuh ke sungai. Dia merasa sedih dan berdoa agar benangnya kembali.
-
-Dayang Sumbi: Ya Tuhan, tolong kembalikan benangku. Aku akan memberikan apapun yang Engkau minta.
-
-Tumang: (muncul dari sungai dengan membawa benang) Halo, Dayang Sumbi. Aku Tumang, seekor anjing peliharaan Batara Guru. Aku mendengar doamu dan aku membawakan benangmu.
-
-Dayang Sumbi: Wah, terima kasih banyak, Tumang. Kamu sangat baik hati. Apa yang bisa aku berikan untukmu?
-
-Tumang: Aku tidak menginginkan apapun darimu. Aku hanya ingin menjadi temanmu.
-
-Dayang Sumbi: Baiklah, aku bersedia menjadi temanmu. Ayo, ikut aku ke rumahku.
-
-Narator: Maka, Dayang Sumbi dan Tumang pun menjadi sahabat karib. Mereka selalu bersama-sama dan bahagia. Suatu hari, Dayang Sumbi merasa kesepian dan ingin memiliki seorang anak.
-
-Dayang Sumbi: Ya Tuhan, tolong berikan aku seorang anak. Aku merindukan kehangatan seorang keluarga.
-
-Tumang: (mendengar doa Dayang Sumbi) Dayang Sumbi, aku ada di sini untukmu. Aku akan memberikan apa yang kamu inginkan.
-
-Dayang Sumbi: Apa maksudmu, Tumang?
-
-Tumang: Aku mencintaimu, Dayang Sumbi. Aku ingin menjadi suamimu dan ayah dari anakmu.
-
-Dayang Sumbi: (terkejut) Apa? Kamu gila, Tumang? Kamu adalah seekor anjing, bukan manusia. Bagaimana mungkin kita bisa bersatu?
-
-Tumang: Percayalah padaku, Dayang Sumbi. Aku bukan anjing biasa. Aku adalah anjing sakti yang dapat berubah menjadi manusia. Aku akan membuktikannya padamu.
-
-Dayang Sumbi: Baiklah, tunjukkan padaku keajaibanmu.
-
-Tumang: (berubah menjadi manusia tampan) Lihatlah, Dayang Sumbi. Inilah wujud asliku. Apakah kamu masih menolakku?
-
-Dayang Sumbi: (terpesona) Wow, kamu sangat tampan, Tumang. Aku tidak menyangka kamu memiliki wajah seperti ini.
-
-Tumang: Jadi, apakah kamu mau menerimaku sebagai suamimu?
-
-Dayang Sumbi: (ragu-ragu) Hmm, aku tidak tahu, Tumang. Ini semua terlalu cepat dan aneh bagiku.
-
-Tumang: Jangan ragu-ragu, Dayang Sumbi. Aku akan mencintaimu sepanjang hidupku dan menjagamu dengan setia. Ayo, mari kita nikah sekarang juga.
-
-Dayang Sumbi: (tersenyum) Baiklah, Tumang. Aku bersedia menikah denganmu.
-
-Narator: Maka, Dayang Sumbi dan Tumang pun menikah secara sederhana di rumah mereka. Mereka hidup bahagia dan harmonis. Tak lama kemudian, Dayang Sumbi hamil dan melahirkan seorang anak laki-laki yang tampan dan kuat. Mereka menamainya Sangkuriang.
-
-
-Analisis Dialog Naskah Drama Sangkuriang Bahasa Jawa 5 Orang
-
-Dialog naskah drama Sangkuriang bahasa Jawa 5 orang di atas memiliki beberapa ciri khas yang dapat kita analisis sebagai berikut:
-
-
-Dialog ini menggunakan bahasa Indonesia yang baku dan sopan.
-Dialog ini mengandung unsur-unsur humor, romantis, dan dramatis yang membuat penonton tertarik dan terhibur.
-Dialog ini sesuai dengan kisah asli Sangkuriang yang berasal dari cerita rakyat Jawa.
-Dialog ini menggunakan kata-kata yang relevan dengan keyword FULL Dialog Naskah Drama Sangkuriang Bahasa Jawa 5 Orang.
-Dialog ini memiliki struktur yang jelas dan logis.
-
-
-Dengan demikian, dialog naskah drama Sangkuriang bahasa Jawa 5 orang ini dapat dikatakan sebagai contoh yang baik dan berkualitas.
-Kelebihan dan Kekurangan Dialog Naskah Drama Sangkuriang Bahasa Jawa 5 Orang
-
-Dialog naskah drama Sangkuriang bahasa Jawa 5 orang memiliki beberapa kelebihan dan kekurangan yang dapat kita evaluasi sebagai berikut:
-
-
-Kelebihan
-
-
-Dialog ini dapat meningkatkan minat dan apresiasi masyarakat terhadap budaya dan seni Jawa.
-Dialog ini dapat menghibur dan mendidik penonton dengan menyajikan kisah yang menarik dan bermakna.
-Dialog ini dapat mengasah kreativitas dan kemampuan berbahasa Jawa para penulis dan pemain drama.
-Dialog ini dapat memperkenalkan legenda Sangkuriang kepada generasi muda yang mungkin belum mengenalnya.
-Dialog ini dapat mempromosikan pariwisata dan keindahan alam Jawa kepada penonton dari luar daerah.
-
-
-Kekurangan
-
-
-Dialog ini mungkin tidak sesuai dengan selera dan pemahaman penonton dari daerah lain yang tidak familiar dengan bahasa Jawa.
-Dialog ini mungkin menimbulkan kontroversi dan kritik dari pihak-pihak yang tidak setuju dengan interpretasi atau penyimpangan dari kisah asli Sangkuriang.
-Dialog ini mungkin mengalami kesulitan dalam hal produksi, distribusi, dan pemasaran karena keterbatasan sumber daya dan sarana.
-Dialog ini mungkin menghadapi persaingan yang ketat dari drama-drama lain yang lebih modern dan populer.
-Dialog ini mungkin kurang mendapatkan dukungan dan apresiasi dari pemerintah dan lembaga-lembaga terkait.
-
-
-Dengan demikian, dialog naskah drama Sangkuriang bahasa Jawa 5 orang ini memiliki kelebihan dan kekurangan yang perlu diperhatikan dan diperbaiki oleh para penulis, pemain, produser, dan penonton drama.
-Tips dan Trik Menulis Dialog Naskah Drama Sangkuriang Bahasa Jawa 5 Orang
-
-Menulis dialog naskah drama Sangkuriang bahasa Jawa 5 orang bukanlah hal yang mudah. Anda perlu memperhatikan beberapa hal agar dialog Anda dapat menarik dan berkualitas. Berikut ini adalah beberapa tips dan trik yang dapat Anda terapkan saat menulis dialog naskah drama Sangkuriang bahasa Jawa 5 orang:
-
-
-Lakukan riset tentang latar belakang dan konteks kisah Sangkuriang. Anda dapat mencari sumber-sumber yang terpercaya dan relevan, seperti buku, artikel, video, atau podcast. Anda juga dapat mengunjungi tempat-tempat yang berkaitan dengan kisah Sangkuriang, seperti Gunung Tangkuban Perahu, Situ Patenggang, atau Candi Ratu Boko.
-Buat outline atau kerangka dialog naskah drama Anda. Anda dapat menentukan tema, tujuan, pesan, dan moral yang ingin Anda sampaikan melalui drama Anda. Anda juga dapat menentukan tokoh-tokoh, setting, alur, konflik, klimaks, dan resolusi yang akan Anda gunakan dalam drama Anda.
-Tulis dialog naskah drama Anda dengan menggunakan bahasa Jawa yang baku dan sopan. Anda dapat menggunakan kamus atau aplikasi penerjemah untuk membantu Anda menulis dalam bahasa Jawa. Anda juga dapat meminta bantuan dari orang yang ahli atau fasih dalam berbahasa Jawa untuk mengoreksi dan memberi masukan pada dialog Anda.
-Tambahkan unsur-unsur yang dapat menarik perhatian dan emosi penonton, seperti humor, romantis, dramatis, atau aksi. Anda dapat menggunakan teknik-teknik seperti metafora, ironi, hiperbola, atau personifikasi untuk membuat dialog Anda lebih hidup dan menarik. Anda juga dapat menggunakan musik, suara, atau efek khusus untuk mendukung dialog Anda.
-Revisi dan edit dialog naskah drama Anda sebelum mempublikasikannya. Anda dapat membaca ulang dialog Anda dengan keras atau merekamnya untuk mendengarkan kembali. Anda juga dapat meminta pendapat atau saran dari orang lain yang kompeten atau berpengalaman dalam bidang drama. Anda juga dapat melakukan uji coba atau simulasi dengan menggunakan para pemain drama untuk melihat hasil akhir dari dialog Anda.
-
-
-Dengan mengikuti tips dan trik di atas, Anda dapat menulis dialog naskah drama Sangkuriang bahasa Jawa 5 orang dengan lebih mudah dan berkualitas. Selamat mencoba dan semoga berhasil!
-Kesimpulan
-
-Dialog naskah drama Sangkuriang bahasa Jawa 5 orang adalah salah satu bentuk karya sastra yang dapat menghibur dan mendidik masyarakat. Dialog ini mengisahkan tentang legenda Sangkuriang, seorang putra yang jatuh cinta dengan ibunya sendiri tanpa menyadarinya. Dialog ini ditulis dengan bahasa Jawa yang baku dan sopan, serta mengandung unsur-unsur humor, romantis, dan dramatis. Dialog ini juga sesuai dengan kisah asli Sangkuriang yang berasal dari cerita rakyat Jawa.
-
-Untuk menulis dialog naskah drama Sangkuriang bahasa Jawa 5 orang, Anda perlu memperhatikan beberapa hal, seperti riset, outline, bahasa, unsur-unsur menarik, dan revisi. Anda juga dapat mengikuti beberapa tips dan trik yang telah kami berikan di atas untuk menulis dialog naskah drama Sangkuriang bahasa Jawa 5 orang dengan lebih mudah dan berkualitas.
-
-Demikianlah artikel tentang FULL Dialog Naskah Drama Sangkuriang Bahasa Jawa 5 Orang. Semoga artikel ini bermanfaat dan menginspirasi Anda untuk mencoba menulis dialog naskah drama sendiri. Terima kasih telah membaca artikel ini.
3cee63e6c2
-
-
\ No newline at end of file
diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Kuldesak (1999) Download DVD Rip.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Kuldesak (1999) Download DVD Rip.md
deleted file mode 100644
index 49d1421a649d8a369862401be04f4e9ff8999b85..0000000000000000000000000000000000000000
--- a/spaces/lincquiQcaudo/Top-20-Diffusion/Kuldesak (1999) Download DVD Rip.md
+++ /dev/null
@@ -1,6 +0,0 @@
-Kuldesak (1999): download DVD rip Download ✒ ✒ ✒ https://bytlly.com/2uGyam
-
-omapflash v4.15 download torrent-adds 11 26 mark davis trina ... full 3gp mobile movie download Kuldesak (1999): download DVD rip ... 4d29de3e1b
-
-
-
diff --git a/spaces/lingbionlp/PhenoTagger-Demo/src/dic_ner.py b/spaces/lingbionlp/PhenoTagger-Demo/src/dic_ner.py
deleted file mode 100644
index fc486e8271760f3850d9d0d29637ade2a24dad61..0000000000000000000000000000000000000000
--- a/spaces/lingbionlp/PhenoTagger-Demo/src/dic_ner.py
+++ /dev/null
@@ -1,164 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-Created on Fri Jun 12 15:05:00 2020
-
-@author: luol2
-"""
-import sys
-import json
-import io
-from src.ssplit_tokenzier import ssplit_token_pos_lemma
-class Trie(object):
- class Node(object):
- def __init__(self):
- self.term = None
- self.next = {}
-
- def __init__(self, terms=[]):
- self.root = Trie.Node()
- for term in terms:
- self.add(term)
-
- def add(self, term):
- node = self.root
- for char in term:
- if not char in node.next:
- node.next[char] = Trie.Node()
- node = node.next[char]
- node.term = term
-
- def match(self, query):
- results = []
- for i in range(len(query)):
- node = self.root
- for j in range(i, len(query)):
- node = node.next.get(query[j])
- if not node:
- break
- if node.term:
- results.append((i, len(node.term)))
- return results
-
- def __repr__(self):
- output = []
- def _debug(output, char, node, depth=0):
- output.append('%s[%s][%s]' % (' '*depth, char, node.term))
- for (key, n) in node.next.items():
- _debug(output, key, n, depth+1)
- _debug(output, '', self.root)
- return '\n'.join(output)
-
-class dic_ont():
-
- def __init__(self, ont_files):
-
- dicin=open(ont_files['dic_file'],'r',encoding='utf-8')
- win_size=50000
- Dic=[]
- print("loading dict!")
- for line in dicin:
- line=line.strip()
- if len(line.split())<=win_size:
- words=line.split()
- for i in range(len(words)):
- if len(words[i])>3 and (not words[i].isupper()):
- words[i]=words[i].lower()
- line=' '.join(words[0:])
- Dic.append(line.strip())
- print("Dic_len:",len(Dic))
- dicin.close()
-
- self.dic_trie = Trie(Dic)
- print("load dic done!")
-
- #load word hpo mapping
- fin_map=open(ont_files['word_hpo_file'],'r',encoding='utf-8')
- self.word_hpo=json.load(fin_map)
- fin_map.close()
-
- #load hpo word mapping
- fin_map=open(ont_files['hpo_word_file'],'r',encoding='utf-8')
- self.hpo_word=json.load(fin_map)
- fin_map.close()
-
- def matching(self, source):
-
- fin=io.StringIO(source)
- fout=io.StringIO()
-
- sent_list=[]
- sent = []
- sent_ori_list=[]
- sent_ori=[]
-
- for line in fin:
- line=line.strip()
- if line=="":
- sent_list.append(sent)
- sent_ori_list.append(sent_ori)
- sent=[]
- sent_ori=[]
- else:
- words=line.split('\t')
- words[1]=words[1].lower()
- sent.append(words[1]) # word lemma
- sent_ori.append(words[0])
- sent=[]
- fin.close()
-
- for k in range(len(sent_list)):
- sent = sent_list[k]
- sentence=' '.join(sent[0:])+" "
- sentence_ori=' '.join(sent_ori_list[k])
-# print('sentence:',sentence)
- result=self.dic_trie.match(sentence)
-# print('result:',result)
- new_result=[]
- for i in range(0,len(result)):
- if result[i][0]==0 and sentence[result[i][1]]==" ":
- new_result.append([result[i][0],result[i][0]+result[i][1]])
- elif result[i][0]>0 and sentence[result[i][0]-1]==' ' and sentence[result[i][0]+result[i][1]]==' ':
- new_result.append([result[i][0],result[i][0]+result[i][1]])
-# print('new result:',new_result)
-
-
-
- if len(new_result)==0:
- fout.write(sentence_ori+'\n\n')
-
- else:
- fout.write(sentence_ori+'\n')
- for ele in new_result:
- entity_text=sentence[ele[0]:ele[1]]
- if entity_text in self.word_hpo.keys():
- hpoid=self.word_hpo[entity_text]
- else:
- print('no id:', entity_text)
- hpoid=['None']
- if ele[0]==0:
- sid="0"
- else:
- temp_sent=sentence[0:ele[0]]
- sid=str(len(temp_sent.rstrip().split(' ')))
- temp_sent=sentence[0:ele[1]]
- eid=str(len(temp_sent.rstrip().split(' '))-1)
-# print(sid,eid,entity_text,hpoid[0])
- fout.write(sid+'\t'+eid+'\t'+entity_text+'\t'+";".join(hpoid)+'\t1.00\n')
- fout.write('\n')
-
- return fout.getvalue()
-
-
-if __name__=='__main__':
-
- ontfiles={'dic_file':'//panfs/pan1/bionlp/lulab/luoling/HPO_project/bioTag/dict/hpo_noabb_lemma.dic',
- 'word_hpo_file':'//panfs/pan1/bionlp/lulab/luoling/HPO_project/bioTag/dict/word_hpoid_map.json',
- 'hpo_word_file':'//panfs/pan1/bionlp/lulab/luoling/HPO_project/bioTag/dict/hpoid_word_map.json'}
- biotag_dic=dic_ont(ontfiles)
- text='Nevoid basal cell carcinoma syndrome (NBCCS) is a hereditary condition transmitted as an autosomal dominant trait with complete penetrance and variable expressivity. The syndrome is characterised by numerous basal cell carcinomas (BCCs), odontogenic keratocysts of the jaws, palmar and/or plantar pits, skeletal abnormalities and intracranial calcifications. In this paper, the clinical features of 37 Italian patients are reviewed. Jaw cysts and calcification of falx cerebri were the most frequently observed anomalies, followed by BCCs and palmar/plantar pits. Similar to the case of African Americans, the relatively low frequency of BCCs in the Italian population is probably due to protective skin pigmentation. A future search based on mutation screening might establish a possible genotype phenotype correlation in Italian patients.'
- ssplit_token=ssplit_token_pos_lemma(text)
-# print(ssplit_token)
- dic_result=biotag_dic.matching(ssplit_token)
- print(dic_result)
-
-
\ No newline at end of file
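A short usage sketch of the Trie above, assuming it is importable from this module: match() returns (start_offset, match_length) pairs over the raw query string, including nested hits, which dic_ont.matching() then filters down to whitespace-aligned spans. The trailing space in the query mirrors how matching() appends " " to each sentence before lookup.

trie = Trie(["gait", "abnormal gait"])
print(trie.match("an abnormal gait "))   # [(3, 13), (12, 4)]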
diff --git a/spaces/litagin/rvc_okiba_TTS/app.py b/spaces/litagin/rvc_okiba_TTS/app.py
deleted file mode 100644
index 9806af7ed245d4aef0a639bafaea2cef031a05d9..0000000000000000000000000000000000000000
--- a/spaces/litagin/rvc_okiba_TTS/app.py
+++ /dev/null
@@ -1,368 +0,0 @@
-import asyncio
-import datetime
-import logging
-import os
-import time
-import traceback
-
-import edge_tts
-import gradio as gr
-import librosa
-import torch
-from fairseq import checkpoint_utils
-
-from config import Config
-from lib.infer_pack.models import (
- SynthesizerTrnMs256NSFsid,
- SynthesizerTrnMs256NSFsid_nono,
- SynthesizerTrnMs768NSFsid,
- SynthesizerTrnMs768NSFsid_nono,
-)
-from rmvpe import RMVPE
-from vc_infer_pipeline import VC
-
-logging.getLogger("fairseq").setLevel(logging.WARNING)
-logging.getLogger("numba").setLevel(logging.WARNING)
-logging.getLogger("markdown_it").setLevel(logging.WARNING)
-logging.getLogger("urllib3").setLevel(logging.WARNING)
-logging.getLogger("matplotlib").setLevel(logging.WARNING)
-
-limitation = os.getenv("SYSTEM") == "spaces"
-
-config = Config()
-
-# Edge TTS
-edge_output_filename = "edge_output.mp3"
-tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices())
-tts_voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list]
-
-# RVC models
-model_root = "weights"
-models = [d for d in os.listdir(model_root) if os.path.isdir(f"{model_root}/{d}")]
-models.sort()
-
-
-def model_data(model_name):
- # global n_spk, tgt_sr, net_g, vc, cpt, version, index_file
- pth_path = [
- f"{model_root}/{model_name}/{f}"
- for f in os.listdir(f"{model_root}/{model_name}")
- if f.endswith(".pth")
- ][0]
- print(f"Loading {pth_path}")
- cpt = torch.load(pth_path, map_location="cpu")
- tgt_sr = cpt["config"][-1]
- cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk
- if_f0 = cpt.get("f0", 1)
- version = cpt.get("version", "v1")
- if version == "v1":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half)
- else:
- net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
- elif version == "v2":
- if if_f0 == 1:
- net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half)
- else:
- net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"])
- else:
- raise ValueError("Unknown version")
- del net_g.enc_q
- net_g.load_state_dict(cpt["weight"], strict=False)
- print("Model loaded")
- net_g.eval().to(config.device)
- if config.is_half:
- net_g = net_g.half()
- else:
- net_g = net_g.float()
- vc = VC(tgt_sr, config)
- # n_spk = cpt["config"][-3]
-
- index_files = [
- f"{model_root}/{model_name}/{f}"
- for f in os.listdir(f"{model_root}/{model_name}")
- if f.endswith(".index")
- ]
- if len(index_files) == 0:
- print("No index file found")
- index_file = ""
- else:
- index_file = index_files[0]
- print(f"Index file found: {index_file}")
-
- return tgt_sr, net_g, vc, version, index_file, if_f0
-
-
-def load_hubert():
- # global hubert_model
- models, _, _ = checkpoint_utils.load_model_ensemble_and_task(
- ["hubert_base.pt"],
- suffix="",
- )
- hubert_model = models[0]
- hubert_model = hubert_model.to(config.device)
- if config.is_half:
- hubert_model = hubert_model.half()
- else:
- hubert_model = hubert_model.float()
- return hubert_model.eval()
-
-
-def tts(
- model_name,
- speed,
- tts_text,
- tts_voice,
- f0_up_key,
- f0_method,
- index_rate,
- protect,
- filter_radius=3,
- resample_sr=0,
- rms_mix_rate=0.25,
-):
- print("------------------")
- print(datetime.datetime.now())
- print("tts_text:")
- print(tts_text)
- print(f"tts_voice: {tts_voice}, speed: {speed}")
- print(f"Model name: {model_name}")
- print(f"F0: {f0_method}, Key: {f0_up_key}, Index: {index_rate}, Protect: {protect}")
- try:
- if limitation and len(tts_text) > 280:
- print("Error: Text too long")
- return (
- f"Text characters should be at most 280 in this huggingface space, but got {len(tts_text)} characters.",
- None,
- None,
- )
- t0 = time.time()
- if speed >= 0:
- speed_str = f"+{speed}%"
- else:
- speed_str = f"{speed}%"
- asyncio.run(
- edge_tts.Communicate(
- tts_text, "-".join(tts_voice.split("-")[:-1]), rate=speed_str
- ).save(edge_output_filename)
- )
- t1 = time.time()
- edge_time = t1 - t0
- audio, sr = librosa.load(edge_output_filename, sr=16000, mono=True)
- duration = len(audio) / sr
- print(f"Audio duration: {duration}s")
- if limitation and duration >= 20:
- print("Error: Audio too long")
- return (
- f"Audio should be less than 20 seconds in this huggingface space, but got {duration}s.",
- edge_output_filename,
- None,
- )
- f0_up_key = int(f0_up_key)
-
- tgt_sr, net_g, vc, version, index_file, if_f0 = model_data(model_name)
- if f0_method == "rmvpe":
- vc.model_rmvpe = rmvpe_model
- times = [0, 0, 0]
- audio_opt = vc.pipeline(
- hubert_model,
- net_g,
- 0,
- audio,
- edge_output_filename,
- times,
- f0_up_key,
- f0_method,
- index_file,
- # file_big_npy,
- index_rate,
- if_f0,
- filter_radius,
- tgt_sr,
- resample_sr,
- rms_mix_rate,
- version,
- protect,
- None,
- )
- if tgt_sr != resample_sr >= 16000:
- tgt_sr = resample_sr
- info = f"Success. Time: edge-tts: {edge_time}s, npy: {times[0]}s, f0: {times[1]}s, infer: {times[2]}s"
- print(info)
- return (
- info,
- edge_output_filename,
- (tgt_sr, audio_opt),
- )
- except EOFError:
- info = (
- "It seems that the edge-tts output is not valid. "
- "This may occur when the input text and the speaker do not match. "
- "For example, maybe you entered Japanese (without alphabets) text but chose non-Japanese speaker?"
- )
- print(info)
- return info, None, None
-    except Exception:
- info = traceback.format_exc()
- print(info)
- return info, None, None
-
-
-print("Loading hubert model...")
-hubert_model = load_hubert()
-print("Hubert model loaded.")
-
-print("Loading rmvpe model...")
-rmvpe_model = RMVPE("rmvpe.pt", config.is_half, config.device)
-print("rmvpe model loaded.")
-
-initial_md = """
-# RVC text-to-speech demo
-
-This is a text-to-speech demo of the RVC moe models from [rvc_okiba](https://huggingface.co/litagin/rvc_okiba), using [edge-tts](https://github.com/rany2/edge-tts).
-
-Input text ➡[(edge-tts)](https://github.com/rany2/edge-tts)➡ Speech mp3 file ➡[(RVC)](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI)➡ Final output
-
-This runs on the 🤗 server's cpu, so it may be slow.
-
-Although the models are trained on Japanese voices and intended for Japanese text, they can also be used with other languages with the corresponding edge-tts speaker (but possibly with a Japanese accent).
-
-Input characters are limited to 280 characters, and the speech audio is limited to 20 seconds in this 🤗 space.
-
-[Visit this GitHub repo](https://github.com/litagin02/rvc-tts-webui) for running locally with your models and GPU!
-"""
-
-app = gr.Blocks()
-with app:
- gr.Markdown(initial_md)
- with gr.Row():
- with gr.Column():
- model_name = gr.Dropdown(
- label="Model (all models except man-_ are girl models)",
- choices=models,
- value=models[0],
- )
- f0_key_up = gr.Number(
- label="Tune (+12 = 1 octave up from edge-tts, the best value depends on the models and speakers)",
- value=2,
- )
- with gr.Column():
- f0_method = gr.Radio(
- label="Pitch extraction method (pm: very fast, low quality, rmvpe: a little slow, high quality)",
- choices=["pm", "rmvpe"], # harvest and crepe is too slow
- value="rmvpe",
- interactive=True,
- )
- index_rate = gr.Slider(
- minimum=0,
- maximum=1,
- label="Index rate",
- value=1,
- interactive=True,
- )
- protect0 = gr.Slider(
- minimum=0,
- maximum=0.5,
- label="Protect",
- value=0.33,
- step=0.01,
- interactive=True,
- )
- with gr.Row():
- with gr.Column():
- tts_voice = gr.Dropdown(
- label="Edge-tts speaker (format: language-Country-Name-Gender), make sure the gender matches the model",
- choices=tts_voices,
- allow_custom_value=False,
- value="ja-JP-NanamiNeural-Female",
- )
- speed = gr.Slider(
- minimum=-100,
- maximum=100,
- label="Speech speed (%)",
- value=0,
- step=10,
- interactive=True,
- )
- tts_text = gr.Textbox(label="Input Text", value="これは日本語テキストから音声への変換デモです。")
- with gr.Column():
- but0 = gr.Button("Convert", variant="primary")
- info_text = gr.Textbox(label="Output info")
- with gr.Column():
- edge_tts_output = gr.Audio(label="Edge Voice", type="filepath")
- tts_output = gr.Audio(label="Result")
- but0.click(
- tts,
- [
- model_name,
- speed,
- tts_text,
- tts_voice,
- f0_key_up,
- f0_method,
- index_rate,
- protect0,
- ],
- [info_text, edge_tts_output, tts_output],
- )
- with gr.Row():
- examples = gr.Examples(
- examples_per_page=100,
- examples=[
- ["これは日本語テキストから音声への変換デモです。", "ja-JP-NanamiNeural-Female"],
- [
- "This is an English text to speech conversation demo.",
- "en-US-AriaNeural-Female",
- ],
- ["这是一个中文文本到语音的转换演示。", "zh-CN-XiaoxiaoNeural-Female"],
- ["한국어 텍스트에서 음성으로 변환하는 데모입니다.", "ko-KR-SunHiNeural-Female"],
- [
- "Il s'agit d'une démo de conversion du texte français à la parole.",
- "fr-FR-DeniseNeural-Female",
- ],
- [
- "Dies ist eine Demo zur Umwandlung von Deutsch in Sprache.",
- "de-DE-AmalaNeural-Female",
- ],
- [
- "Tämä on suomenkielinen tekstistä puheeksi -esittely.",
- "fi-FI-NooraNeural-Female",
- ],
- [
- "Это демонстрационный пример преобразования русского текста в речь.",
- "ru-RU-SvetlanaNeural-Female",
- ],
- [
- "Αυτή είναι μια επίδειξη μετατροπής ελληνικού κειμένου σε ομιλία.",
- "el-GR-AthinaNeural-Female",
- ],
- [
- "Esta es una demostración de conversión de texto a voz en español.",
- "es-ES-ElviraNeural-Female",
- ],
- [
- "Questa è una dimostrazione di sintesi vocale in italiano.",
- "it-IT-ElsaNeural-Female",
- ],
- [
- "Esta é uma demonstração de conversão de texto em fala em português.",
- "pt-PT-RaquelNeural-Female",
- ],
- [
- "Це демонстрація тексту до мовлення українською мовою.",
- "uk-UA-PolinaNeural-Female",
- ],
- [
- "هذا عرض توضيحي عربي لتحويل النص إلى كلام.",
- "ar-EG-SalmaNeural-Female",
- ],
- [
- "இது தமிழ் உரையிலிருந்து பேச்சு மாற்ற டெமோ.",
- "ta-IN-PallaviNeural-Female",
- ],
- ],
- inputs=[tts_text, tts_voice],
- )
-
-
-app.launch()
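One small detail from tts() above worth isolating: the rate string built for edge_tts.Communicate always carries an explicit sign, which is why non-negative speeds are prefixed with "+". A minimal restatement with a hypothetical helper name:

def speed_to_rate(speed: int) -> str:
    return f"+{speed}%" if speed >= 0 else f"{speed}%"

assert speed_to_rate(0) == "+0%"
assert speed_to_rate(25) == "+25%"
assert speed_to_rate(-25) == "-25%"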
diff --git a/spaces/ltg/chat-nort5/modeling_nort5.py b/spaces/ltg/chat-nort5/modeling_nort5.py
deleted file mode 100644
index 99506e6d09d2753e45ff51797e378d260ff15bd0..0000000000000000000000000000000000000000
--- a/spaces/ltg/chat-nort5/modeling_nort5.py
+++ /dev/null
@@ -1,721 +0,0 @@
-from __future__ import absolute_import, division, print_function, unicode_literals
-
-import math
-from typing import List, Optional, Tuple, Union
-
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from transformers.pytorch_utils import softmax_backward_data
-from torch.utils import checkpoint
-
-from configuration_nort5 import NorT5Config
-from transformers.modeling_utils import PreTrainedModel
-from transformers.activations import gelu_new
-from transformers.modeling_outputs import (
- Seq2SeqModelOutput, Seq2SeqLMOutput, BaseModelOutput, BaseModelOutputWithPastAndCrossAttentions
-)
-
-
-class Encoder(nn.Module):
- def __init__(self, config, activation_checkpointing=False):
- super().__init__()
- self.main_input_name = "input_ids"
-
- self.relative_embedding = RelativeEmbedding(config)
- self.layers = nn.ModuleList([EncoderLayer(config) for _ in range(config.num_hidden_layers)])
-
- for i, layer in enumerate(self.layers):
- layer.mlp.mlp[1].weight.data *= math.sqrt(1.0 / (2.0 * (1 + i)))
- layer.mlp.mlp[-2].weight.data *= math.sqrt(1.0 / (2.0 * (1 + i)))
-
- self.activation_checkpointing = activation_checkpointing
-
- def forward(self, hidden_states, attention_mask):
- relative_embedding = self.relative_embedding()
- hidden_states, attention_probs = [hidden_states], []
-
- for layer in self.layers:
- if self.activation_checkpointing:
- hidden_state, attention_p = checkpoint.checkpoint(layer, hidden_states[-1], attention_mask, relative_embedding)
- else:
- hidden_state, attention_p = layer(hidden_states[-1], attention_mask, relative_embedding)
-
- hidden_states.append(hidden_state)
- attention_probs.append(attention_p)
-
- return hidden_states, attention_probs
-
-
-class Decoder(nn.Module):
- def __init__(self, config, activation_checkpointing=False):
- super().__init__()
- self.self_relative_embedding = RelativeEmbedding(config)
- self.cross_relative_embedding = RelativeEmbedding(config)
- self.layers = nn.ModuleList([DecoderLayer(config) for _ in range(config.num_hidden_layers)])
-
- for i, layer in enumerate(self.layers):
- layer.mlp.mlp[1].weight.data *= math.sqrt(1.0 / (2.0 * (1 + i)))
- layer.mlp.mlp[-2].weight.data *= math.sqrt(1.0 / (2.0 * (1 + i)))
-
- self.activation_checkpointing = activation_checkpointing
-
- def forward(self, x, encoder_output, encoder_padding_mask, past_key_values=None):
- self_relative_embedding = self.self_relative_embedding()
- cross_relative_embedding = self.cross_relative_embedding()
-
- if past_key_values is not None:
- autoreg_mask = torch.triu(
- torch.full((x.size(0), x.size(0)), True, device=x.device),
- diagonal=1
- )
- else:
- autoreg_mask = None
-
- # initialize past_key_values with `None` if past does not exist
- if past_key_values is None:
- past_key_values = [None] * len(self.layers)
-
- hidden_states, self_attention_probs, cross_attention_probs, key_value_states = [x], [], [], []
- for layer, past_key_value in zip(self.layers, past_key_values):
- if self.activation_checkpointing:
- hidden_state, self_attention_p, cross_attention_p, key_value_state = checkpoint.checkpoint(layer, hidden_states[-1], autoreg_mask, encoder_output, encoder_padding_mask, self_relative_embedding, cross_relative_embedding, past_key_value=None)
- else:
- hidden_state, self_attention_p, cross_attention_p, key_value_state = layer(hidden_states[-1], autoreg_mask, encoder_output, encoder_padding_mask, self_relative_embedding, cross_relative_embedding, past_key_value=past_key_value)
-
- hidden_states.append(hidden_state)
- self_attention_probs.append(self_attention_p)
- cross_attention_probs.append(cross_attention_p)
- key_value_states.append(key_value_state)
-
- return hidden_states, self_attention_probs, cross_attention_probs, key_value_states
-
-
-class MaskClassifier(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.nonlinearity = nn.Sequential(
- nn.LayerNorm(config.hidden_size, config.layer_norm_eps, elementwise_affine=False),
- nn.Dropout(config.hidden_dropout_prob),
- nn.Linear(config.hidden_size, config.vocab_size)
- )
- self.initialize(config.hidden_size)
-
- def initialize(self, hidden_size):
- std = math.sqrt(2.0 / (5.0 * hidden_size))
- nn.init.trunc_normal_(self.nonlinearity[-1].weight, mean=0.0, std=std, a=-2*std, b=2*std)
- self.nonlinearity[-1].bias.data.zero_()
-
- def forward(self, x):
- x = self.nonlinearity(x)
- return x
-
-
-class EncoderLayer(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.attention = Attention(config, is_cross_attention=False)
- self.mlp = FeedForward(config)
-
- def forward(self, x, padding_mask, relative_embedding):
- attention_output, attention_probs, _ = self.attention(x, x, padding_mask, relative_embedding)
- x = x + attention_output
- x = x + self.mlp(x)
- return x, attention_probs
-
-
-class DecoderLayer(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.self_attention = Attention(config, is_cross_attention=False)
- self.cross_attention = Attention(config, is_cross_attention=True)
- self.mlp = FeedForward(config)
-
- def forward(self, x, autoreg_mask, encoder_output, encoder_padding_mask, self_relative_embedding, cross_relative_embedding, past_key_value=None):
- query_offset = 0
- if past_key_value is not None:
- self_attn_past_key_value = past_key_value[:2]
- cross_attn_past_key_value = past_key_value[2:]
- query_offset = self_attn_past_key_value[0].size(2)
- else:
- self_attn_past_key_value, cross_attn_past_key_value = None, None
-
- x_, self_attention_probs, self_key_value_state = self.self_attention(x, x, autoreg_mask, self_relative_embedding, past_key_value=self_attn_past_key_value, query_offset=query_offset)
- x = x + x_
- x_, cross_attention_probs, cross_key_value_state = self.cross_attention(x, encoder_output, encoder_padding_mask, cross_relative_embedding, past_key_value=cross_attn_past_key_value, query_offset=query_offset)
- x = x + x_
- x = x + self.mlp(x)
-
- return x, self_attention_probs, cross_attention_probs, self_key_value_state + cross_key_value_state
-
-
-class GeGLU(nn.Module):
- def forward(self, x):
- x, gate = x.chunk(2, dim=-1)
- x = x * gelu_new(gate)
- return x
-
-
-class FeedForward(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.mlp = nn.Sequential(
- nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps, elementwise_affine=False),
- nn.Linear(config.hidden_size, 2*config.intermediate_size, bias=False),
- GeGLU(),
- nn.LayerNorm(config.intermediate_size, eps=config.layer_norm_eps, elementwise_affine=False),
- nn.Linear(config.intermediate_size, config.hidden_size, bias=False),
- nn.Dropout(config.hidden_dropout_prob)
- )
- self.initialize(config.hidden_size)
-
- def initialize(self, hidden_size):
- std = math.sqrt(2.0 / (5.0 * hidden_size))
- nn.init.trunc_normal_(self.mlp[1].weight, mean=0.0, std=std, a=-2*std, b=2*std)
- nn.init.trunc_normal_(self.mlp[-2].weight, mean=0.0, std=std, a=-2*std, b=2*std)
-
- def forward(self, x):
- return self.mlp(x)
-
-
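- # Memory-friendly masked softmax: the boolean mask is applied in-place and only the softmax
- # output is saved for backward, which is delegated to the softmax_backward_data helper.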
-class MaskedSoftmax(torch.autograd.Function):
- @staticmethod
- def forward(self, x, mask, dim):
- self.dim = dim
- if mask is not None:
- x.masked_fill_(mask, float('-inf'))
- x = torch.softmax(x, self.dim)
- if mask is not None:
- x.masked_fill_(mask, 0.0)
- self.save_for_backward(x)
- return x
-
- @staticmethod
- def backward(self, grad_output):
- output, = self.saved_tensors
- input_grad = softmax_backward_data(self, grad_output, output, self.dim, output)
- return input_grad, None, None
-
-
-class Attention(nn.Module):
- def __init__(self, config, is_cross_attention=False):
- super().__init__()
-
- self.config = config
- self.is_cross_attention = is_cross_attention
-
- if config.hidden_size % config.num_attention_heads != 0:
- raise ValueError(f"The hidden size {config.hidden_size} is not a multiple of the number of attention heads {config.num_attention_heads}")
-
- self.hidden_size = config.hidden_size
- self.num_heads = config.num_attention_heads
- self.head_size = config.hidden_size // config.num_attention_heads
-
- self.in_proj_q = nn.Linear(config.hidden_size, config.hidden_size, bias=True)
- self.in_proj_k = nn.Linear(config.hidden_size, config.hidden_size, bias=True)
- self.in_proj_v = nn.Linear(config.hidden_size, config.hidden_size, bias=True)
- self.out_proj = nn.Linear(config.hidden_size, config.hidden_size, bias=True)
-
- self.pre_layer_norm = nn.LayerNorm(config.hidden_size, config.layer_norm_eps, elementwise_affine=False)
- self.post_layer_norm = nn.LayerNorm(config.hidden_size, config.layer_norm_eps, elementwise_affine=True)
-
- position_indices = torch.arange(512, dtype=torch.long).unsqueeze(1) \
- - torch.arange(512, dtype=torch.long).unsqueeze(0)
- position_indices = self.make_log_bucket_position(position_indices, config.position_bucket_size, 512)
- position_indices = config.position_bucket_size - 1 + position_indices
- self.register_buffer("position_indices", position_indices, persistent=True)
-
- self.dropout = nn.Dropout(config.attention_probs_dropout_prob)
- self.scale = 1.0 / math.sqrt(3 * self.head_size)
- self.initialize()
-
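- # Bucket signed relative distances: offsets within +-(bucket_size // 2) keep their exact value,
- # while longer distances are binned logarithmically up to max_position.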
- def make_log_bucket_position(self, relative_pos, bucket_size, max_position):
- sign = torch.sign(relative_pos)
- mid = bucket_size // 2
- abs_pos = torch.where((relative_pos < mid) & (relative_pos > -mid), mid - 1, torch.abs(relative_pos).clamp(max=max_position - 1))
- log_pos = torch.ceil(torch.log(abs_pos / mid) / math.log((max_position-1) / mid) * (mid - 1)).int() + mid
- bucket_pos = torch.where(abs_pos <= mid, relative_pos, log_pos * sign).long()
- return bucket_pos
-
- def initialize(self):
- std = math.sqrt(2.0 / (5.0 * self.hidden_size))
- nn.init.trunc_normal_(self.in_proj_q.weight, mean=0.0, std=std, a=-2*std, b=2*std)
- nn.init.trunc_normal_(self.in_proj_k.weight, mean=0.0, std=std, a=-2*std, b=2*std)
- nn.init.trunc_normal_(self.in_proj_v.weight, mean=0.0, std=std, a=-2*std, b=2*std)
- nn.init.trunc_normal_(self.out_proj.weight, mean=0.0, std=std, a=-2*std, b=2*std)
- self.in_proj_q.bias.data.zero_()
- self.in_proj_k.bias.data.zero_()
- self.in_proj_v.bias.data.zero_()
- self.out_proj.bias.data.zero_()
-
- def forward(self, q, kv, attention_mask, relative_embedding, past_key_value=None, query_offset=0):
- key_len, batch_size, _ = kv.size()
- query_len, _, _ = q.size()
-
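- # Project keys/values from `kv` unless this is cross-attention that can reuse the cached
- # encoder projection passed in through past_key_value.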
- if not self.is_cross_attention or past_key_value is None or past_key_value[0].size(1) != kv.size(0):
- kv = self.pre_layer_norm(kv)
- key = self.in_proj_k(kv) # shape: [T, B, D]
- value = self.in_proj_v(kv) # shape: [T, B, D]
- key = key.reshape(key_len, batch_size * self.num_heads, self.head_size).transpose(0, 1) # shape: [BxH, T, D]
- value = value.view(key_len, batch_size * self.num_heads, self.head_size).transpose(0, 1) # shape: [BxH, T, D]
-
- if past_key_value is not None:
- if not self.is_cross_attention:
- key = torch.cat([past_key_value[0].flatten(0, 1), key], dim=1)
- value = torch.cat([past_key_value[1].flatten(0, 1), value], dim=1)
- key_len = key.size(1)
- elif past_key_value[0].size(1) == kv.size(0):
- key = past_key_value[0].flatten(0, 1)
- value = past_key_value[1].flatten(0, 1)
-
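- # Grow the precomputed 512x512 relative-position index table on the fly whenever the
- # current query/key span exceeds it.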
- if self.position_indices.size(0) < query_offset + query_len + key_len:
- position_indices = torch.arange(query_offset + query_len + key_len, dtype=torch.long).unsqueeze(1) \
- - torch.arange(query_offset + query_len + key_len, dtype=torch.long).unsqueeze(0)
- position_indices = self.make_log_bucket_position(position_indices, self.config.position_bucket_size, 512)
- position_indices = self.config.position_bucket_size - 1 + position_indices
- self.register_buffer("position_indices", position_indices.to(q.device), persistent=True)
-
- q = self.pre_layer_norm(q)
- query = self.in_proj_q(q) # shape: [T, B, D]
- query = query.reshape(query_len, batch_size * self.num_heads, self.head_size).transpose(0, 1)
-
- attention_scores = torch.bmm(query, key.transpose(1, 2) * self.scale)
-
- query_pos = self.in_proj_q(self.dropout(relative_embedding)) # shape: [2T-1, D]
- query_pos = query_pos.view(-1, self.num_heads, self.head_size) # shape: [2T-1, H, D]
- key_pos = self.in_proj_k(self.dropout(relative_embedding)) # shape: [2T-1, D]
- key_pos = key_pos.view(-1, self.num_heads, self.head_size) # shape: [2T-1, H, D]
-
- query_ = query.view(batch_size, self.num_heads, query_len, self.head_size)
- key_ = key.view(batch_size, self.num_heads, key_len, self.head_size)
-
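- # Disentangled relative attention: content-to-position (queries against relative key embeddings)
- # and position-to-content (relative query embeddings against keys), gathered at the bucketed
- # relative distances computed below.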
- attention_c_p = torch.einsum("bhqd,khd->bhqk", query_, key_pos.squeeze(1) * self.scale)
- attention_p_c = torch.einsum("bhkd,qhd->bhqk", key_ * self.scale, query_pos.squeeze(1))
-
- if self.is_cross_attention:
- offset = torch.arange(query_offset, query_offset+query_len, device=key.device).unsqueeze(0) + (~attention_mask).flatten(1, 3).long().sum(1, keepdim=True) # shape: [B, Q]
- position_indices = self.position_indices[:, :key_len].expand(batch_size, -1, -1).gather(dim=1, index=offset.unsqueeze(-1).expand(-1, -1, key_len)).unsqueeze(1).expand(-1, self.num_heads, -1, -1)
- else:
- position_indices = self.position_indices[query_offset:query_len+query_offset, :key_len].expand(batch_size, self.num_heads, -1, -1)
-
- attention_c_p = attention_c_p.gather(3, position_indices)
- attention_p_c = attention_p_c.gather(2, position_indices)
-
- attention_scores = attention_scores.view(batch_size, self.num_heads, query_len, key_len)
- attention_scores.add_(attention_c_p)
- attention_scores.add_(attention_p_c)
-
- attention_probs = MaskedSoftmax.apply(attention_scores, attention_mask, -1)
-
- attention_probs = self.dropout(attention_probs)
- context = torch.bmm(attention_probs.flatten(0, 1), value) # shape: [B*H, Q, D]
- context = context.transpose(0, 1).reshape(context.size(1), -1, self.hidden_size) # shape: [Q, B, H*D]
- context = self.out_proj(context)
- context = self.post_layer_norm(context)
- context = self.dropout(context)
-
- key = key.detach().unflatten(0, (-1, self.num_heads))
- value = value.detach().unflatten(0, (-1, self.num_heads))
-
- return context, attention_probs.detach(), (key, value)
-
-
-class WordEmbedding(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.hidden_size = config.hidden_size
-
- self.word_embedding = nn.Embedding(config.vocab_size, config.hidden_size)
- self.word_layer_norm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps, elementwise_affine=False)
- self.dropout = nn.Dropout(config.hidden_dropout_prob)
-
- self.initialize()
-
- def initialize(self):
- std = math.sqrt(2.0 / (5.0 * self.hidden_size))
- nn.init.trunc_normal_(self.word_embedding.weight, mean=0.0, std=std, a=-2*std, b=2*std)
-
- def forward(self, input_ids):
- return self.dropout(self.word_layer_norm(self.word_embedding(input_ids)))
-
-
-class RelativeEmbedding(nn.Module):
- def __init__(self, config):
- super().__init__()
- self.relative_embedding = nn.Parameter(torch.empty(2 * config.position_bucket_size - 1, config.hidden_size))
- self.relative_layer_norm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
-
- self.initialize(config.hidden_size)
-
- def initialize(self, hidden_size):
- std = math.sqrt(2.0 / (5.0 * hidden_size))
- nn.init.trunc_normal_(self.relative_embedding, mean=0.0, std=std, a=-2*std, b=2*std)
-
- def forward(self):
- return self.relative_layer_norm(self.relative_embedding)
-
-
-#
-# HuggingFace wrappers
-#
-
-class NorT5PreTrainedModel(PreTrainedModel):
- config_class = NorT5Config
- base_model_prefix = "norT5"
- supports_gradient_checkpointing = True
-
- def _set_gradient_checkpointing(self, module, value=False):
- if isinstance(module, Encoder):
- module.activation_checkpointing = value
-
- def _init_weights(self, module):
- pass # everything is already initialized
-
-
-class NorT5Model(NorT5PreTrainedModel):
- def __init__(self, config, add_lm_layer=False, add_decoder=True):
- super().__init__(config)
- self.config = config
-
- self.cls_token_id = config.cls_token_id
- self.sep_token_id = config.sep_token_id
- self.bos_token_id = config.bos_token_id
- self.eos_token_id = config.eos_token_id
- self.pad_token_id = config.pad_token_id
-
- self.embedding = WordEmbedding(config)
- self.encoder = Encoder(config, activation_checkpointing=False)
- self.decoder = Decoder(config, activation_checkpointing=False) if add_decoder else None
- self.classifier = MaskClassifier(config) if add_lm_layer else None
-
- def get_input_embeddings(self):
- return self.embedding.word_embedding
-
- def set_input_embeddings(self, value):
- self.embedding.word_embedding = value
-
- def get_encoder(self):
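- # Generation utilities expect get_encoder() to return something callable like a module;
- # this lightweight wrapper simply forwards to get_encoder_output.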
- class EncoderWrapper:
- def __call__(cls, *args, **kwargs):
- return cls.forward(*args, **kwargs)
-
- def forward(
- cls,
- input_ids: Optional[torch.Tensor] = None,
- attention_mask: Optional[torch.Tensor] = None,
- output_hidden_states: Optional[bool] = None,
- output_attentions: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ):
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- return self.get_encoder_output(
- input_ids, attention_mask, output_hidden_states, output_attentions, return_dict=return_dict
- )
- return EncoderWrapper()
-
- def get_decoder(self):
- return self.get_decoder_output
-
- def set_decoder_special_tokens(self, target_id):
- target_id.masked_fill_(target_id == self.cls_token_id, self.bos_token_id)
- target_id.masked_fill_(target_id == self.sep_token_id, self.eos_token_id)
- return target_id
-
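- # T5-style teacher forcing: prepend BOS, drop the last label token, and replace any -100
- # ignore markers with the pad id.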
- def _shift_right(self, input_ids):
- shifted_input_ids = input_ids.new_zeros(input_ids.shape)
- shifted_input_ids[..., 1:] = input_ids[..., :-1].clone()
- shifted_input_ids[..., 0] = self.bos_token_id
- shifted_input_ids.masked_fill_(shifted_input_ids == -100, self.pad_token_id)
-
- return shifted_input_ids
-
- def get_encoder_output(
- self,
- input_ids: torch.Tensor = None,
- attention_mask: Optional[torch.Tensor] = None,
- output_hidden_states: Optional[bool] = None,
- output_attentions: Optional[bool] = None,
- return_dict = False
- ):
- if input_ids is not None:
- input_shape = input_ids.size()
- else:
- raise ValueError("You have to specify input_ids")
-
- batch_size, seq_length = input_shape
- device = input_ids.device
-
- if attention_mask is None:
- attention_mask = torch.zeros(batch_size, seq_length, dtype=torch.bool, device=device)
- else:
- attention_mask = ~attention_mask.bool()
- attention_mask = attention_mask.unsqueeze(1).unsqueeze(2)
-
- static_embeddings = self.embedding(input_ids.t())
- contextualized_embeddings, attention_probs = self.encoder(static_embeddings, attention_mask)
- contextualized_embeddings = [e.transpose(0, 1) for e in contextualized_embeddings]
- last_layer = contextualized_embeddings[-1]
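- # Expose hidden states as per-layer increments (layer_i - layer_{i-1}); a prefix sum of this
- # list recovers the corresponding layer's output.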
- contextualized_embeddings = [contextualized_embeddings[0]] + [
- contextualized_embeddings[i] - contextualized_embeddings[i - 1]
- for i in range(1, len(contextualized_embeddings))
- ]
-
- if not return_dict:
- return (
- last_layer,
- *([contextualized_embeddings] if output_hidden_states else []),
- *([attention_probs] if output_attentions else [])
- )
-
- return BaseModelOutput(
- last_hidden_state=last_layer,
- hidden_states=contextualized_embeddings if output_hidden_states else None,
- attentions=attention_probs if output_attentions else None
- )
-
- def get_decoder_output(
- self,
- target_ids: torch.Tensor = None,
- encoder_output: torch.Tensor = None,
- attention_mask: Optional[torch.Tensor] = None,
- past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None,
- use_cache: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- output_attentions: Optional[bool] = None,
- return_dict = False
- ):
- batch_size, seq_length, _ = encoder_output.shape
- device = target_ids.device
-
- if attention_mask is None:
- attention_mask = torch.zeros(batch_size, seq_length, dtype=torch.bool, device=device)
- else:
- attention_mask = ~attention_mask.bool()
- attention_mask = attention_mask.unsqueeze(1).unsqueeze(2)
-
- hidden_states, self_attention_p, cross_attention_p, key_value_states = self.decoder(
- self.embedding(target_ids.t()),
- encoder_output.transpose(0, 1),
- attention_mask,
- past_key_values
- )
-
- hidden_states = [e.transpose(0, 1) for e in hidden_states]
- last_layer = hidden_states[-1]
- hidden_states = [hidden_states[0]] + [
- hidden_states[i] - hidden_states[i - 1]
- for i in range(1, len(hidden_states))
- ]
-
- if not return_dict:
- return (
- last_layer,
- *([key_value_states] if use_cache else []),
- *([hidden_states] if output_hidden_states else []),
- *([self_attention_p] if output_attentions else []),
- *([cross_attention_p] if output_attentions else []),
- )
-
- return BaseModelOutputWithPastAndCrossAttentions(
- last_hidden_state=last_layer,
- past_key_values=key_value_states if use_cache else None,
- hidden_states=hidden_states if output_hidden_states else None,
- attentions=self_attention_p if output_attentions else None,
- cross_attentions=cross_attention_p if output_attentions else None
- )
-
-
- def forward(
- self,
- input_ids: Optional[torch.LongTensor] = None,
- attention_mask: Optional[torch.FloatTensor] = None,
- decoder_input_ids: Optional[torch.LongTensor] = None,
- decoder_attention_mask: Optional[torch.BoolTensor] = None,
- encoder_outputs: Optional[Tuple[Tuple[torch.FloatTensor]]] = None,
- past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None,
- use_cache: Optional[bool] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None
- ):
-
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- decoder_input_ids = self.set_decoder_special_tokens(decoder_input_ids)
-
- if encoder_outputs is None:
- encoder_outputs = self.get_encoder_output(
- input_ids, attention_mask, output_hidden_states, output_attentions, return_dict
- )
- elif return_dict and not isinstance(encoder_outputs, BaseModelOutput):
- encoder_outputs = BaseModelOutput(
- last_hidden_state=encoder_outputs[0],
- hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None,
- attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None,
- )
-
- decoder_outputs = self.get_decoder_output(
- decoder_input_ids, encoder_outputs[0], attention_mask, past_key_values, use_cache, output_hidden_states, output_attentions, return_dict
- )
-
- if not return_dict:
- return decoder_outputs + encoder_outputs
-
- return Seq2SeqModelOutput(
- last_hidden_state=decoder_outputs.last_hidden_state,
- past_key_values=decoder_outputs.past_key_values,
- decoder_hidden_states=decoder_outputs.hidden_states,
- decoder_attentions=decoder_outputs.attentions,
- cross_attentions=decoder_outputs.cross_attentions,
- encoder_last_hidden_state=encoder_outputs.last_hidden_state,
- encoder_hidden_states=encoder_outputs.hidden_states,
- encoder_attentions=encoder_outputs.attentions,
- )
-
-
-class NorT5ForConditionalGeneration(NorT5Model):
-
- def __init__(self, config):
- super().__init__(config, add_lm_layer=True)
-
- def forward(
- self,
- input_ids: Optional[torch.LongTensor] = None,
- attention_mask: Optional[torch.FloatTensor] = None,
- decoder_input_ids: Optional[torch.LongTensor] = None,
- decoder_attention_mask: Optional[torch.BoolTensor] = None,
- head_mask: Optional[torch.FloatTensor] = None,
- decoder_head_mask: Optional[torch.FloatTensor] = None,
- cross_attn_head_mask: Optional[torch.Tensor] = None,
- encoder_outputs: Optional[Tuple[Tuple[torch.Tensor]]] = None,
- past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
- inputs_embeds: Optional[torch.FloatTensor] = None,
- decoder_inputs_embeds: Optional[torch.FloatTensor] = None,
- labels: Optional[torch.LongTensor] = None,
- use_cache: Optional[bool] = None,
- output_attentions: Optional[bool] = None,
- output_hidden_states: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ):
- use_cache = use_cache if use_cache is not None else self.config.use_cache
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- if encoder_outputs is None:
- encoder_outputs = self.get_encoder_output(
- input_ids, attention_mask, output_hidden_states, output_attentions, return_dict
- )
- elif return_dict and not isinstance(encoder_outputs, BaseModelOutput):
- encoder_outputs = BaseModelOutput(
- last_hidden_state=encoder_outputs[0],
- hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None,
- attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None,
- )
-
- if labels is not None:
- labels = self.set_decoder_special_tokens(labels)
-
- if labels is not None and decoder_input_ids is None and decoder_inputs_embeds is None:
- decoder_input_ids = self._shift_right(labels)
- elif decoder_input_ids is not None:
- decoder_input_ids = self.set_decoder_special_tokens(decoder_input_ids)
-
- decoder_outputs = self.get_decoder_output(
- decoder_input_ids, encoder_outputs[0], attention_mask, past_key_values, use_cache, output_hidden_states, output_attentions, return_dict
- )
- lm_logits = F.log_softmax(self.classifier(decoder_outputs[0]), dim=-1)
-
- loss = None
- if labels is not None:
- labels.masked_fill_(labels == self.pad_token_id, -100)
- loss_fct = nn.CrossEntropyLoss(ignore_index=-100)
- loss = loss_fct(lm_logits.flatten(0, 1), labels.flatten())
-
- if not return_dict:
- output = (lm_logits,) + decoder_outputs[1:] + encoder_outputs
- return ((loss,) + output) if loss is not None else output
-
- return Seq2SeqLMOutput(
- loss=loss,
- logits=lm_logits,
- past_key_values=decoder_outputs.past_key_values,
- decoder_hidden_states=decoder_outputs.hidden_states,
- decoder_attentions=decoder_outputs.attentions,
- cross_attentions=decoder_outputs.cross_attentions,
- encoder_last_hidden_state=encoder_outputs.last_hidden_state,
- encoder_hidden_states=encoder_outputs.hidden_states,
- encoder_attentions=encoder_outputs.attentions,
- )
-
- def prepare_inputs_for_generation(
- self,
- input_ids,
- past_key_values=None,
- attention_mask=None,
- head_mask=None,
- decoder_head_mask=None,
- cross_attn_head_mask=None,
- use_cache=None,
- encoder_outputs=None,
- **kwargs,
- ):
- if past_key_values is not None:
- input_ids = input_ids[:, -1:]
-
- return {
- "decoder_input_ids": input_ids,
- "past_key_values": past_key_values,
- "encoder_outputs": encoder_outputs,
- "attention_mask": attention_mask,
- "head_mask": head_mask,
- "decoder_head_mask": decoder_head_mask,
- "cross_attn_head_mask": cross_attn_head_mask,
- "use_cache": use_cache,
- }
-
- def prepare_decoder_input_ids_from_labels(self, labels: torch.Tensor):
- return self._shift_right(labels)
-
- def _reorder_cache(self, past_key_values, beam_idx):
- # if decoder past is not included in output
- # speedy decoding is disabled and no need to reorder
- if past_key_values is None:
- print("You might want to consider setting `use_cache=True` to speed up decoding")
- return past_key_values
-
- reordered_decoder_past = ()
- for layer_past_states in past_key_values:
- # get the correct batch idx from layer past batch dim
- # batch dim of `past` is at 2nd position
- reordered_layer_past_states = ()
- for layer_past_state in layer_past_states:
- # need to set correct `past` for each of the four key / value states
-# layer_past_state = layer_past_state.unflatten(0, (-1, self.config.num_attention_heads))
- layer_past_state = layer_past_state.index_select(0, beam_idx.to(layer_past_state.device))
-# layer_past_state = layer_past_state.flatten(0, 1)
- reordered_layer_past_states = reordered_layer_past_states + (layer_past_state,)
-
- assert reordered_layer_past_states[0].shape == layer_past_states[0].shape
- assert len(reordered_layer_past_states) == len(layer_past_states)
-
- reordered_decoder_past = reordered_decoder_past + (reordered_layer_past_states,)
- return reordered_decoder_past
-
-
-class NorT5Encoder(NorT5Model):
- def __init__(self, config):
- super().__init__(config, add_lm_layer=False, add_decoder=True)
-
- def forward(
- self,
- input_ids: Optional[torch.Tensor] = None,
- attention_mask: Optional[torch.Tensor] = None,
- output_hidden_states: Optional[bool] = None,
- output_attentions: Optional[bool] = None,
- return_dict: Optional[bool] = None,
- ):
- return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-
- return self.get_encoder_output(
- input_ids, attention_mask, output_hidden_states, output_attentions, return_dict=return_dict
- )
diff --git a/spaces/m-a-p/Music-Descriptor/app.py b/spaces/m-a-p/Music-Descriptor/app.py
deleted file mode 100644
index d094ed75df938d5793958a2866b2d683f149423e..0000000000000000000000000000000000000000
--- a/spaces/m-a-p/Music-Descriptor/app.py
+++ /dev/null
@@ -1,243 +0,0 @@
-import gradio as gr
-#
-from transformers import Wav2Vec2FeatureExtractor
-from transformers import AutoModel
-import torch
-from torch import nn
-import torchaudio
-import torchaudio.transforms as T
-import logging
-
-import json
-import os
-import re
-
-import pandas as pd
-
-import importlib
-modeling_MERT = importlib.import_module("MERT-v1-95M.modeling_MERT")
-
-from Prediction_Head.MTGGenre_head import MLPProberBase
-# input cr: https://huggingface.co/spaces/thealphhamerc/audio-to-text/blob/main/app.py
-
-
-logger = logging.getLogger("MERT-v1-95M-app")
-logger.setLevel(logging.INFO)
-ch = logging.StreamHandler()
-ch.setLevel(logging.INFO)
-formatter = logging.Formatter(
- "%(asctime)s;%(levelname)s;%(message)s", "%Y-%m-%d %H:%M:%S")
-ch.setFormatter(formatter)
-logger.addHandler(ch)
-
-
-
-inputs = [
- gr.components.Audio(type="filepath", label="Add music audio file"),
- gr.inputs.Audio(source="microphone", type="filepath"),
-]
-live_inputs = [
- gr.Audio(source="microphone",streaming=True, type="filepath"),
-]
-
-title = "One Model for All Music Understanding Tasks"
-description = "An example of using the [MERT-v1-95M](https://huggingface.co/m-a-p/MERT-v1-95M) model as backbone to conduct multiple music understanding tasks with the universal represenation."
-# article = "The tasks include EMO, GS, MTGInstrument, MTGGenre, MTGTop50, MTGMood, NSynthI, NSynthP, VocalSetS, VocalSetT. \n\n More models can be referred at the [map organization page](https://huggingface.co/m-a-p)."
-with open('./README.md', 'r') as f:
- # skip the header
- header_count = 0
- for line in f:
- if '---' in line:
- header_count += 1
- if header_count >= 2:
- break
- # read the rest of the content
- article = f.read()
-
-audio_examples = [
- # ["input/example-1.wav"],
- # ["input/example-2.wav"],
-]
-
-df_init = pd.DataFrame(columns=['Task', 'Top 1', 'Top 2', 'Top 3', 'Top 4', 'Top 5'])
-transcription_df = gr.DataFrame(value=df_init, label="Output Dataframe", row_count=(
- 0, "dynamic"), max_rows=30, wrap=True, overflow_row_behaviour='paginate')
-# outputs = [gr.components.Textbox()]
-outputs = transcription_df
-
-df_init_live = pd.DataFrame(columns=['Task', 'Top 1', 'Top 2', 'Top 3', 'Top 4', 'Top 5'])
-transcription_df_live = gr.DataFrame(value=df_init_live, label="Output Dataframe", row_count=(
- 0, "dynamic"), max_rows=30, wrap=True, overflow_row_behaviour='paginate')
-outputs_live = transcription_df_live
-
-# Load the model and the corresponding preprocessor config
-# model = AutoModel.from_pretrained("m-a-p/MERT-v0-public", trust_remote_code=True)
-# processor = Wav2Vec2FeatureExtractor.from_pretrained("m-a-p/MERT-v0-public",trust_remote_code=True)
-model = modeling_MERT.MERTModel.from_pretrained("./MERT-v1-95M")
-processor = Wav2Vec2FeatureExtractor.from_pretrained("./MERT-v1-95M")
-
-device = 'cuda' if torch.cuda.is_available() else 'cpu'
-
-MERT_BEST_LAYER_IDX = {
- 'EMO': 5,
- 'GS': 8,
- 'GTZAN': 7,
- 'MTGGenre': 7,
- 'MTGInstrument': 'all',
- 'MTGMood': 6,
- 'MTGTop50': 6,
- 'MTT': 'all',
- 'NSynthI': 6,
- 'NSynthP': 1,
- 'VocalSetS': 2,
- 'VocalSetT': 9,
-}
-
-CLASSIFIERS = {
-
-}
-
-ID2CLASS = {
-
-}
-
-TASKS = ['GS', 'MTGInstrument', 'MTGGenre', 'MTGTop50', 'MTGMood', 'NSynthI', 'NSynthP', 'VocalSetS', 'VocalSetT','EMO',]
-Regression_TASKS = ['EMO']
-head_dir = './Prediction_Head/best-layer-MERT-v1-95M'
-for task in TASKS:
- print('loading', task)
- with open(os.path.join(head_dir,f'{task}.id2class.json'), 'r') as f:
- ID2CLASS[task]=json.load(f)
- num_class = len(ID2CLASS[task].keys())
- CLASSIFIERS[task] = MLPProberBase(d=768, layer=MERT_BEST_LAYER_IDX[task], num_outputs=num_class)
- CLASSIFIERS[task].load_state_dict(torch.load(f'{head_dir}/{task}.ckpt')['state_dict'])
- CLASSIFIERS[task].to(device)
-
-model.to(device)
-
-
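- # Full inference pipeline: load the audio, resample it to the processor's rate, extract MERT
- # hidden states, average them over time, then score every task with its prediction head.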
-def model_infernce(inputs):
- waveform, sample_rate = torchaudio.load(inputs)
-
- resample_rate = processor.sampling_rate
-
- # make sure the sample rate is aligned with the processor's expected rate
- if resample_rate != sample_rate:
- # print(f'setting rate from {sample_rate} to {resample_rate}')
- resampler = T.Resample(sample_rate, resample_rate)
- waveform = resampler(waveform)
-
- waveform = waveform.view(-1,) # make it (n_sample, )
- model_inputs = processor(waveform, sampling_rate=resample_rate, return_tensors="pt")
- model_inputs.to(device)
- with torch.no_grad():
- model_outputs = model(**model_inputs, output_hidden_states=True)
-
- # take a look at the output shape: there are 13 layers of representations,
- # and each layer performs differently on different downstream tasks, so choose empirically
- all_layer_hidden_states = torch.stack(model_outputs.hidden_states).squeeze()[1:,:,:].unsqueeze(0)
- print(all_layer_hidden_states.shape) # [13 layer, Time steps, 768 feature_dim]
- all_layer_hidden_states = all_layer_hidden_states.mean(dim=2)
-
- task_output_texts = ""
- df = pd.DataFrame(columns=['Task', 'Top 1', 'Top 2', 'Top 3', 'Top 4', 'Top 5'])
- df_objects = []
-
- for task in TASKS:
- num_class = len(ID2CLASS[task].keys())
- if MERT_BEST_LAYER_IDX[task] == 'all':
- logits = CLASSIFIERS[task](all_layer_hidden_states) # [1, 87]
- else:
- logits = CLASSIFIERS[task](all_layer_hidden_states[:, MERT_BEST_LAYER_IDX[task]])
- # print(f'task {task} logits:', logits.shape, 'num class:', num_class)
-
- sorted_idx = torch.argsort(logits, dim = -1, descending=True)[0] # batch =1
- sorted_prob,_ = torch.sort(nn.functional.softmax(logits[0], dim=-1), dim=-1, descending=True)
- # print(sorted_prob)
- # print(sorted_prob.shape)
-
- top_n_show = 5 if num_class >= 5 else num_class
- # task_output_texts = task_output_texts + f"TASK {task} output:\n" + "\n".join([str(ID2CLASS[task][str(sorted_idx[idx].item())])+f', probability: {sorted_prob[idx].item():.2%}' for idx in range(top_n_show)]) + '\n'
- # task_output_texts = task_output_texts + '----------------------\n'
-
- row_elements = [task]
- for idx in range(top_n_show):
- print(ID2CLASS[task])
- # print('id', str(sorted_idx[idx].item()))
- output_class_name = str(ID2CLASS[task][str(sorted_idx[idx].item())])
- output_class_name = re.sub(r'^\w+---', '', output_class_name)
- output_class_name = re.sub(r'^\w+\/\w+---', '', output_class_name)
- # print('output name', output_class_name)
- output_prob = f' {sorted_prob[idx].item():.2%}'
- row_elements.append(output_class_name+output_prob)
- # fill empty elements
- for _ in range(5+1 - len(row_elements)):
- row_elements.append(' ')
- df_objects.append(row_elements)
- df = pd.DataFrame(df_objects, columns=['Task', 'Top 1', 'Top 2', 'Top 3', 'Top 4', 'Top 5'])
- return df
-
-def convert_audio(inputs, microphone):
- if (microphone is not None):
- inputs = microphone
- df = model_infernce(inputs)
- return df
-
-def live_convert_audio(microphone):
- if (microphone is not None):
- inputs = microphone
- df = model_infernce(inputs)
- return df
-
-audio_chunked = gr.Interface(
- fn=convert_audio,
- inputs=inputs,
- outputs=outputs,
- allow_flagging="never",
- title=title,
- description=description,
- article=article,
- examples=audio_examples,
-)
-
-live_audio_chunked = gr.Interface(
- fn=live_convert_audio,
- inputs=live_inputs,
- outputs=outputs_live,
- allow_flagging="never",
- title=title,
- description=description,
- article=article,
- # examples=audio_examples,
- live=True,
-)
-
-
-demo = gr.Blocks()
-with demo:
- gr.TabbedInterface(
- [
- audio_chunked,
- live_audio_chunked,
- ],
- [
- "Audio File or Recording",
- "Live Streaming Music"
- ]
- )
-# demo.queue(concurrency_count=1, max_size=5)
-demo.launch(show_api=False)
\ No newline at end of file
diff --git a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/cross_system.h b/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/cross_system.h
deleted file mode 100644
index f89f3dba8d3c9c07e259e0aba3ed7aed6dfa1f54..0000000000000000000000000000000000000000
--- a/spaces/ma-xu/LIVE/thrust/thrust/system/cuda/detail/cross_system.h
+++ /dev/null
@@ -1,344 +0,0 @@
-/******************************************************************************
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions are met:
- * * Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * * Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- * * Neither the name of the NVIDIA CORPORATION nor the
- * names of its contributors may be used to endorse or promote products
- * derived from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
- * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- *
- ******************************************************************************/
-#pragma once
-
-#include
-#include
-#include
-
-namespace thrust
-{
-namespace cuda_cub {
-
- template <class Sys1, class Sys2>
- struct cross_system : execution_policy<cross_system<Sys1, Sys2> >
- {
- typedef thrust::execution_policy<Sys1> policy1;
- typedef thrust::execution_policy<Sys2> policy2;
-
- policy1 &sys1;
- policy2 &sys2;
-
- inline __host__ __device__
- cross_system(policy1 &sys1, policy2 &sys2) : sys1(sys1), sys2(sys2) {}
-
- inline __host__ __device__
- cross_system<Sys2, Sys1> rotate() const
- {
- return cross_system<Sys2, Sys1>(sys2, sys1);
- }
- };
-
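- // The direction_of_copy overloads below map an (origin, destination) execution-policy pair
- // to the matching cudaMemcpyKind at compile time via overload resolution.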
-#if THRUST_CPP_DIALECT >= 2011
- // Device to host.
- template <typename System1, typename System2>
- THRUST_CONSTEXPR __host__ __device__
- auto direction_of_copy(
- thrust::system::cuda::execution_policy<System1> const&
- , thrust::cpp::execution_policy<System2> const&
- )
- THRUST_DECLTYPE_RETURNS(
- thrust::detail::integral_constant<
- cudaMemcpyKind, cudaMemcpyDeviceToHost
- >{}
- )
-
- // Host to device.
- template <typename System1, typename System2>
- THRUST_CONSTEXPR __host__ __device__
- auto direction_of_copy(
- thrust::cpp::execution_policy<System1> const&
- , thrust::system::cuda::execution_policy<System2> const&
- )
- THRUST_DECLTYPE_RETURNS(
- thrust::detail::integral_constant<
- cudaMemcpyKind, cudaMemcpyHostToDevice
- >{}
- )
-
- // Device to device.
- template <typename System1, typename System2>