\n \n\nphysics = mujoco.Physics.from_xml_string(swinging_body)\n# Visualize the joint axis.\nscene_option = mujoco.wrapper.core.MjvOption()\nscene_option.flags[enums.mjtVisFlag.mjVIS_JOINT] = True\npixels = physics.render(scene_option=scene_option)\nPIL.Image.fromarray(pixels)\nExplanation: static_model is written in MuJoCo's XML-based MJCF modeling language. The from_xml_string() method invokes the model compiler, which instantiates the library's internal data structures. These can be accessed via the physics object, see below.\nAdding DOFs and simulating, advanced rendering\nThis is a perfectly legitimate model, but if we simulate it, nothing will happen except for time advancing. This is because this model has no degrees of freedom (DOFs). We add DOFs by adding joints to bodies, specifying how they can move with respect to their parents. Let us add a hinge joint and re-render, visualizing the joint axis.\nEnd of explanation\n#@title Making a video {vertical-output: true}\nduration = 2 # (seconds)\nframerate = 30 # (Hz)\n# Visualize the joint axis\nscene_option = mujoco.wrapper.core.MjvOption()\nscene_option.flags[enums.mjtVisFlag.mjVIS_JOINT] = True\n# Simulate and display video.\nframes = []\nphysics.reset() # Reset state and time\nwhile physics.data.time < duration:\n physics.step()\n if len(frames) < physics.data.time * framerate:\n pixels = physics.render(scene_option=scene_option)\n frames.append(pixels)\ndisplay_video(frames, framerate)\nExplanation: The things that move (and which have inertia) are called bodies. The body's child joint specifies how that body can move with respect to its parent, in this case box_and_sphere with respect to the worldbody. \nNote that the body's frame is rotated with an euler directive, and its children, the geoms and the joint, rotate with it. 
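The local-to-parent-frame rule can be sketched without MuJoCo. The following is a hedged stdlib illustration of how a child offset composes with a rotated parent frame; the function names and the planar (2D) simplification are invented for this example and are not part of MJCF:

```python
import math

def rot_z(angle_deg):
    # Planar rotation matrix about the z-axis, returned as two row tuples.
    a = math.radians(angle_deg)
    return ((math.cos(a), -math.sin(a)), (math.sin(a), math.cos(a)))

def to_world(parent_pos, parent_angle_deg, local_pos):
    # Compose a parent frame (position + rotation) with a child offset:
    # world = parent_pos + R(parent_angle) @ local_pos.
    (r00, r01), (r10, r11) = rot_z(parent_angle_deg)
    x, y = local_pos
    return (parent_pos[0] + r00 * x + r01 * y,
            parent_pos[1] + r10 * x + r11 * y)

# A geom offset (1, 0) in a body frame rotated by 90 degrees lands on +y,
# just as geoms and joints rotate along with their parent body's frame.
print(to_world((0.0, 0.0), 90.0, (1.0, 0.0)))
```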
This is to emphasize the local-to-parent-frame nature of position and orientation directives in MJCF.\nLet's make a video, to get a sense of the dynamics and to see the body swinging under gravity.\nEnd of explanation\n#@title Enable transparency and frame visualization {vertical-output: true}\nscene_option = mujoco.wrapper.core.MjvOption()\nscene_option.frame = enums.mjtFrame.mjFRAME_GEOM\nscene_option.flags[enums.mjtVisFlag.mjVIS_TRANSPARENT] = True\npixels = physics.render(scene_option=scene_option)\nPIL.Image.fromarray(pixels)\n#@title Depth rendering {vertical-output: true}\n# depth is a float array, in meters.\ndepth = physics.render(depth=True)\n# Shift nearest values to the origin.\ndepth -= depth.min()\n# Scale by 2 mean distances of near rays.\ndepth /= 2*depth[depth <= 1].mean()\n# Scale to [0, 255]\npixels = 255*np.clip(depth, 0, 1)\nPIL.Image.fromarray(pixels.astype(np.uint8))\n#@title Segmentation rendering {vertical-output: true}\nseg = physics.render(segmentation=True)\n# Display the contents of the first channel, which contains object\n# IDs. 
The second channel, seg[:, :, 1], contains object types.\ngeom_ids = seg[:, :, 0]\n# Infinity is mapped to -1\ngeom_ids = geom_ids.astype(np.float64) + 1\n# Scale to [0, 1]\ngeom_ids = geom_ids / geom_ids.max()\npixels = 255*geom_ids\nPIL.Image.fromarray(pixels.astype(np.uint8))\n#@title Projecting from world to camera coordinates {vertical-output: true}\n# Get the world coordinates of the box corners\nbox_pos = physics.named.data.geom_xpos['red_box']\nbox_mat = physics.named.data.geom_xmat['red_box'].reshape(3, 3)\nbox_size = physics.named.model.geom_size['red_box']\noffsets = np.array([-1, 1]) * box_size[:, None]\nxyz_local = np.stack(itertools.product(*offsets)).T\nxyz_global = box_pos[:, None] + box_mat @ xyz_local\n# Camera matrices multiply homogeneous [x, y, z, 1] vectors.\ncorners_homogeneous = np.ones((4, xyz_global.shape[1]), dtype=float)\ncorners_homogeneous[:3, :] = xyz_global\n# Get the camera matrix.\ncamera = mujoco.Camera(physics)\ncamera_matrix = camera.matrix\n# Project world coordinates into pixel space. See:\n# https://en.wikipedia.org/wiki/3D_projection#Mathematical_formula\nxs, ys, s = camera_matrix @ corners_homogeneous\n# x and y are in the pixel coordinate system.\nx = xs / s\ny = ys / s\n# Render the camera view and overlay the projected corner coordinates.\npixels = camera.render()\nfig, ax = plt.subplots(1, 1)\nax.imshow(pixels)\nax.plot(x, y, '+', c='w')\nax.set_axis_off()\nExplanation: Note how we collect the video frames. 
Because physics simulation timesteps are generally much smaller than framerates (the default timestep is 2ms), we don't render after each step.\nRendering options\nLike joint visualization, additional rendering options are exposed as parameters to the render method.\nEnd of explanation\nphysics.model.geom_pos\nExplanation: MuJoCo basics and named indexing\nmjModel\nMuJoCo's mjModel, encapsulated in physics.model, contains the model description, including the default initial state and other fixed quantities which are not a function of the state, e.g. the positions of geoms in the frame of their parent body. The (x, y, z) offsets of the box and sphere geoms, relative to their parent body box_and_sphere, are given by model.geom_pos:\nEnd of explanation\nhelp(type(physics.model).geom_pos)\nExplanation: Docstrings of attributes provide short descriptions.\nEnd of explanation\nprint('timestep', physics.model.opt.timestep)\nprint('gravity', physics.model.opt.gravity)\nExplanation: The model.opt structure contains global quantities like\nEnd of explanation\nprint(physics.data.time, physics.data.qpos, physics.data.qvel)\nExplanation: mjData\nmjData, encapsulated in physics.data, contains the state and quantities that depend on it. The state is made up of time, generalized positions and generalized velocities. These are respectively data.time, data.qpos and data.qvel. \nLet's print the state of the swinging body where we left it:\nEnd of explanation\nprint(physics.data.geom_xpos)\nExplanation: physics.data also contains functions of the state, for example the Cartesian positions of objects in the world frame. 
The (x, y, z) positions of our two geoms are in data.geom_xpos:\nEnd of explanation\nprint(physics.named.data.geom_xpos)\nExplanation: Named indexing\nThe semantics of the above arrays are made clearer using the named wrapper, which assigns names to rows and type names to columns.\nEnd of explanation\nprint(physics.named.model.geom_pos)\nExplanation: Note how model.geom_pos and data.geom_xpos have similar semantics but very different meanings.\nEnd of explanation\nphysics.named.data.geom_xpos['green_sphere', 'z']\nExplanation: Name strings can be used to index into the relevant quantities, making code much more readable and robust.\nEnd of explanation\nphysics.named.data.qpos['swing']\nExplanation: Joint names can be used to index into quantities in joint space (beginning with the letter q):\nEnd of explanation\n#@title Changing colors using named indexing{vertical-output: true}\nrandom_rgb = np.random.rand(3)\nphysics.named.model.geom_rgba['red_box', :3] = random_rgb\npixels = physics.render()\nPIL.Image.fromarray(pixels)\nExplanation: We can mix NumPy slicing operations with named indexing. As an example, we can set the color of the box using its name (\"red_box\") as an index into the rows of the geom_rgba array.\nEnd of explanation\nphysics.named.data.qpos['swing'] = np.pi\nprint('Without reset_context, spatial positions are not updated:',\n physics.named.data.geom_xpos['green_sphere', ['z']])\nwith physics.reset_context():\n physics.named.data.qpos['swing'] = np.pi\nprint('After reset_context, positions are up-to-date:',\n physics.named.data.geom_xpos['green_sphere', ['z']])\nExplanation: Note that while physics.model quantities will not be changed by the engine, we can change them ourselves between steps.\nSetting the state with reset_context()\nIn order for data quantities that are functions of the state to be in sync with the state, MuJoCo's mj_step1() needs to be called. 
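The dependency between the state and derived quantities can be mimicked with a toy class. This is only a sketch of the pattern, assuming nothing about dm_control's internals; TinyPhysics, its attributes, and its forward step are invented here for illustration:

```python
from contextlib import contextmanager

class TinyPhysics:
    # Toy stand-in for the state -> derived-quantity dependency.
    def __init__(self):
        self.qpos = 0.0
        self.xpos = 0.0  # derived quantity, valid only after _forward()

    def _forward(self):
        # Stand-in for the forward computation (mj_step1 in MuJoCo).
        self.xpos = 2.0 * self.qpos

    @contextmanager
    def reset_context(self):
        yield self       # the caller mutates qpos inside the block
        self._forward()  # derived quantities are re-synced on exit

p = TinyPhysics()
p.qpos = 3.0           # mutating the state directly leaves xpos stale
stale = p.xpos
with p.reset_context():
    p.qpos = 3.0       # the same mutation inside the context
print(stale, p.xpos)   # 0.0 before the sync, 6.0 after
```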
This is facilitated by the reset_context() context manager; see the in-depth discussion in Section 2.1 of the dm_control tech report.\nEnd of explanation\n#@title The \"tippe-top\" model{vertical-output: true}\ntippe_top = \n\n \nphysics = mujoco.Physics.from_xml_string(tippe_top)\nPIL.Image.fromarray(physics.render(camera_id='closeup'))\nExplanation: Free bodies: the self-inverting \"tippe-top\"\nA free body is a body with a free joint, with 6 movement DOFs: 3 translations and 3 rotations. We could give our box_and_sphere body a free joint and watch it fall, but let's look at something more interesting. A \"tippe top\" is a spinning toy which flips itself on its head (Wikipedia). We model it as follows:\nEnd of explanation\nprint('positions', physics.data.qpos)\nprint('velocities', physics.data.qvel)\nExplanation: Note several new features of this model definition:\n0. The free joint is added with the <freejoint/> clause, which is similar to <joint type=\"free\"/>, but prohibits unphysical attributes like friction or stiffness.\n1. We use the <option/> clause to set the integrator to the more accurate 4th-order Runge-Kutta method.\n2. We define the floor's grid material inside the <asset/> clause and reference it in the floor geom. \n3. We use an invisible and non-colliding box geom called ballast to move the top's center-of-mass lower. Having a low center of mass is (counter-intuitively) required for the flipping behaviour to occur.\n4. We save our initial spinning state as a keyframe. It has a high rotational velocity around the z-axis, but is not perfectly oriented with the world.\n5. 
We define a <camera> in our model, and then render from it using the camera_id argument to render().\nLet us examine the state:\nEnd of explanation\n#@title Video of the tippe-top {vertical-output: true}\nduration = 7 # (seconds)\nframerate = 60 # (Hz)\n# Simulate and display video.\nframes = []\nphysics.reset(0) # Reset to keyframe 0 (load a saved state).\nwhile physics.data.time < duration:\n physics.step()\n if len(frames) < (physics.data.time) * framerate:\n pixels = physics.render(camera_id='closeup')\n frames.append(pixels)\ndisplay_video(frames, framerate)\nExplanation: The velocities are easy to interpret, 6 zeros, one for each DOF. What about the length-7 positions? We can see the initial 2cm height of the body; the subsequent four numbers are the 3D orientation, defined by a unit quaternion. These normalized four-vectors, which preserve the topology of the orientation group, are the reason that data.qpos can be bigger than data.qvel: 3D orientations are represented with 4 numbers while angular velocities are 3 numbers.\nEnd of explanation\n#@title Measuring values {vertical-output: true}\ntimevals = []\nangular_velocity = []\nstem_height = []\n# Simulate and save data\nphysics.reset(0)\nwhile physics.data.time < duration:\n physics.step()\n timevals.append(physics.data.time)\n angular_velocity.append(physics.data.qvel[3:6].copy())\n stem_height.append(physics.named.data.geom_xpos['stem', 'z'])\ndpi = 100\nwidth = 480\nheight = 640\nfigsize = (width / dpi, height / dpi)\n_, ax = plt.subplots(2, 1, figsize=figsize, dpi=dpi, sharex=True)\nax[0].plot(timevals, angular_velocity)\nax[0].set_title('angular velocity')\nax[0].set_ylabel('radians / second')\nax[1].plot(timevals, stem_height)\nax[1].set_xlabel('time (seconds)')\nax[1].set_ylabel('meters')\n_ = ax[1].set_title('stem height')\nExplanation: Measuring values from physics.data\nThe physics.data structure contains all of the dynamic variables and intermediate results produced by the simulation. 
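The length-7 qpos versus length-6 qvel bookkeeping of the free body discussed above can be checked with plain arithmetic. A hedged stdlib sketch (quat_from_axis_angle is our own helper, not a MuJoCo function):

```python
import math

def quat_from_axis_angle(axis, angle):
    # Unit quaternion [w, x, y, z] for a rotation of `angle` radians about `axis`.
    norm = math.sqrt(sum(a * a for a in axis))
    ux, uy, uz = (a / norm for a in axis)
    s = math.sin(angle / 2.0)
    return [math.cos(angle / 2.0), ux * s, uy * s, uz * s]

# 3 position numbers + 4 quaternion numbers = 7 entries in qpos,
# while qvel has 3 linear + 3 angular = 6 entries.
q = quat_from_axis_angle((0.0, 0.0, 1.0), math.pi / 2)  # 90 degrees about z
qpos = [0.0, 0.0, 0.02] + q
print(len(qpos), sum(c * c for c in q))  # 7 entries; the quaternion has unit norm
```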
These are expected to change on each timestep. \nBelow we simulate for 2000 timesteps and plot the state and height of the sphere as a function of time.\nEnd of explanation\n#@title chaotic pendulum {vertical-output: true}\nchaotic_pendulum = \n\n \n \n \n \n \n \n \n \n \n \n
\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
\n \n \n\nphysics = mujoco.Physics.from_xml_string(chaotic_pendulum)\npixels = physics.render(height=480, width=640, camera_id=\"fixed\")\nPIL.Image.fromarray(pixels)\nExplanation: Example: A chaotic pendulum\nBelow is a model of a chaotic pendulum, similar to this one in the San Francisco Exploratorium.\nEnd of explanation\n#@title physics vs. rendering: {vertical-output: true}\n# setup\nn_seconds = 6\nframerate = 30 # Hz\nn_frames = int(n_seconds * framerate)\nframes = []\n# set initial state\nwith physics.reset_context():\n physics.named.data.qvel['root'] = 10\n# simulate and record frames\nframe = 0\nsim_time = 0\nrender_time = 0\nn_steps = 0\nfor i in range(n_frames):\n while physics.data.time * framerate < i:\n tic = time.time()\n physics.step()\n sim_time += time.time() - tic\n n_steps += 1\n tic = time.time()\n frame = physics.render(240, 320, camera_id=\"fixed\")\n render_time += time.time() - tic\n frames.append(frame.copy())\n \n# print timing and play video\nprint('simulation: {:6.2f} ms/frame ({:5.0f}Hz)'.format(\n 1000*sim_time/n_steps, n_steps/sim_time))\nprint('rendering: {:6.2f} ms/frame ({:5.0f}Hz)'.format(\n 1000*render_time/n_frames, n_frames/render_time))\nprint('\\n')\n# show video\ndisplay_video(frames, framerate)\nExplanation: Timing\nLet's see a video of it in action while we time the components:\nEnd of explanation\n#@title chaos: sensitivity to perturbation {vertical-output: true}\nPERTURBATION = 1e-7\nSIM_DURATION = 10 # seconds\nNUM_REPEATS = 8\n# preallocate\nn_steps = int(SIM_DURATION / physics.model.opt.timestep)\nsim_time = np.zeros(n_steps)\nangle = np.zeros(n_steps)\nenergy = np.zeros(n_steps)\n# prepare plotting axes\n_, ax = plt.subplots(2, 1, sharex=True)\n# simulate NUM_REPEATS times with slightly different initial conditions\nfor _ in range(NUM_REPEATS):\n # initialize\n with physics.reset_context():\n physics.data.qvel[0] = 10 # root joint velocity\n # perturb initial velocities\n physics.data.qvel[:] += PERTURBATION * 
np.random.randn(physics.model.nv)\n # simulate\n for i in range(n_steps):\n physics.step()\n sim_time[i] = physics.data.time\n angle[i] = physics.named.data.qpos['root']\n energy[i] = physics.data.energy[0] + physics.data.energy[1]\n # plot\n ax[0].plot(sim_time, angle)\n ax[1].plot(sim_time, energy)\n# finalize plot\nax[0].set_title('root angle')\nax[0].set_ylabel('radian')\nax[1].set_title('total energy') \nax[1].set_ylabel('Joule')\nax[1].set_xlabel('second')\nplt.tight_layout()\nExplanation: Chaos\nThis is a chaotic system; small perturbations in initial conditions accumulate quickly:\nEnd of explanation\n#@title reducing the time-step: {vertical-output: true}\nSIM_DURATION = 10 # (seconds)\nTIMESTEPS = np.power(10, np.linspace(-2, -4, 5))\n# prepare plotting axes\n_, ax = plt.subplots(1, 1)\nfor dt in TIMESTEPS:\n # set timestep, print\n physics.model.opt.timestep = dt\n \n # allocate \n n_steps = int(SIM_DURATION / physics.model.opt.timestep)\n sim_time = np.zeros(n_steps)\n energy = np.zeros(n_steps) \n \n # initialize\n with physics.reset_context():\n physics.data.qvel[0] = 9 # root joint velocity\n # simulate\n print('{} steps at dt = {:2.2g}ms'.format(n_steps, 1000*dt))\n for i in range(n_steps):\n physics.step()\n sim_time[i] = physics.data.time\n energy[i] = physics.data.energy[0] + physics.data.energy[1]\n # plot\n ax.plot(sim_time, energy, label='timestep = {:2.2g}ms'.format(1000*dt))\n \n# finalize plot\nax.set_title('energy')\nax.set_ylabel('Joule')\nax.set_xlabel('second')\nax.legend(frameon=True);\nplt.tight_layout()\nExplanation: Timestep and accuracy\nQ: Why is the energy varying at all? There is no friction or damping; this system should conserve energy. \nA: Because of the discretization of time. 
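The effect of time discretization on energy can be reproduced in a few lines of stdlib Python. This is a hedged sketch using explicit Euler on a unit harmonic oscillator, which is not the integrator MuJoCo uses, but it shows the same trend: a finer timestep conserves energy better.

```python
def euler_energy_drift(dt, t_end=10.0):
    # Integrate x'' = -x with explicit Euler and return the relative
    # energy error at t_end (the exact dynamics conserve energy).
    x, v = 1.0, 0.0
    e0 = 0.5 * (v * v + x * x)
    for _ in range(int(t_end / dt)):
        x, v = x + dt * v, v - dt * x  # explicit Euler update
    return abs(0.5 * (v * v + x * x) - e0) / e0

coarse = euler_energy_drift(0.01)
fine = euler_energy_drift(0.001)
print(coarse, fine)  # the 10x smaller timestep drifts far less
```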
\nIf we decrease the timestep we'll get better accuracy, hence better energy conservation:\nEnd of explanation\n#@title increasing the time-step: {vertical-output: true}\nSIM_DURATION = 10 # (seconds)\nTIMESTEPS = np.power(10, np.linspace(-2, -1.5, 7))\n# get plotting axes\nax = plt.gca()\nfor dt in TIMESTEPS:\n # set timestep\n physics.model.opt.timestep = dt\n \n # allocate \n n_steps = int(SIM_DURATION / physics.model.opt.timestep)\n sim_time = np.zeros(n_steps)\n energy = np.zeros(n_steps) * np.nan\n speed = np.zeros(n_steps) * np.nan\n \n # initialize\n with physics.reset_context():\n physics.data.qvel[0] = 11 # root joint velocity\n # simulate\n print('{} steps at dt = {:2.2g}ms'.format(n_steps, 1000*dt))\n for i in range(n_steps):\n try:\n physics.step()\n except BaseException: # raises mujoco.engine.base.PhysicsError\n print('numerical divergence at timestep {}.'.format(i))\n break\n sim_time[i] = physics.data.time\n energy[i] = sum(abs(physics.data.qvel))\n speed[i] = np.linalg.norm(physics.data.qvel)\n # plot\n ax.plot(sim_time, energy, label='timestep = {:2.2g}ms'.format(1000*dt))\n ax.set_yscale('log')\n# finalize plot\nax.set_ybound(1, 1e3)\nax.set_title('energy')\nax.set_ylabel('Joule')\nax.set_xlabel('second')\nax.legend(frameon=True, loc='lower right');\nplt.tight_layout()\nExplanation: Timestep and divergence\nWhen we increase the time step, the simulation quickly diverges\nEnd of explanation\n#@title 'box_and_sphere' free body: {vertical-output: true}\nfree_body_MJCF = \n\n \n \n \n \n \n \n \n
\n \n \n \n \n \n
\n \n\nphysics = mujoco.Physics.from_xml_string(free_body_MJCF)\npixels = physics.render(400, 600, \"fixed\") \nPIL.Image.fromarray(pixels)\n#@title contacts in slow-motion: (0.25x){vertical-output: true}\nn_frames = 200\nheight = 240\nwidth = 320\nframes = np.zeros((n_frames, height, width, 3), dtype=np.uint8)\n# visualize contact frames and forces, make body transparent\noptions = mujoco.wrapper.core.MjvOption()\nmujoco.wrapper.core.mjlib.mjv_defaultOption(options.ptr)\noptions.flags[enums.mjtVisFlag.mjVIS_CONTACTPOINT] = True\noptions.flags[enums.mjtVisFlag.mjVIS_CONTACTFORCE] = True\noptions.flags[enums.mjtVisFlag.mjVIS_TRANSPARENT] = True\n# tweak scales of contact visualization elements\nphysics.model.vis.scale.contactwidth = 0.1\nphysics.model.vis.scale.contactheight = 0.03\nphysics.model.vis.scale.forcewidth = 0.05\nphysics.model.vis.map.force = 0.3\n# random initial rotational velocity:\nwith physics.reset_context():\n physics.data.qvel[3:6] = 5*np.random.randn(3)\n# simulate and render\nfor i in range(n_frames):\n while physics.data.time < i/120.0: #1/4x real time\n physics.step()\n frames[i] = physics.render(height, width, camera_id=\"track\", scene_option=options)\n# show video\ndisplay_video(frames)\nExplanation: Contacts\nEnd of explanation\n#@title contact-related quantities: {vertical-output: true}\nn_steps = 499\n# allocate\nsim_time = np.zeros(n_steps)\nncon = np.zeros(n_steps)\nforce = np.zeros((n_steps,3))\nvelocity = np.zeros((n_steps, physics.model.nv))\npenetration = np.zeros(n_steps)\nacceleration = np.zeros((n_steps, physics.model.nv))\nforcetorque = np.zeros(6)\n# random initial rotational velocity:\nwith physics.reset_context():\n physics.data.qvel[3:6] = 2*np.random.randn(3)\n# simulate and save data\nfor i in range(n_steps):\n physics.step()\n sim_time[i] = physics.data.time\n ncon[i] = physics.data.ncon\n velocity[i] = physics.data.qvel[:]\n acceleration[i] = physics.data.qacc[:]\n # iterate over active contacts, save force and 
distance\n for j,c in enumerate(physics.data.contact):\n mjlib.mj_contactForce(physics.model.ptr, physics.data.ptr, \n j, forcetorque)\n force[i] += forcetorque[0:3]\n penetration[i] = min(penetration[i], c.dist)\n # we could also do \n # force[i] += physics.data.qfrc_constraint[0:3]\n # do you see why?\n \n# plot\n_, ax = plt.subplots(3, 2, sharex=True, figsize=(7, 10))\nlines = ax[0,0].plot(sim_time, force)\nax[0,0].set_title('contact force')\nax[0,0].set_ylabel('Newton')\nax[0,0].legend(iter(lines), ('normal z', 'friction x', 'friction y'));\nax[1,0].plot(sim_time, acceleration)\nax[1,0].set_title('acceleration')\nax[1,0].set_ylabel('(meter,radian)/s/s')\nax[2,0].plot(sim_time, velocity)\nax[2,0].set_title('velocity')\nax[2,0].set_ylabel('(meter,radian)/s')\nax[2,0].set_xlabel('second')\nax[0,1].plot(sim_time, ncon)\nax[0,1].set_title('number of contacts')\nax[0,1].set_yticks(range(6))\nax[1,1].plot(sim_time, force[:,0])\nax[1,1].set_yscale('log')\nax[1,1].set_title('normal (z) force - log scale')\nax[1,1].set_ylabel('Newton')\nz_gravity = -physics.model.opt.gravity[2]\nmg = physics.named.model.body_mass[\"box_and_sphere\"] * z_gravity\nmg_line = ax[1,1].plot(sim_time, np.ones(n_steps)*mg, label='m*g', linewidth=1)\nax[1,1].legend()\n \nax[2,1].plot(sim_time, 1000*penetration)\nax[2,1].set_title('penetration depth')\nax[2,1].set_ylabel('millimeter')\nax[2,1].set_xlabel('second')\n \nplt.tight_layout()\nExplanation: Analysis of contact forces\nEnd of explanation\n#@title tangential friction and slope: {vertical-output: true}\nMJCF = \n\n \n \n \n \n \n \n \n \n \n \n \n \n \n
\n \n \n
\n
\n \n \n
\n \n \n\n# load \nphysics = mujoco.Physics.from_xml_string(MJCF)\nn_frames = 60\nheight = 480\nwidth = 480\nvideo = np.zeros((n_frames, height, width, 3), dtype=np.uint8)\n# simulate and render\nphysics.reset()\nfor i in range(n_frames):\n while physics.data.time < i/30.0:\n physics.step()\n video[i] = physics.render(height, width, \"y\")\ndisplay_video(video)\nExplanation: Friction\nEnd of explanation\n#@title bat and piñata: {vertical-output: true}\nMJCF = \n\n \n \n \n \n \n \n \n \n \n \n \n
\n \n \n
\n \n
\n \n \n \n \n
\n \n \n \n \n \n \n \n \n \n \n \n \n\nphysics = mujoco.Physics.from_xml_string(MJCF)\nPIL.Image.fromarray(physics.render(480, 480, \"fixed\") )\n#@title actuated bat and passive piñata: {vertical-output: true}\nn_frames = 180\nheight = 240\nwidth = 320\nvideo = np.zeros((n_frames, height, width, 3), dtype=np.uint8)\n# constant actuator signal\nwith physics.reset_context():\n physics.named.data.ctrl[\"my_motor\"] = 20\n# simulate and render\nfor i in range(n_frames):\n while physics.data.time < i/30.0:\n physics.step()\n video[i] = physics.render(height, width, \"fixed\")\ndisplay_video(video)\nExplanation: Actuators and tendons\nEnd of explanation\n#@title actuated piñata: {vertical-output: true}\nn_frames = 300\nheight = 240\nwidth = 320\nvideo = np.zeros((n_frames, height, width, 3), dtype=np.uint8)\n# constant actuator signal\nphysics.reset()\n# gravity compensation\nmg = -(physics.named.model.body_mass[\"box_and_sphere\"] * \n physics.model.opt.gravity[2])\nphysics.named.data.xfrc_applied[\"box_and_sphere\", 2] = mg\n# One Newton in the x direction\nphysics.named.data.xfrc_applied[\"box_and_sphere\", 0] = 1\n# simulate and render\nfor i in range(n_frames):\n while physics.data.time < i/30.0:\n physics.step()\n video[i] = physics.render(height, width)\ndisplay_video(video)\nExplanation: Let's ignore the actuator and apply forces directly to the body:\nEnd of explanation\n#@title virtual spring-damper: {vertical-output: true}\nMJCF = \n\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n
\n \n \n \n \n \n\nphysics = mujoco.Physics.from_xml_string(MJCF)\n# virtual spring coefficient\nKP = 3\n# prepare simulation\njac_pos = np.zeros((3, physics.model.nv))\njac_rot = np.zeros((3, physics.model.nv))\nn_frames = 50\nheight = 320 \nwidth = 320\nvideo = np.zeros((n_frames, height, 2*width, 3), dtype=np.uint8)\n# place target in random location\nwith physics.reset_context():\n target_pos = np.random.rand(3)*.5\n target_pos[:2] -= .25\n physics.named.model.geom_pos[\"target\"][:] = target_pos\n physics.named.model.geom_sameframe[\"target\"] = 0\n# simulate and render\nfor i in range(n_frames):\n while physics.data.time < i/15.0:\n \n # get Jacobian of fingertip position\n mjlib.mj_jacGeom(physics.model.ptr, \n physics.data.ptr, \n jac_pos, \n jac_rot, \n physics.model.name2id('fingertip', 'geom'))\n # multiply the jacobian by error to get vector in joint space\n err = (physics.named.data.geom_xpos[\"target\"] - \n physics.named.data.geom_xpos[\"fingertip\"])\n jnt_err = np.dot(err, jac_pos)\n \n # set virtual spring force\n physics.data.qfrc_applied[:] = KP * jnt_err\n \n # step\n physics.step()\n video[i] = np.hstack((physics.render(height, width, \"y\"),\n physics.render(height, width, \"x\")))\ndisplay_video(video, framerate=24)\nExplanation: Kinematic Jacobians\nA Jacobian is a derivative matrix of a vector-valued function. 
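As a concrete, MuJoCo-independent illustration of the same virtual-spring idea used in this section, the sketch below computes the positional Jacobian of an invented planar two-link arm by finite differences and maps a Cartesian error through the transposed Jacobian into joint space. All names, link lengths, and the finite-difference scheme here are made up for illustration:

```python
import math

def fk(q, l1=1.0, l2=1.0):
    # Fingertip position of a planar two-link arm with joint angles q.
    x = l1 * math.cos(q[0]) + l2 * math.cos(q[0] + q[1])
    y = l1 * math.sin(q[0]) + l2 * math.sin(q[0] + q[1])
    return (x, y)

def jacobian(q, eps=1e-6):
    # 2x2 positional Jacobian d(fingertip)/d(q) by finite differences.
    base = fk(q)
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        qp = list(q)
        qp[j] += eps
        pert = fk(qp)
        J[0][j] = (pert[0] - base[0]) / eps
        J[1][j] = (pert[1] - base[1]) / eps
    return J

# Virtual spring: joint-space force = KP * J^T @ (target - fingertip).
KP = 3.0
q = [0.2, 0.4]
target = (1.2, 0.8)
err = [t - p for t, p in zip(target, fk(q))]
J = jacobian(q)
jnt_force = [KP * (J[0][j] * err[0] + J[1][j] * err[1]) for j in range(2)]
print(jnt_force)  # a small step along this force pulls the fingertip toward the target
```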
MuJoCo computes the Jacobians of all transformations between joint space and Cartesian space.\nBelow we use the Jacobian of the end effector position to create a virtual spring to some random target.\nEnd of explanation"}}},{"rowIdx":2971,"cells":{"Unnamed: 0":{"kind":"number","value":2971,"string":"2,971"},"text_prompt":{"kind":"string","value":"Given the following text description, write Python code to implement the functionality described below step by step\nDescription:\n Functions\nMaking reusable blocks of code.\nStarting point\nStep1: What about for $a = 2$, $b = 8$, and $c = 1$?\nStep3: Functions\nStep5: Observe how this function works.\nStep7: Summarize\nStep11: Summarize\nHow do you get information into the function?\nYou get information into the function via arguments defined in the function name.\nModify\nAlter the code below so it takes two arguments (a and b) and prints out both of them.\nStep13: Predict\nWhat does b=5 let you do?\nStep15: b=5 allows you to define a default value for the argument.\nHow do you get information out of a function?\nStep18: Summarize\nHow do you get information out of the function?\nBy putting return at the end of a function, you can capture output.\nModify\nAlter the program below so it returns the calculated value.\nStep20: To return multiple values, use commas\nStep21: Implement\nWrite a function that uses the quadratic equation to find both roots of a polynomial for any $a$, $b$, and $c$."},"code_prompt":{"kind":"string","value":"Python Code:\n## Code here\nimport math\n(-4 + math.sqrt(4**2 - 4*1*3))/(2*1)\nExplanation: Functions\nMaking reusable blocks of code.\nStarting point:\nIn this exercise, we're going to calculate one of the roots from the quadratic formula:\n$r_{p} = \\frac{-b + \\sqrt{b^{2} - 4ac}}{2a}$\nDetermine $r_{p}$ for $a = 1$, $b=4$, and $c=3$.\nEnd of explanation\n## Code here \n# gonna make this saner\na = 2\nb = 8\nc = 1\n(-b + math.sqrt(b**2 - 4*a*c))/(2*a)\nExplanation: What about for $a = 2$, $b 
= 8$, and $c = 1$?\nEnd of explanation\ndef square(x):\n \n This function will square x.\n \n \n return x*x\ns = square(5)\nhelp(square)\nExplanation: Functions:\nCode can be organized into functions. \nFunctions allow you to wrap a piece of code and use it over and over. \nMakes code reusable\nAvoid having to re-type the same code (each time, maybe making an error)\nObserve how this function works.\nEnd of explanation\nimport math\ndef hypotenuse(y,theta):\n \n Return a hypotenuse given y and theta in radians.\n \n \n return math.sin(theta)*y\n \nh = hypotenuse(1,math.pi/2)\nprint(h)\n \nExplanation: Observe how this function works.\nEnd of explanation\ndef some_function(ARGUMENT):\n \n Print out ARGUMENT.\n \n return 1\n \n print(ARGUMENT)\n \nsome_function(10)\nsome_function(\"test\")\nExplanation: Summarize:\nWhat is the syntax for defining a function?\n```python\ndef FUNCTION_NAME(ARGUMENT_1,ARGUMENT_2,...,ARGUMENT_N):\n \n Description of function.\n \ndo_stuff\ndo_other_stuff\ndo_more_stuff\nreturn VALUE_1, VALUE_2, ... 
VALUE_N\n```\nHow do you get information into a function?\nEnd of explanation\ndef some_function(a):\n \n print out a\n \n print(a)\ndef some_function(a,b):\n \n print out a and b\n \n print(a,b)\nExplanation: Summarize\nHow do you get information into the function?\nYou get information into the function via arguments defined in the function name.\nModify\nAlter the code below so it takes two arguments (a and b) and prints out both of them.\nEnd of explanation\ndef some_function(a,b=5,c=7):\n \n Print a and b.\n \n \n print(a,b,c)\n \nsome_function(1,c=2)\nsome_function(1,2)\nsome_function(a=5,b=4)\nExplanation: Predict\nWhat does b=5 let you do?\nEnd of explanation\ndef some_function(a):\n \n Multiply a by 5.\n \n \n return a*5\nprint(some_function(2))\nprint(some_function(80.5))\nx = some_function(5)\nprint(x)\nExplanation: b=5 allows you to define a default value for the argument.\nHow do you get information out of a function?\nEnd of explanation\ndef some_function(a,b):\n \n Sum up a and b.\n \n \n v = a + b\n \n return v\nv = some_function(1,2)\nprint(v)\ndef some_function(a,b):\n \n Sum up a and b.\n \n \n v = a + b\n \n return v\nExplanation: Summarize\nHow do you get information out of the function?\nBy putting return at the end of a function, you can capture output.\nModify\nAlter the program below so it returns the calculated value.\nEnd of explanation\ndef some_function(a):\n \n Multiply a by 5 and 2.\n \n \n return a*5, a*2\nx, y = some_function(5)\nprint(x)\nExplanation: To return multiple values, use commas:\nEnd of explanation\n## Code here\ndef get_root(a,b,c):\n return (-b + math.sqrt(b**2 - 4*a*c))/(2*a)\nprint(get_root(1,4,3))\nprint(get_root(2,8,1))\nExplanation: Implement\nWrite a function that uses the quadratic equation to find both roots of a polynomial for any $a$, $b$, and $c$.\nEnd of explanation"}}},{"rowIdx":2972,"cells":{"Unnamed: 0":{"kind":"number","value":2972,"string":"2,972"},"text_prompt":{"kind":"string","value":"Given the 
following text description, write Python code to implement the functionality described below step by step\nDescription:\n Cppyy Tutorial\n(Modified from Enrico Guiraud's cppyy tutorial.)\nThis tutorial introduces the basic concepts for using cppyy, the automatic Python-C++ generator. To install cppyy on your system, simply run (this may take a while as it will pull in and compile a custom version of LLVM)\nStep1: There are three layers to cppyy\nStep3: We now have a class 'Integer1'. Note that this class exists on the C++ side and has to follow C++ rules. For example, whereas in Python we can simply redefine a class, we can't do that in C++. Therefore, we will number the Integer classes as we go along, to be able to extend the example as we see fit.\nPython classes are constructed dynamically. It doesn't matter where or how they are defined, whether in a Python script, \"compiled\" into a C extension module, or otherwise. Cppyy takes advantage of this fact to generate bindings on-the-fly. This leads to performance advantages for large libraries with thousands of C++ classes; general distribution advantages since, other than the module cppyy itself, no code depends on any specific version of Python; and it enables, through the Cling backend, interactive access to C++.\nTo access our first class, find it in gbl, the global namespace\nStep4: Namespaces have similarities to modules, so we could have imported the class as well.\nBound C++ classes are first-class Python objects. We can instantiate them, use normal Python introspection tools, call help(), they raise Python exceptions on failure, manage memory through Python's ref-counting and garbage collection, etc., etc. Furthermore, we can use them in conjunction with other C++ classes.\nStep5: Hum, that doesn't look very pretty. 
However, since Integer1 is now a Python class we can decorate it with a custom __repr__ function (we'll punt on the vector and instead convert it to a Python list for printing).\nStep8: Pythonizations\nAs we have seen so far, automatic bindings are simple and easy to use. However, even though they are first-class Python objects, they do have some rough C++ edges left. There is some pythonization going on in the background\nStep10: Class Hierarchies\nBoth Python and C++ support multiple programming paradigms, making it relatively straightforward to map language features (e.g. class inheritance, free functions, etc.); many other features can be cleanly hidden, merely because the syntax is very similar or otherwise natural (e.g. overloading, abstract classes, static data members, etc.); and yet others map gracefully because their semantic intent is expressed clearly in the syntax (e.g. smart pointers, STL, etc.).\nThe following presents a range of C++ features that map naturally, and exercises them in Python.\nStep12: Modern C++\nAs C++ matures, more and more semantic intent (such as object ownership) is expressed in the syntax. This is not for the benefit of bindings generators, but for the poor programmer having to read the code. Still, a bindings generator benefits greatly from this increased expression."},"code_prompt":{"kind":"string","value":"Python Code:\nimport cppyy\nExplanation: Cppyy Tutorial\n(Modified from Enrico Guiraud's cppyy tutorial.)\nThis tutorial introduces the basic concepts for using cppyy, the automatic Python-C++ generator. To install cppyy on your system, simply run (this may take a while as it will pull in and compile a custom version of LLVM):\n$ pip install cppyy\nFor further details on the installation, as well as the location of binary wheels, see:\n http://cppyy.readthedocs.io/en/latest/installation.html\nTo start, import module cppyy. 
All functionality, including using bound classes, always starts at this top-level.\nEnd of explanation\ncppyy.cppdef(\"\"\"\nclass Integer1 {\npublic:\n Integer1(int i) : m_data(i) {}\n int m_data;\n};\"\"\")\nExplanation: There are three layers to cppyy: at the top there are the module 'gbl' (the global namespace), a range of helper functions, and a set of sub-modules (such as py) that serve specific purposes. Let's start with defining a little helper class in C++ using the helper function cppdef, to make the example more interesting:\nEnd of explanation\nprint(cppyy.gbl.Integer1)\nExplanation: We now have a class 'Integer1'. Note that this class exists on the C++ side and has to follow C++ rules. For example, whereas in Python we can simply redefine a class, we can't do that in C++. Therefore, we will number the Integer classes as we go along, to be able to extend the example as we see fit.\nPython classes are constructed dynamically. It doesn't matter where or how they are defined, whether in a Python script, \"compiled\" into a C extension module, or otherwise. Cppyy takes advantage of this fact to generate bindings on-the-fly. This leads to performance advantages for large libraries with thousands of C++ classes; general distribution advantages since, other than the module cppyy itself, no code depends on any specific version of Python; and it enables, through the Cling backend, interactive access to C++.\nTo access our first class, find it in gbl, the global namespace:\nEnd of explanation\n# for convenience, bring Integer1 into __main__\nfrom cppyy.gbl import Integer1\n# create a C++ Integer1 object\ni = Integer1(42)\n# use Python inspection\nprint(\"Variable has an 'm_data' data member?\", hasattr(i, 'm_data') and 'Yes!' or 'No!')\nprint(\"Variable is an instance of int?\", isinstance(i, int) and 'Yes!' or 'No!')\nprint(\"Variable is an instance of Integer1?\", isinstance(i, Integer1) and 'Yes!' 
or 'No!')\n# pull in the STL vector class\nfrom cppyy.gbl.std import vector\n# create a vector of Integer1 objects; note how [] instantiates the template and () instantiates the class\nv = vector[Integer1]()\n# populate it\nv += [Integer1(j) for j in range(10)]\n# display our vector\nprint(v)\nExplanation: Namespaces have similarities to modules, so we could have imported the class as well.\nBound C++ classes are first-class Python objects. We can instantiate them, use normal Python introspection tools, call help(), they raise Python exceptions on failure, manage memory through Python's ref-counting and garbage collection, etc., etc. Furthermore, we can use them in conjunction with other C++ classes.\nEnd of explanation\n# add a custom conversion for printing\nInteger1.__repr__ = lambda self: repr(self.m_data)\n# now try again (note the conversion of the vector to a Python list)\nprint(list(v))\nExplanation: Hum, that doesn't look very pretty. However, since Integer1 is now a Python class we can decorate it with a custom __repr__ function (we'll punt on the vector and instead convert it to a Python list for printing).\nEnd of explanation\n# create an Integer2 class, living in namespace Math\ncppyy.cppdef(\"\"\"\nnamespace Math {\n class Integer2 : public Integer1 {\n public:\n using Integer1::Integer1;\n operator int() { return m_data; }\n };\n}\"\"\")\n# prepare a pythonizor\ndef pythonizor(klass, name):\n # A pythonizor receives the freshly prepared bound C++ class, and a name stripped of\n # the namespace in which the pythonizor is applied. 
Also accessible are klass.__name__ (for the\n # Python name) and klass.__cpp_name__ (for the C++ name)\n if name == 'Integer2':\n klass.__repr__ = lambda self: repr(self.m_data)\n# install the pythonizor as a callback on namespace 'Math' (default is the global namespace)\ncppyy.py.add_pythonization(pythonizor, 'Math')\n# when we next get the Integer2 class, it will have been decorated\nInteger2 = cppyy.gbl.Math.Integer2 # first time a new namespace is used, it can not be imported from\nv2 = vector[Integer2]()\nv2 += [Integer2(j) for j in range(10)]\n# now test the effect of the pythonizor:\nprint(list(v2))\n# in addition, Integer2 has a conversion function, which is automatically recognized and pythonized\ni2 = Integer2(13)\nprint(\"Converted Integer2 variable:\", int(i2))\n# continue the decoration on the C++ side, by adding an operator+ overload\ncppyy.cppdef(\nnamespace Math {\n Integer2 operator+(const Integer2& left, const Integer1& right) {\n return left.m_data + right.m_data;\n }\n})\n# now use that fresh decoration (it will be located and bound on use):\nk = i2 + i\nprint(k, i2.m_data + i.m_data)\nExplanation: Pythonizations\nAs we have seen so far, automatic bindings are simple and easy to use. However, even though they are first-class Python objects, they do have some rough C++ edges left. There is some pythonization going on in the background: the vector, for example, played nice with += and the list conversion. But for presenting your own classes to end-users, specific pythonizations are desirable. 
To have this work correctly with lazy binding, a callback-based API exists.\nNow, it's too late for Integer1, so let's create Integer2, which lives in a namespace and in addition has a conversion feature.\nEnd of explanation\n# create some animals to play with\ncppyy.cppdef(\nnamespace Zoo {\n enum EAnimal { eLion, eMouse };\n \n class Animal {\n public:\n virtual ~Animal() {}\n virtual std::string make_sound() = 0;\n };\n \n class Lion : public Animal {\n public:\n virtual std::string make_sound() { return s_lion_sound; }\n static std::string s_lion_sound;\n };\n std::string Lion::s_lion_sound = \"growl!\";\n class Mouse : public Animal {\n public:\n virtual std::string make_sound() { return \"peep!\"; }\n };\n Animal* release_animal(EAnimal animal) {\n if (animal == eLion) return new Lion{};\n if (animal == eMouse) return new Mouse{};\n return nullptr;\n }\n std::string identify_animal(Lion*) {\n return \"the animal is a lion\";\n }\n std::string identify_animal(Mouse*) {\n return \"the animal is a mouse\";\n }\n}\n)\n# pull in the Zoo (after which we can import from it)\nZoo = cppyy.gbl.Zoo\n# pythonize the animal release function to take ownership on return\nZoo.release_animal.__creates__ = True\n# abstract base classes can not be instantiated:\ntry:\n animal = Zoo.Animal()\nexcept TypeError as e:\n print('Failed:', e, '\\n')\n# derived classes can be inspected in the same class hierarchy on the Python side\nprint('A Lion is an Animal?', issubclass(Zoo.Lion, Zoo.Animal) and 'Yes!' or 'No!', '\\n')\n# returned pointer types are auto-casted to the lowest known derived type:\nmouse = Zoo.release_animal(Zoo.eMouse)\nprint('Type of mouse:', type(mouse))\nlion = Zoo.release_animal(Zoo.eLion)\nprint('Type of lion:', type(lion), '\\n')\n# as pythonized, the ownership of the return value from release_animal is Python's\nprint(\"Does Python own the 'lion'?\", lion.__python_owns__ and 'Yes!' 
or 'No!')\nprint(\"Does Python own the 'mouse'?\", mouse.__python_owns__ and 'Yes!' or 'No!', '\\n')\n# virtual functions work as expected:\nprint('The mouse says:', mouse.make_sound())\nprint('The lion says:', lion.make_sound(), '\\n')\n# now change what the lion says through its static (class) variable\nZoo.Lion.s_lion_sound = \"mooh!\"\nprint('The lion says:', lion.make_sound(), '\\n')\n# overloads are combined into a single function on the Python side and resolved dynamically\nprint(\"Identification of \\'mouse\\':\", Zoo.identify_animal(mouse))\nprint(\"Identification of \\'lion\\':\", Zoo.identify_animal(lion))\nExplanation: Class Hierarchies\nBoth Python and C++ support multiple programming paradigms, making it relatively straightforward to map language features (e.g. class inheritance, free functions, etc.); many other features can be cleanly hidden, merely because the syntax is very similar or otherwise natural (e.g. overloading, abstract classes, static data members, etc.); and yet others map gracefully because their semantic intent is expressed clearly in the syntax (e.g. 
smart pointers, STL, etc.).\nThe following presents a range of C++ features that map naturally, and exercises them in Python.\nEnd of explanation\ncppyy.cppdef(\"\"\"\nnamespace Zoo {\n std::shared_ptr<Lion> free_lion{new Lion{}};\n std::string identify_animal_smart(std::shared_ptr<Lion>& smart) {\n return \"the animal is a lion\";\n }\n}\n\"\"\")\n# shared pointers are presented transparently as the wrapped type\nprint(\"Type of the 'free_lion' global:\", type(Zoo.free_lion).__name__)\n# if need be, the smart pointer is accessible with a helper\nsmart_lion = Zoo.free_lion.__smartptr__()\nprint(\"Type of the 'free_lion' smart ptr:\", type(smart_lion).__name__)\n# pass through functions that expect a naked pointer or smart pointer\nprint(\"Dumb passing: \", Zoo.identify_animal(Zoo.free_lion))\nprint(\"Smart passing:\", Zoo.identify_animal_smart(Zoo.free_lion))\nExplanation: Modern C++\nAs C++ matures, more and more semantic intent (such as object ownership) is expressed in the syntax. This is not for the benefit of bindings generators, but for the poor programmer having to read the code. Still, a bindings generator benefits greatly from this increased expression.\nEnd of explanation\nBoosting a decision stump\nThe goal of this notebook is to implement your own boosting module.\nBrace yourselves! 
This is going to be a fun and challenging assignment.\nUse SFrames to do some feature engineering.\nModify the decision trees to incorporate weights.\nImplement Adaboost ensembling.\nUse your implementation of Adaboost to train a boosted decision stump ensemble.\nEvaluate the effect of boosting (adding more decision stumps) on performance of the model.\nExplore the robustness of Adaboost to overfitting.\nLet's get started!\nFire up GraphLab Create\nMake sure you have the latest version of GraphLab Create (1.8.3 or newer). Upgrade by\npip install graphlab-create --upgrade\nSee this page for detailed instructions on upgrading.\nStep1: Getting the data ready\nWe will be using the same LendingClub dataset as in the previous assignment.\nStep2: Extracting the target and the feature columns\nWe will now repeat some of the feature processing steps that we saw in the previous assignment\nStep3: Subsample dataset to make sure classes are balanced\nJust as we did in the previous assignment, we will undersample the larger class (safe loans) in order to balance out our dataset. This means we are throwing away many data points. We use seed=1 so everyone gets the same results.\nStep4: Note\nStep5: Let's see what the feature columns look like now\nStep6: Train-test split\nWe split the data into training and test sets with 80% of the data in the training set and 20% of the data in the test set. We use seed=1 so that everyone gets the same result.\nStep7: Weighted decision trees\nLet's modify our decision tree code from Module 5 to support weighting of individual data points.\nWeighted error definition\nConsider a model with $N$ data points with\nStep8: Checkpoint\nStep9: Recall that the classification error is defined as follows\nStep10: Checkpoint\nStep11: Note. If you get an exception in the line of \"the logical filter has different size than the array\", try upgradting your GraphLab Create installation to 1.8.3 or newer.\nVery Optional. 
Relationship between weighted error and weight of mistakes\nBy definition, the weighted error is the weight of mistakes divided by the weight of all data points, so\n$$\n\\mathrm{E}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}}) = \\frac{\\sum_{i=1}^{n} \\alpha_i \\times 1[y_i \\neq \\hat{y_i}]}{\\sum_{i=1}^{n} \\alpha_i} = \\frac{\\mathrm{WM}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}})}{\\sum_{i=1}^{n} \\alpha_i}.\n$$\nIn the code above, we obtain $\\mathrm{E}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}})$ from the two weights of mistakes from both sides, $\\mathrm{WM}(\\mathbf{\\alpha}{\\mathrm{left}}, \\mathbf{\\hat{y}}{\\mathrm{left}})$ and $\\mathrm{WM}(\\mathbf{\\alpha}{\\mathrm{right}}, \\mathbf{\\hat{y}}{\\mathrm{right}})$. First, notice that the overall weight of mistakes $\\mathrm{WM}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}})$ can be broken into two weights of mistakes over either side of the split\nStep12: We provide a function that learns a weighted decision tree recursively and implements 3 stopping conditions\nStep13: Here is a recursive function to count the nodes in your tree\nStep14: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.\nStep15: Let us take a quick look at what the trained tree is like. You should get something that looks like the following\n{'is_leaf'\nStep16: Making predictions with a weighted decision tree\nWe give you a function that classifies one data point. It can also return the probability if you want to play around with that as well.\nStep17: Evaluating the tree\nNow, we will write a function to evaluate a decision tree by computing the classification error of the tree on the given dataset.\nAgain, recall that the classification error is defined as follows\nStep18: Example\nStep19: Now, we will compute the classification error on the subset_20, i.e. 
the subset of data points whose weight is 1 (namely the first and last 10 data points).\nStep20: Now, let us compare the classification error of the model small_data_decision_tree_subset_20 on the entire test set train_data\nStep21: The model small_data_decision_tree_subset_20 performs a lot better on subset_20 than on train_data.\nSo, what does this mean?\n* The points with higher weights are the ones that are more important during the training process of the weighted decision tree.\n* The points with zero weights are basically ignored during training.\nQuiz Question\nStep22: Checking your Adaboost code\nTrain an ensemble of two tree stumps and see which features those stumps split on. We will run the algorithm with the following parameters\nStep23: Here is what the first stump looks like\nStep24: Here is what the next stump looks like\nStep25: If your Adaboost is correctly implemented, the following things should be true\nStep26: Making predictions\nRecall from the lecture that in order to make predictions, we use the following formula\nStep27: Now, let us take a quick look what the stump_weights look like at the end of each iteration of the 10-stump ensemble\nStep28: Quiz Question\nStep29: Computing training error at the end of each iteration\nNow, we will compute the classification error on the train_data and see how it is reduced as trees are added.\nStep30: Visualizing training error vs number of iterations\nWe have provided you with a simple code snippet that plots classification error with the number of iterations.\nStep31: Quiz Question\nStep32: Visualize both the training and test errors\nNow, let us plot the training & test error with the number of iterations."},"code_prompt":{"kind":"string","value":"Python Code:\nimport graphlab\nimport matplotlib.pyplot as plt\n%matplotlib inline\nExplanation: Boosting a decision stump\nThe goal of this notebook is to implement your own boosting module.\nBrace yourselves! 
This is going to be a fun and challenging assignment.\nUse SFrames to do some feature engineering.\nModify the decision trees to incorporate weights.\nImplement Adaboost ensembling.\nUse your implementation of Adaboost to train a boosted decision stump ensemble.\nEvaluate the effect of boosting (adding more decision stumps) on performance of the model.\nExplore the robustness of Adaboost to overfitting.\nLet's get started!\nFire up GraphLab Create\nMake sure you have the latest version of GraphLab Create (1.8.3 or newer). Upgrade by\npip install graphlab-create --upgrade\nSee this page for detailed instructions on upgrading.\nEnd of explanation\nloans = graphlab.SFrame('lending-club-data.gl/')\nExplanation: Getting the data ready\nWe will be using the same LendingClub dataset as in the previous assignment.\nEnd of explanation\nfeatures = ['grade', # grade of the loan\n 'term', # the term of the loan\n 'home_ownership', # home ownership status: own, mortgage or rent\n 'emp_length', # number of years of employment\n ]\nloans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)\nloans.remove_column('bad_loans')\ntarget = 'safe_loans'\nloans = loans[features + [target]]\nExplanation: Extracting the target and the feature columns\nWe will now repeat some of the feature processing steps that we saw in the previous assignment:\nFirst, we re-assign the target to have +1 as a safe (good) loan, and -1 as a risky (bad) loan.\nNext, we select four categorical features: \n1. grade of the loan \n2. the length of the loan term\n3. the home ownership status: own, mortgage, rent\n4. 
number of years of employment.\nEnd of explanation\nsafe_loans_raw = loans[loans[target] == 1]\nrisky_loans_raw = loans[loans[target] == -1]\n# Undersample the safe loans.\npercentage = len(risky_loans_raw)/float(len(safe_loans_raw))\nrisky_loans = risky_loans_raw\nsafe_loans = safe_loans_raw.sample(percentage, seed=1)\nloans_data = risky_loans_raw.append(safe_loans)\nprint \"Percentage of safe loans :\", len(safe_loans) / float(len(loans_data))\nprint \"Percentage of risky loans :\", len(risky_loans) / float(len(loans_data))\nprint \"Total number of loans in our new dataset :\", len(loans_data)\nExplanation: Subsample dataset to make sure classes are balanced\nJust as we did in the previous assignment, we will undersample the larger class (safe loans) in order to balance out our dataset. This means we are throwing away many data points. We use seed=1 so everyone gets the same results.\nEnd of explanation\nloans_data = risky_loans.append(safe_loans)\nfor feature in features:\n loans_data_one_hot_encoded = loans_data[feature].apply(lambda x: {x: 1}) \n loans_data_unpacked = loans_data_one_hot_encoded.unpack(column_name_prefix=feature)\n \n # Change None's to 0's\n for column in loans_data_unpacked.column_names():\n loans_data_unpacked[column] = loans_data_unpacked[column].fillna(0)\n loans_data.remove_column(feature)\n loans_data.add_columns(loans_data_unpacked)\nExplanation: Note: There are many approaches for dealing with imbalanced data, including some where we modify the learning algorithm. These approaches are beyond the scope of this course, but some of them are reviewed in this paper. For this assignment, we use the simplest possible approach, where we subsample the overly represented class to get a more balanced dataset. In general, and especially when the data is highly imbalanced, we recommend using more advanced methods.\nTransform categorical data into binary features\nIn this assignment, we will work with binary decision trees. 
Since all of our features are currently categorical features, we want to turn them into binary features using 1-hot encoding. \nWe can do so with the following code block (see the first assignments for more details):\nEnd of explanation\nfeatures = loans_data.column_names()\nfeatures.remove('safe_loans') # Remove the response variable\nfeatures\nExplanation: Let's see what the feature columns look like now:\nEnd of explanation\ntrain_data, test_data = loans_data.random_split(0.8, seed=1)\nExplanation: Train-test split\nWe split the data into training and test sets with 80% of the data in the training set and 20% of the data in the test set. We use seed=1 so that everyone gets the same result.\nEnd of explanation\ndef intermediate_node_weighted_mistakes(labels_in_node, data_weights):\n # Sum the weights of all entries with label +1\n total_weight_positive = sum(data_weights[labels_in_node == +1])\n \n # Weight of mistakes for predicting all -1's is equal to the sum above\n ### YOUR CODE HERE\n weighted_mistakes_all_negative = total_weight_positive\n \n # Sum the weights of all entries with label -1\n ### YOUR CODE HERE\n total_weight_negative = sum(data_weights[labels_in_node == -1])\n \n # Weight of mistakes for predicting all +1's is equal to the sum above\n ### YOUR CODE HERE\n weighted_mistakes_all_positive = total_weight_negative\n \n # Return the tuple (weight, class_label) representing the lower of the two weights\n # class_label should be an integer of value +1 or -1.\n # If the two weights are identical, return (weighted_mistakes_all_positive,+1)\n ### YOUR CODE HERE\n if weighted_mistakes_all_positive <= weighted_mistakes_all_negative:\n return weighted_mistakes_all_positive, +1\n else:\n return weighted_mistakes_all_negative, -1\nExplanation: Weighted decision trees\nLet's modify our decision tree code from Module 5 to support weighting of individual data points.\nWeighted error definition\nConsider a model with $N$ data points with:\n* Predictions 
$\\hat{y}_1 ... \\hat{y}_n$ \n* Target $y_1 ... y_n$ \n* Data point weights $\\alpha_1 ... \\alpha_n$.\nThen the weighted error is defined by:\n$$\n\\mathrm{E}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}}) = \\frac{\\sum_{i=1}^{n} \\alpha_i \\times 1[y_i \\neq \\hat{y_i}]}{\\sum_{i=1}^{n} \\alpha_i}\n$$\nwhere $1[y_i \\neq \\hat{y_i}]$ is an indicator function that is set to $1$ if $y_i \\neq \\hat{y_i}$.\nWrite a function to compute weight of mistakes\nWrite a function that calculates the weight of mistakes for making the \"weighted-majority\" predictions for a dataset. The function accepts two inputs:\n* labels_in_node: Targets $y_1 ... y_n$ \n* data_weights: Data point weights $\\alpha_1 ... \\alpha_n$\nWe are interested in computing the (total) weight of mistakes, i.e.\n$$\n\\mathrm{WM}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}}) = \\sum_{i=1}^{n} \\alpha_i \\times 1[y_i \\neq \\hat{y_i}].\n$$\nThis quantity is analogous to the number of mistakes, except that each mistake now carries different weight. It is related to the weighted error in the following way:\n$$\n\\mathrm{E}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}}) = \\frac{\\mathrm{WM}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}})}{\\sum_{i=1}^{n} \\alpha_i}\n$$\nThe function intermediate_node_weighted_mistakes should first compute two weights: \n * $\\mathrm{WM}_{-1}$: weight of mistakes when all predictions are $\\hat{y}_i = -1$, i.e. $\\mathrm{WM}(\\mathbf{\\alpha}, \\mathbf{-1})$\n * $\\mathrm{WM}_{+1}$: weight of mistakes when all predictions are $\\hat{y}_i = +1$, i.e. $\\mathrm{WM}(\\mathbf{\\alpha}, \\mathbf{+1})$\nwhere $\\mathbf{-1}$ and $\\mathbf{+1}$ are vectors where all values are -1 and +1 respectively.\nAfter computing $\\mathrm{WM}_{-1}$ and $\\mathrm{WM}_{+1}$, the function intermediate_node_weighted_mistakes should return the lower of the two weights of mistakes, along with the class associated with that weight. 
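As a concrete illustration of this rule, here is a plain-Python sketch using ordinary lists in place of SArrays (this is not the graded implementation, just the same logic in miniature):

```python
# Plain-Python sketch of the weighted-mistakes rule (lists instead of SArrays).
def weighted_mistakes(labels, weights):
    # WM_{-1}: predicting all -1 errs on the +1 points, so sum their weights.
    wm_all_negative = sum(w for y, w in zip(labels, weights) if y == +1)
    # WM_{+1}: predicting all +1 errs on the -1 points.
    wm_all_positive = sum(w for y, w in zip(labels, weights) if y == -1)
    # Return the lower weight of mistakes with its class; ties go to +1.
    if wm_all_positive <= wm_all_negative:
        return (wm_all_positive, +1)
    return (wm_all_negative, -1)

# With labels [-1, -1, 1, 1, 1] and weights [1., 2., .5, 1., 1.]:
# predicting all -1 costs 2.5, predicting all +1 costs 3.0 -> (2.5, -1)
print(weighted_mistakes([-1, -1, 1, 1, 1], [1., 2., .5, 1., 1.]))
```

Note the tie-breaking choice: when both weights are equal, the sketch returns the +1 class, matching the specification above.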
We have provided a skeleton for you with YOUR CODE HERE to be filled in several places.\nEnd of explanation\nexample_labels = graphlab.SArray([-1, -1, 1, 1, 1])\nexample_data_weights = graphlab.SArray([1., 2., .5, 1., 1.])\nif intermediate_node_weighted_mistakes(example_labels, example_data_weights) == (2.5, -1):\n print 'Test passed!'\nelse:\n print 'Test failed... try again!'\nExplanation: Checkpoint: Test your intermediate_node_weighted_mistakes function, run the following cell:\nEnd of explanation\n# If the data is identical in each feature, this function should return None\ndef best_splitting_feature(data, features, target, data_weights):\n print data_weights\n \n # These variables will keep track of the best feature and the corresponding error\n best_feature = None\n best_error = float('+inf') \n num_points = float(len(data))\n # Loop through each feature to consider splitting on that feature\n for feature in features:\n \n # The left split will have all data points where the feature value is 0\n # The right split will have all data points where the feature value is 1\n left_split = data[data[feature] == 0]\n right_split = data[data[feature] == 1]\n \n # Apply the same filtering to data_weights to create left_data_weights, right_data_weights\n ## YOUR CODE HERE\n left_data_weights = data_weights[data[feature] == 0]\n right_data_weights = data_weights[data[feature] == 1]\n \n # DIFFERENT HERE\n # Calculate the weight of mistakes for left and right sides\n ## YOUR CODE HERE\n left_weighted_mistakes, left_class = intermediate_node_weighted_mistakes(left_split[target], left_data_weights)\n right_weighted_mistakes, right_class = intermediate_node_weighted_mistakes(right_split[target], right_data_weights)\n \n # DIFFERENT HERE\n # Compute weighted error by computing\n # ( [weight of mistakes (left)] + [weight of mistakes (right)] ) / [total weight of all data points]\n ## YOUR CODE HERE\n error = (left_weighted_mistakes + right_weighted_mistakes) / 
(sum(left_data_weights) + sum(right_data_weights))\n \n # If this is the best error we have found so far, store the feature and the error\n if error < best_error:\n best_feature = feature\n best_error = error\n \n # Return the best feature we found\n return best_feature\nExplanation: Recall that the classification error is defined as follows:\n$$\n\\mbox{classification error} = \\frac{\\mbox{# mistakes}}{\\mbox{# all data points}}\n$$\nQuiz Question: If we set the weights $\\mathbf{\\alpha} = 1$ for all data points, how is the weight of mistakes $\\mbox{WM}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}})$ related to the classification error?\nFunction to pick best feature to split on\nWe continue modifying our decision tree code from the earlier assignment to incorporate weighting of individual data points. The next step is to pick the best feature to split on.\nThe best_splitting_feature function is similar to the one from the earlier assignment with two minor modifications:\n 1. The function best_splitting_feature should now accept an extra parameter data_weights to take account of weights of data points.\n 2. Instead of computing the number of mistakes in the left and right side of the split, we compute the weight of mistakes for both sides, add up the two weights, and divide it by the total weight of the data.\nComplete the following function. Comments starting with DIFFERENT HERE mark the sections where the weighted version differs from the original implementation.\nEnd of explanation\nexample_data_weights = graphlab.SArray(len(train_data)* [1.5])\nif best_splitting_feature(train_data, features, target, example_data_weights) == 'term. 36 months':\n print 'Test passed!'\nelse:\n print 'Test failed... 
try again!'\nExplanation: Checkpoint: Now, we have another checkpoint to make sure you are on the right track.\nEnd of explanation\ndef create_leaf(target_values, data_weights):\n \n # Create a leaf node\n leaf = {'splitting_feature' : None,\n 'is_leaf': True}\n \n # Compute the weight of mistakes.\n weighted_error, best_class = intermediate_node_weighted_mistakes(target_values, data_weights)\n # Store the predicted class (1 or -1) in leaf['prediction']\n leaf['prediction'] = best_class ## YOUR CODE HERE\n \n return leaf \nExplanation: Note. If you get an exception in the line of \"the logical filter has different size than the array\", try upgrading your GraphLab Create installation to 1.8.3 or newer.\nVery Optional. Relationship between weighted error and weight of mistakes\nBy definition, the weighted error is the weight of mistakes divided by the weight of all data points, so\n$$\n\\mathrm{E}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}}) = \\frac{\\sum_{i=1}^{n} \\alpha_i \\times 1[y_i \\neq \\hat{y_i}]}{\\sum_{i=1}^{n} \\alpha_i} = \\frac{\\mathrm{WM}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}})}{\\sum_{i=1}^{n} \\alpha_i}.\n$$\nIn the code above, we obtain $\\mathrm{E}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}})$ from the two weights of mistakes from both sides, $\\mathrm{WM}(\\mathbf{\\alpha}_{\\mathrm{left}}, \\mathbf{\\hat{y}}_{\\mathrm{left}})$ and $\\mathrm{WM}(\\mathbf{\\alpha}_{\\mathrm{right}}, \\mathbf{\\hat{y}}_{\\mathrm{right}})$. 
First, notice that the overall weight of mistakes $\\mathrm{WM}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}})$ can be broken into two weights of mistakes over either side of the split:\n$$\n\\mathrm{WM}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}})\n= \\sum_{i=1}^{n} \\alpha_i \\times 1[y_i \\neq \\hat{y_i}]\n= \\sum_{\\mathrm{left}} \\alpha_i \\times 1[y_i \\neq \\hat{y_i}]\n + \\sum_{\\mathrm{right}} \\alpha_i \\times 1[y_i \\neq \\hat{y_i}]\n= \\mathrm{WM}(\\mathbf{\\alpha}_{\\mathrm{left}}, \\mathbf{\\hat{y}}_{\\mathrm{left}}) + \\mathrm{WM}(\\mathbf{\\alpha}_{\\mathrm{right}}, \\mathbf{\\hat{y}}_{\\mathrm{right}})\n$$\nWe then divide through by the total weight of all data points to obtain $\\mathrm{E}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}})$:\n$$\n\\mathrm{E}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}})\n= \\frac{\\mathrm{WM}(\\mathbf{\\alpha}_{\\mathrm{left}}, \\mathbf{\\hat{y}}_{\\mathrm{left}}) + \\mathrm{WM}(\\mathbf{\\alpha}_{\\mathrm{right}}, \\mathbf{\\hat{y}}_{\\mathrm{right}})}{\\sum_{i=1}^{n} \\alpha_i}\n$$\nBuilding the tree\nWith the above functions implemented correctly, we are now ready to build our decision tree. 
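Before moving on, the decomposition above can be verified numerically with plain Python lists (the data here is made up purely for illustration):

```python
# Numeric sanity check: WM decomposes over a split, and E = WM / total weight.
def weight_of_mistakes(labels, predictions, weights):
    return sum(w for y, p, w in zip(labels, predictions, weights) if y != p)

labels  = [+1, -1, +1, -1, +1, +1]
preds   = [+1, +1, -1, -1, +1, -1]
weights = [0.5, 1.0, 2.0, 0.5, 1.0, 1.5]

# Pretend the first three points fall in the left child, the rest in the right.
wm_left  = weight_of_mistakes(labels[:3], preds[:3], weights[:3])
wm_right = weight_of_mistakes(labels[3:], preds[3:], weights[3:])
overall  = weight_of_mistakes(labels, preds, weights)

assert overall == wm_left + wm_right   # WM(alpha, y_hat) = WM_left + WM_right
error = overall / sum(weights)         # weighted error E(alpha, y_hat)
print(wm_left, wm_right, round(error, 4))  # -> 3.0 1.5 0.6923
```

This is exactly the quantity best_splitting_feature minimizes when it sums the left and right weights of mistakes and divides by the total weight.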
Recall from the previous assignments that each node in the decision tree is represented as a dictionary which contains the following keys:\n{ \n 'is_leaf' : True/False.\n 'prediction' : Prediction at the leaf node.\n 'left' : (dictionary corresponding to the left tree).\n 'right' : (dictionary corresponding to the right tree).\n 'features_remaining' : List of features that are posible splits.\n}\nLet us start with a function that creates a leaf node given a set of target values:\nEnd of explanation\ndef weighted_decision_tree_create(data, features, target, data_weights, current_depth = 1, max_depth = 10):\n remaining_features = features[:] # Make a copy of the features.\n target_values = data[target]\n print \"--------------------------------------------------------------------\"\n print \"Subtree, depth = %s (%s data points).\" % (current_depth, len(target_values))\n \n # Stopping condition 1. Error is 0.\n if intermediate_node_weighted_mistakes(target_values, data_weights)[0] <= 1e-15:\n print \"Stopping condition 1 reached.\" \n return create_leaf(target_values, data_weights)\n \n # Stopping condition 2. No more features.\n if remaining_features == []:\n print \"Stopping condition 2 reached.\" \n return create_leaf(target_values, data_weights) \n \n # Additional stopping condition (limit tree depth)\n if current_depth > max_depth:\n print \"Reached maximum depth. Stopping for now.\"\n return create_leaf(target_values, data_weights)\n \n # If all the datapoints are the same, splitting_feature will be None. Create a leaf\n splitting_feature = best_splitting_feature(data, features, target, data_weights)\n remaining_features.remove(splitting_feature)\n \n left_split = data[data[splitting_feature] == 0]\n right_split = data[data[splitting_feature] == 1]\n \n left_data_weights = data_weights[data[splitting_feature] == 0]\n right_data_weights = data_weights[data[splitting_feature] == 1]\n \n print \"Split on feature %s. 
(%s, %s)\" % (\\\n splitting_feature, len(left_split), len(right_split))\n \n # Create a leaf node if the split is \"perfect\"\n if len(left_split) == len(data):\n print \"Creating leaf node.\"\n return create_leaf(left_split[target], data_weights)\n if len(right_split) == len(data):\n print \"Creating leaf node.\"\n return create_leaf(right_split[target], data_weights)\n \n # Repeat (recurse) on left and right subtrees\n left_tree = weighted_decision_tree_create(\n left_split, remaining_features, target, left_data_weights, current_depth + 1, max_depth)\n right_tree = weighted_decision_tree_create(\n right_split, remaining_features, target, right_data_weights, current_depth + 1, max_depth)\n \n return {'is_leaf' : False, \n 'prediction' : None,\n 'splitting_feature': splitting_feature,\n 'left' : left_tree, \n 'right' : right_tree}\nExplanation: We provide a function that learns a weighted decision tree recursively and implements 3 stopping conditions:\n1. All data points in a node are from the same class.\n2. No more features to split on.\n3. Stop growing the tree when the tree depth reaches max_depth.\nEnd of explanation\ndef count_nodes(tree):\n if tree['is_leaf']:\n return 1\n return 1 + count_nodes(tree['left']) + count_nodes(tree['right'])\nExplanation: Here is a recursive function to count the nodes in your tree:\nEnd of explanation\nexample_data_weights = graphlab.SArray([1.0 for i in range(len(train_data))])\nsmall_data_decision_tree = weighted_decision_tree_create(train_data, features, target,\n example_data_weights, max_depth=2)\nif count_nodes(small_data_decision_tree) == 7:\n print 'Test passed!'\nelse:\n print 'Test failed... try again!'\n print 'Number of nodes found:', count_nodes(small_data_decision_tree)\n print 'Number of nodes that should be there: 7' \nExplanation: Run the following test code to check your implementation. 
Make sure you get 'Test passed' before proceeding.\nEnd of explanation\nsmall_data_decision_tree\nExplanation: Let us take a quick look at what the trained tree is like. You should get something that looks like the following\n{'is_leaf': False,\n 'left': {'is_leaf': False,\n 'left': {'is_leaf': True, 'prediction': -1, 'splitting_feature': None},\n 'prediction': None,\n 'right': {'is_leaf': True, 'prediction': 1, 'splitting_feature': None},\n 'splitting_feature': 'grade.A'\n },\n 'prediction': None,\n 'right': {'is_leaf': False,\n 'left': {'is_leaf': True, 'prediction': 1, 'splitting_feature': None},\n 'prediction': None,\n 'right': {'is_leaf': True, 'prediction': -1, 'splitting_feature': None},\n 'splitting_feature': 'grade.D'\n },\n 'splitting_feature': 'term. 36 months'\n}\nEnd of explanation\ndef classify(tree, x, annotate = False): \n # If the node is a leaf node.\n if tree['is_leaf']:\n if annotate: \n print \"At leaf, predicting %s\" % tree['prediction']\n return tree['prediction'] \n else:\n # Split on feature.\n split_feature_value = x[tree['splitting_feature']]\n if annotate: \n print \"Split on %s = %s\" % (tree['splitting_feature'], split_feature_value)\n if split_feature_value == 0:\n return classify(tree['left'], x, annotate)\n else:\n return classify(tree['right'], x, annotate)\nExplanation: Making predictions with a weighted decision tree\nWe give you a function that classifies one data point. 
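To see the traversal logic on its own, the same recursion (with the annotate printing stripped out) can be run against a tiny hand-built stump using plain dictionaries; no SFrame is needed:

```python
def classify(tree, x):
    # Recurse left on feature value 0, right on 1, until a leaf is reached.
    if tree['is_leaf']:
        return tree['prediction']
    if x[tree['splitting_feature']] == 0:
        return classify(tree['left'], x)
    return classify(tree['right'], x)

# A depth-1 tree in the same dictionary format used above.
stump = {
    'is_leaf': False,
    'prediction': None,
    'splitting_feature': 'grade.A',
    'left':  {'is_leaf': True, 'prediction': -1, 'splitting_feature': None},
    'right': {'is_leaf': True, 'prediction': +1, 'splitting_feature': None},
}
```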
It can also return the probability if you want to play around with that as well.\nEnd of explanation\ndef evaluate_classification_error(tree, data):\n # Apply the classify(tree, x) to each row in your data\n prediction = data.apply(lambda x: classify(tree, x))\n \n # Once you've made the predictions, calculate the classification error\n return (prediction != data[target]).sum() / float(len(data))\nevaluate_classification_error(small_data_decision_tree, test_data)\nExplanation: Evaluating the tree\nNow, we will write a function to evaluate a decision tree by computing the classification error of the tree on the given dataset.\nAgain, recall that the classification error is defined as follows:\n$$\n\\mbox{classification error} = \\frac{\\mbox{# mistakes}}{\\mbox{# all data points}}\n$$\nThe function called evaluate_classification_error takes in as input:\n1. tree (as described above)\n2. data (an SFrame)\nThe function does not change because of adding data point weights.\nEnd of explanation\n# Assign weights\nexample_data_weights = graphlab.SArray([1.] * 10 + [0.]*(len(train_data) - 20) + [1.] * 10)\n# Train a weighted decision tree model.\nsmall_data_decision_tree_subset_20 = weighted_decision_tree_create(train_data, features, target,\n example_data_weights, max_depth=2)\nExplanation: Example: Training a weighted decision tree\nTo build intuition on how weighted data points affect the tree being built, consider the following:\nSuppose we only care about making good predictions for the first 10 and last 10 items in train_data, we assign weights:\n* 1 to the last 10 items \n* 1 to the first 10 items \n* and 0 to the rest. \nLet us fit a weighted decision tree with max_depth = 2.\nEnd of explanation\nsubset_20 = train_data.head(10).append(train_data.tail(10))\nevaluate_classification_error(small_data_decision_tree_subset_20, subset_20)\nExplanation: Now, we will compute the classification error on the subset_20, i.e. 
the subset of data points whose weight is 1 (namely the first and last 10 data points).\nEnd of explanation\nevaluate_classification_error(small_data_decision_tree_subset_20, train_data)\nExplanation: Now, let us compare the classification error of the model small_data_decision_tree_subset_20 on the entire test set train_data:\nEnd of explanation\nfrom math import log\nfrom math import exp\ndef adaboost_with_tree_stumps(data, features, target, num_tree_stumps):\n # start with unweighted data\n alpha = graphlab.SArray([1.]*len(data))\n weights = []\n tree_stumps = []\n target_values = data[target]\n \n for t in xrange(num_tree_stumps):\n print '====================================================='\n print 'Adaboost Iteration %d' % t\n print '=====================================================' \n # Learn a weighted decision tree stump. Use max_depth=1\n tree_stump = weighted_decision_tree_create(data, features, target, data_weights=alpha, max_depth=1)\n tree_stumps.append(tree_stump)\n \n # Make predictions\n predictions = data.apply(lambda x: classify(tree_stump, x))\n \n # Produce a Boolean array indicating whether\n # each data point was correctly classified\n is_correct = predictions == target_values\n is_wrong = predictions != target_values\n \n # Compute weighted error\n # YOUR CODE HERE\n weighted_error = sum(alpha * is_wrong) / sum(alpha)\n \n # Compute model coefficient using weighted error\n # YOUR CODE HERE\n weight = .5 * log((1 - weighted_error) / weighted_error)\n weights.append(weight)\n \n # Adjust weights on data point\n adjustment = is_correct.apply(lambda is_correct : exp(-weight) if is_correct else exp(weight))\n \n # Scale alpha by multiplying by adjustment \n # Then normalize data points weights\n ## YOUR CODE HERE \n alpha *= adjustment\n alpha /= sum(alpha)\n \n return weights, tree_stumps\nExplanation: The model small_data_decision_tree_subset_20 performs a lot better on subset_20 than on train_data.\nSo, what does this mean?\n* The 
points with higher weights are the ones that are more important during the training process of the weighted decision tree.\n* The points with zero weights are basically ignored during training.\nQuiz Question: Will you get the same model as small_data_decision_tree_subset_20 if you trained a decision tree with only the 20 data points with non-zero weights from the set of points in subset_20?\nImplementing your own Adaboost (on decision stumps)\nNow that we have a weighted decision tree working, it takes only a bit of work to implement Adaboost. For the sake of simplicity, let us stick with decision tree stumps by training trees with max_depth=1.\nRecall from the lecture the procedure for Adaboost:\n1. Start with unweighted data with $\\alpha_j = 1$\n2. For t = 1,...T:\n * Learn $f_t(x)$ with data weights $\\alpha_j$\n * Compute coefficient $\\hat{w}_t$:\n $$\\hat{w}_t = \\frac{1}{2}\\ln{\\left(\\frac{1- \\mbox{E}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}})}{\\mbox{E}(\\mathbf{\\alpha}, \\mathbf{\\hat{y}})}\\right)}$$\n * Re-compute weights $\\alpha_j$:\n $$\\alpha_j \\gets \\begin{cases}\n \\alpha_j \\exp{(-\\hat{w}_t)} & \\text{ if }f_t(x_j) = y_j\\\\\n \\alpha_j \\exp{(\\hat{w}_t)} & \\text{ if }f_t(x_j) \\neq y_j\n \\end{cases}$$\n * Normalize weights $\\alpha_j$:\n $$\\alpha_j \\gets \\frac{\\alpha_j}{\\sum_{i=1}^{N}{\\alpha_i}} $$\nComplete the skeleton for the following code to implement adaboost_with_tree_stumps. Fill in the places with YOUR CODE HERE.\nEnd of explanation\nstump_weights, tree_stumps = adaboost_with_tree_stumps(train_data, features, target, num_tree_stumps=2)\ndef print_stump(tree):\n split_name = tree['splitting_feature'] # split_name is something like 'term. 
36 months'\n if split_name is None:\n print \"(leaf, label: %s)\" % tree['prediction']\n return None\n split_feature, split_value = split_name.split('.')\n print ' root'\n print ' |---------------|----------------|'\n print ' | |'\n print ' | |'\n print ' | |'\n print ' [{0} == 0]{1}[{0} == 1] '.format(split_name, ' '*(27-len(split_name)))\n print ' | |'\n print ' | |'\n print ' | |'\n print ' (%s) (%s)' \\\n % (('leaf, label: ' + str(tree['left']['prediction']) if tree['left']['is_leaf'] else 'subtree'),\n ('leaf, label: ' + str(tree['right']['prediction']) if tree['right']['is_leaf'] else 'subtree'))\nExplanation: Checking your Adaboost code\nTrain an ensemble of two tree stumps and see which features those stumps split on. We will run the algorithm with the following parameters:\n* train_data\n* features\n* target\n* num_tree_stumps = 2\nEnd of explanation\nprint_stump(tree_stumps[0])\nExplanation: Here is what the first stump looks like:\nEnd of explanation\nprint_stump(tree_stumps[1])\nprint stump_weights\nExplanation: Here is what the next stump looks like:\nEnd of explanation\nstump_weights, tree_stumps = adaboost_with_tree_stumps(train_data, features, \n target, num_tree_stumps=10)\nExplanation: If your Adaboost is correctly implemented, the following things should be true:\ntree_stumps[0] should split on term. 
36 months with the prediction -1 on the left and +1 on the right.\ntree_stumps[1] should split on grade.A with the prediction -1 on the left and +1 on the right.\nWeights should be approximately [0.158, 0.177] \nReminders\n- Stump weights ($\\mathbf{\\hat{w}}$) and data point weights ($\\mathbf{\\alpha}$) are two different concepts.\n- Stump weights ($\\mathbf{\\hat{w}}$) tell you how important each stump is while making predictions with the entire boosted ensemble.\n- Data point weights ($\\mathbf{\\alpha}$) tell you how important each data point is while training a decision stump.\nTraining a boosted ensemble of 10 stumps\nLet us train an ensemble of 10 decision tree stumps with Adaboost. We run the adaboost_with_tree_stumps function with the following parameters:\n* train_data\n* features\n* target\n* num_tree_stumps = 10\nEnd of explanation\ndef predict_adaboost(stump_weights, tree_stumps, data):\n scores = graphlab.SArray([0.]*len(data))\n \n for i, tree_stump in enumerate(tree_stumps):\n predictions = data.apply(lambda x: classify(tree_stump, x))\n \n # Accumulate predictions on scores array\n # YOUR CODE HERE\n scores += stump_weights[i] * predictions\n \n return scores.apply(lambda score : +1 if score > 0 else -1)\npredictions = predict_adaboost(stump_weights, tree_stumps, test_data)\naccuracy = graphlab.evaluation.accuracy(test_data[target], predictions)\nprint 'Accuracy of 10-component ensemble = %s' % accuracy \nExplanation: Making predictions\nRecall from the lecture that in order to make predictions, we use the following formula:\n$$\n\\hat{y} = sign\\left(\\sum_{t=1}^T \\hat{w}_t f_t(x)\\right)\n$$\nWe need to do the following things:\n- Compute the predictions $f_t(x)$ using the $t$-th decision tree\n- Compute $\\hat{w}_t f_t(x)$ by multiplying the stump_weights with the predictions $f_t(x)$ from the decision trees\n- Sum the weighted predictions over each stump in the ensemble.\nComplete the following skeleton for making predictions:\nEnd of 
explanation\nstump_weights\nExplanation: Now, let us take a quick look at what the stump_weights look like at the end of each iteration of the 10-stump ensemble:\nEnd of explanation\n# this may take a while... \nstump_weights, tree_stumps = adaboost_with_tree_stumps(train_data, \n features, target, num_tree_stumps=30)\nExplanation: Quiz Question: Are the weights monotonically decreasing, monotonically increasing, or neither?\nReminder: Stump weights ($\\mathbf{\\hat{w}}$) tell you how important each stump is while making predictions with the entire boosted ensemble.\nPerformance plots\nIn this section, we will try to reproduce some of the performance plots discussed in the lecture.\nHow does accuracy change with adding stumps to the ensemble?\nWe will now train an ensemble with:\n* train_data\n* features\n* target\n* num_tree_stumps = 30\nOnce we are done with this, we will then do the following:\n* Compute the classification error at the end of each iteration.\n* Plot a curve of classification error vs iteration.\nFirst, let's train the model.\nEnd of explanation\nerror_all = []\nfor n in xrange(1, 31):\n predictions = predict_adaboost(stump_weights[:n], tree_stumps[:n], train_data)\n error = 1.0 - graphlab.evaluation.accuracy(train_data[target], predictions)\n error_all.append(error)\n print \"Iteration %s, training error = %s\" % (n, error_all[n-1])\nExplanation: Computing training error at the end of each iteration\nNow, we will compute the classification error on the train_data and see how it is reduced as trees are added.\nEnd of explanation\nplt.rcParams['figure.figsize'] = 7, 5\nplt.plot(range(1,31), error_all, '-', linewidth=4.0, label='Training error')\nplt.title('Performance of Adaboost ensemble')\nplt.xlabel('# of iterations')\nplt.ylabel('Classification error')\nplt.legend(loc='best', prop={'size':15})\nplt.rcParams.update({'font.size': 16})\nExplanation: Visualizing training error vs number of iterations\nWe have provided you with a simple code snippet 
that plots classification error with the number of iterations.\nEnd of explanation\ntest_error_all = []\nfor n in xrange(1, 31):\n predictions = predict_adaboost(stump_weights[:n], tree_stumps[:n], test_data)\n error = 1.0 - graphlab.evaluation.accuracy(test_data[target], predictions)\n test_error_all.append(error)\n print \"Iteration %s, test error = %s\" % (n, test_error_all[n-1])\nExplanation: Quiz Question: Which of the following best describes a general trend in accuracy as we add more and more components? Answer based on the 30 components learned so far.\nTraining error goes down monotonically, i.e. the training error reduces with each iteration but never increases.\nTraining error goes down in general, with some ups and downs in the middle.\nTraining error goes up in general, with some ups and downs in the middle.\nTraining error goes down in the beginning, achieves the best error, and then goes up sharply.\nNone of the above\nEvaluation on the test data\nPerforming well on the training data is cheating, so let's make sure it works on the test_data as well. 
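One more sanity check before evaluating on test data: the quiz above about the trend in stump weights comes down to the coefficient formula from the AdaBoost recap. A standalone look at how weighted error maps to stump weight (toy error values for illustration, not the assignment's actual numbers):

```python
from math import log

def stump_weight(weighted_error):
    # AdaBoost coefficient: large for accurate stumps, zero at chance level,
    # negative for stumps that are worse than a coin flip.
    return 0.5 * log((1 - weighted_error) / weighted_error)
```

Lower weighted error gives a larger stump weight, and a weighted error of exactly 0.5 gives a weight of 0.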
Here, we will compute the classification error on the test_data at the end of each iteration.\nEnd of explanation\nplt.rcParams['figure.figsize'] = 7, 5\nplt.plot(range(1,31), error_all, '-', linewidth=4.0, label='Training error')\nplt.plot(range(1,31), test_error_all, '-', linewidth=4.0, label='Test error')\nplt.title('Performance of Adaboost ensemble')\nplt.xlabel('# of iterations')\nplt.ylabel('Classification error')\nplt.rcParams.update({'font.size': 16})\nplt.legend(loc='best', prop={'size':15})\nplt.tight_layout()\nExplanation: Visualize both the training and test errors\nNow, let us plot the training & test error with the number of iterations.\nEnd of explanation"}}},{"rowIdx":2974,"cells":{"Unnamed: 0":{"kind":"number","value":2974,"string":"2,974"},"text_prompt":{"kind":"string","value":"Given the following text description, write Python code to implement the functionality described below step by step\nDescription:\n Creating a document similarity microservice for the Reuters-21578 dataset.\nFirst download the Reuters-21578 dataset in JSON format into the local folder\nStep1: Create a gensim LSI document similarity model\nStep2: Run accuracy tests\nRun a test over the document to compute average jaccard similarity to the 1-nearest neighbour for each document using the \"tags\" field of the meta data as the ground truth.\nStep3: Run a test again but use the Annoy approximate nearest neighbour index that would have been built. 
Should be much faster.\nStep4: Run single nearest neighbour query\nRun a nearest neighbour query on a single document and print the title and tag meta data\nStep5: Save recommender\nSave the recommender to the filesystem in reuters_recommender folder\nStep6: Start a microservice to serve the recommender"},"code_prompt":{"kind":"string","value":"Python Code:\nimport json\nimport codecs \nimport os\ndocs = []\nfor filename in os.listdir(\"reuters-21578-json/data/full\"):\n f = open(\"reuters-21578-json/data/full/\"+filename)\n js = json.load(f)\n for j in js:\n if 'topics' in j and 'body' in j:\n d = {}\n d[\"id\"] = j['id']\n d[\"text\"] = j['body'].replace(\"\\n\",\"\")\n d[\"title\"] = j['title']\n d[\"tags\"] = \",\".join(j['topics'])\n docs.append(d)\nprint \"loaded \",len(docs),\" documents\"\nExplanation: Creating a document similarity microservice for the Reuters-21578 dataset.\nFirst download the Reuters-21578 dataset in JSON format into the local folder:\nbash\ngit clone https://github.com/fergiemcdowall/reuters-21578-json\nThe first step will be to convert this into the default corpus format we use:\nEnd of explanation\nfrom seldon.text import DocumentSimilarity,DefaultJsonCorpus\nimport logging\nlogger = logging.getLogger()\nlogger.setLevel(logging.INFO)\ncorpus = DefaultJsonCorpus(docs)\nds = DocumentSimilarity(model_type='gensim_lsi')\nds.fit(corpus)\nprint \"done\"\nExplanation: Create a gensim LSI document similarity model\nEnd of explanation\nds.score()\nExplanation: Run accuracy tests\nRun a test over the document to compute average jaccard similarity to the 1-nearest neighbour for each document using the \"tags\" field of the meta data as the ground truth.\nEnd of explanation\nds.score(approx=True)\nExplanation: Run a test again but use the Annoy approximate nearest neighbour index that would have been built. 
Should be much faster.\nEnd of explanation\nquery_doc=6023\nprint \"Query doc: \",ds.get_meta(query_doc)['title'],\"Tagged:\",ds.get_meta(query_doc)['tags']\nneighbours = ds.nn(query_doc,k=5,translate_id=True,approx=True)\nprint neighbours\nfor (doc_id,_) in neighbours:\n j = ds.get_meta(doc_id)\n print \"Doc id\",doc_id,j['title'],\"Tagged:\",j['tags']\nExplanation: Run single nearest neighbour query\nRun a nearest neighbour query on a single document and print the title and tag meta data\nEnd of explanation\nimport seldon\nrw = seldon.Recommender_wrapper()\nrw.save_recommender(ds,\"reuters_recommender\")\nprint \"done\"\nExplanation: Save recommender\nSave the recommender to the filesystem in reuters_recommender folder\nEnd of explanation\nfrom seldon.microservice import Microservices\nm = Microservices()\napp = m.create_recommendation_microservice(\"reuters_recommender\")\napp.run(host=\"0.0.0.0\",port=5000,debug=False)\nExplanation: Start a microservice to serve the recommender\nEnd of explanation"}}},{"rowIdx":2975,"cells":{"Unnamed: 0":{"kind":"number","value":2975,"string":"2,975"},"text_prompt":{"kind":"string","value":"Given the following text description, write Python code to implement the functionality described below step by step\nDescription:\n Convolutional Autoencoder\nSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.\nStep1: Network Architecture\nThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. 
We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.\nHere our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.\nWhat's going on with the decoder\nOkay, so the decoder has these \"Upsample\" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called \"transpose convolution\" which is what you'll find with the TensorFlow API, with tf.nn.conv2d_transpose. \nHowever, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. 
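The resize step itself is nothing exotic: nearest-neighbor upsampling simply repeats each pixel. A framework-free NumPy sketch of what the resize does (an illustration of the idea, not the actual tf.image call):

```python
import numpy as np

def upsample_nearest(images, factor=2):
    # images has shape (batch, height, width, channels);
    # repeat each row, then each column, `factor` times.
    out = np.repeat(images, factor, axis=1)
    return np.repeat(out, factor, axis=2)

x = np.arange(4, dtype=np.float32).reshape(1, 2, 2, 1)
y = upsample_nearest(x)  # shape (1, 4, 4, 1), each pixel duplicated into a 2x2 block
```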
Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.\nExercise\nStep2: Training\nAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.\nStep3: Denoising\nAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.\nSince this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.\nExercise\nStep4: Checking out the performance\nHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is."},"code_prompt":{"kind":"string","value":"Python Code:\n%matplotlib inline\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data', validation_size=0)\nimg = mnist.train.images[2]\nplt.imshow(img.reshape((28, 28)), cmap='Greys_r')\nExplanation: Convolutional Autoencoder\nSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. 
Again, loading modules and the data.\nEnd of explanation\nlearning_rate = 0.001\ninputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')\ntargets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')\n### Encoder\nconv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)\n# Now 28x28x16\nmaxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')\n# Now 14x14x16\nconv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)\n# Now 14x14x8\nmaxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')\n# Now 7x7x8\nconv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)\n# Now 7x7x8\nencoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')\n# Now 4x4x8\n### Decoder\nupsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))\n# Now 7x7x8\nconv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)\n# Now 7x7x8\nupsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))\n# Now 14x14x8\nconv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)\n# Now 14x14x8\nupsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))\n# Now 28x28x8\nconv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)\n# Now 28x28x16\nlogits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)\n#Now 28x28x1\ndecoded = tf.nn.sigmoid(logits, name='decoded')\nloss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)\ncost = tf.reduce_mean(loss)\nopt = tf.train.AdamOptimizer(0.001).minimize(cost)\nExplanation: Network Architecture\nThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. 
For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.\nHere our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.\nWhat's going on with the decoder\nOkay, so the decoder has these \"Upsample\" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called \"transpose convolution\" which is what you'll find with the TensorFlow API, with tf.nn.conv2d_transpose. \nHowever, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. 
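As a quick sanity check on the sizes quoted above: a stride-2 pool with 'same' padding halves each spatial dimension rounding up, which is exactly how 7x7 becomes 4x4. Plain arithmetic, independent of TensorFlow:

```python
import math

def same_pool_out(size, stride=2):
    # 'same'-padded pooling: output size is ceil(input / stride).
    return math.ceil(size / stride)

sizes = [28]
for _ in range(3):
    sizes.append(same_pool_out(sizes[-1]))
# sizes traces the encoder's spatial dims: 28 -> 14 -> 7 -> 4
```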
Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.\nExercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by 2. Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor.\nEnd of explanation\nsess = tf.Session()\nepochs = 20\nbatch_size = 200\nsess.run(tf.global_variables_initializer())\nfor e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n imgs = batch[0].reshape((-1, 28, 28, 1))\n batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,\n targets_: imgs})\n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Training loss: {:.4f}\".format(batch_cost))\nfig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))\nin_imgs = mnist.test.images[:10]\nreconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})\nfor images, row in zip([in_imgs, reconstructed], axes):\n for img, ax in zip(images, row):\n ax.imshow(img.reshape((28, 28)), cmap='Greys_r')\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\nfig.tight_layout(pad=0.1)\nsess.close()\nExplanation: Training\nAs before, here we'll train the network. 
Instead of flattening the images though, we can pass them in as 28x28x1 arrays.\nEnd of explanation\nlearning_rate = 0.001\ninputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')\ntargets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')\n### Encoder\nconv1 = \n# Now 28x28x32\nmaxpool1 = \n# Now 14x14x32\nconv2 = \n# Now 14x14x32\nmaxpool2 = \n# Now 7x7x32\nconv3 = \n# Now 7x7x16\nencoded = \n# Now 4x4x16\n### Decoder\nupsample1 = \n# Now 7x7x16\nconv4 = \n# Now 7x7x16\nupsample2 = \n# Now 14x14x16\nconv5 = \n# Now 14x14x32\nupsample3 = \n# Now 28x28x32\nconv6 = \n# Now 28x28x32\nlogits = \n#Now 28x28x1\n# Pass logits through sigmoid to get reconstructed image\ndecoded =\n# Pass logits through sigmoid and calculate the cross-entropy loss\nloss = \n# Get cost and define the optimizer\ncost = tf.reduce_mean(loss)\nopt = tf.train.AdamOptimizer(learning_rate).minimize(cost)\nsess = tf.Session()\nepochs = 100\nbatch_size = 200\n# Sets how much noise we're adding to the MNIST images\nnoise_factor = 0.5\nsess.run(tf.global_variables_initializer())\nfor e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n # Get images from the batch\n imgs = batch[0].reshape((-1, 28, 28, 1))\n \n # Add random noise to the input images\n noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)\n # Clip the images to be between 0 and 1\n noisy_imgs = np.clip(noisy_imgs, 0., 1.)\n \n # Noisy images as inputs, original images as targets\n batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,\n targets_: imgs})\n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Training loss: {:.4f}\".format(batch_cost))\nExplanation: Denoising\nAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. 
We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.\nSince this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.\nExercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.\nEnd of explanation\nfig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))\nin_imgs = mnist.test.images[:10]\nnoisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)\nnoisy_imgs = np.clip(noisy_imgs, 0., 1.)\nreconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})\nfor images, row in zip([noisy_imgs, reconstructed], axes):\n for img, ax in zip(images, row):\n ax.imshow(img.reshape((28, 28)), cmap='Greys_r')\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\nfig.tight_layout(pad=0.1)\nExplanation: Checking out the performance\nHere I'm adding noise to the test images and passing them through the autoencoder. 
It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.\nEnd of explanation"}}},{"rowIdx":2976,"cells":{"Unnamed: 0":{"kind":"number","value":2976,"string":"2,976"},"text_prompt":{"kind":"string","value":"Given the following text description, write Python code to implement the functionality described below step by step\nDescription:\n Copyright 2019 The TensorFlow Authors.\nStep1: Text classification with TensorFlow Lite Model Maker\n
\nThe TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.\nThis notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used text classification model to classify movie reviews on a mobile device. The text classification model classifies text into predefined categories. The inputs should be preprocessed text and the outputs are the probabilities of the categories. The dataset used in this tutorial consists of positive and negative movie reviews.\nPrerequisites\nInstall the required packages\nTo run this example, install the required packages, including the Model Maker package from the GitHub repo.\nEnd of explanation\nimport numpy as np\nimport os\nfrom tflite_model_maker import model_spec\nfrom tflite_model_maker import text_classifier\nfrom tflite_model_maker.config import ExportFormat\nfrom tflite_model_maker.text_classifier import AverageWordVecSpec\nfrom tflite_model_maker.text_classifier import DataLoader\nimport tensorflow as tf\nassert tf.__version__.startswith('2')\ntf.get_logger().setLevel('ERROR')\nExplanation: Import the required packages.\nEnd of explanation\ndata_dir = tf.keras.utils.get_file(\n fname='SST-2.zip',\n origin='https://dl.fbaipublicfiles.com/glue/data/SST-2.zip',\n extract=True)\ndata_dir = os.path.join(os.path.dirname(data_dir), 'SST-2')\nExplanation: Download the sample training data.\nIn this tutorial, we will use the SST-2 (Stanford Sentiment Treebank) dataset, which is one of the tasks in the GLUE benchmark. It contains 67,349 movie reviews for training and 872 movie reviews for testing. The dataset has two classes: positive and negative movie reviews.\nEnd of explanation\nimport pandas as pd\ndef replace_label(original_file, new_file):\n # Load the original file to pandas. 
We need to specify the separator as\n # '\t' as the training data is stored in TSV format\n df = pd.read_csv(original_file, sep='\t')\n # Define how we want to change the label name\n label_map = {0: 'negative', 1: 'positive'}\n # Execute the label change\n df.replace({'label': label_map}, inplace=True)\n # Write the updated dataset to a new file\n df.to_csv(new_file)\n# Replace the label name for both the training and test dataset. Then write the\n# updated CSV dataset to the current folder.\nreplace_label(os.path.join(data_dir, 'train.tsv'), 'train.csv')\nreplace_label(os.path.join(data_dir, 'dev.tsv'), 'dev.csv')\nExplanation: The SST-2 dataset is stored in TSV format. The only difference between TSV and CSV is that TSV uses a tab \t character as its delimiter instead of a comma , in the CSV format.\nHere are the first 5 lines of the training dataset. label=0 means negative, label=1 means positive.\n| sentence | label |\n|----------|-------|\n| hide new secretions from the parental units | 0 |\n| contains no wit , only labored gags | 0 |\n| that loves its characters and communicates something rather beautiful about human nature | 1 |\n| remains utterly satisfied to remain the same throughout | 0 |\n| on the worst revenge-of-the-nerds clichés the filmmakers could dredge up | 0 |\nNext, we will load the dataset into a Pandas dataframe and change the current label names (0 and 1) to more human-readable ones (negative and positive) and use them for model training.\nEnd of explanation\nspec = model_spec.get('average_word_vec')\nExplanation: Quickstart\nThere are five steps to train a text classification model:\nStep 1. 
Choose a text classification model architecture.\nHere we use the average word embedding model architecture, which will produce a small and fast model with decent accuracy.\nEnd of explanation\ntrain_data = DataLoader.from_csv(\n filename='train.csv',\n text_column='sentence',\n label_column='label',\n model_spec=spec,\n is_training=True)\ntest_data = DataLoader.from_csv(\n filename='dev.csv',\n text_column='sentence',\n label_column='label',\n model_spec=spec,\n is_training=False)\nExplanation: Model Maker also supports other model architectures such as BERT. If you are interested in learning about other architectures, see the Choose a model architecture for Text Classifier section below.\nStep 2. Load the training and test data, then preprocess them according to a specific model_spec.\nModel Maker can take input data in the CSV format. We will load the training and test datasets with the human-readable label names that were created earlier.\nEach model architecture requires input data to be processed in a particular way. DataLoader reads the requirement from model_spec and automatically executes the necessary preprocessing.\nEnd of explanation\nmodel = text_classifier.create(train_data, model_spec=spec, epochs=10)\nExplanation: Step 3. Train the TensorFlow model with the training data.\nThe average word embedding model uses batch_size = 32 by default. Therefore you will see that it takes 2104 steps to go through the 67,349 sentences in the training dataset. We will train the model for 10 epochs, which means going through the training dataset 10 times.\nEnd of explanation\nloss, acc = model.evaluate(test_data)\nExplanation: Step 4. 
Evaluate the model with the test data.\nAfter training the text classification model using the sentences in the training dataset, we will use the remaining 872 sentences in the test dataset to evaluate how the model performs against new data it has never seen before.\nAs the default batch size is 32, it will take 28 steps to go through the 872 sentences in the test dataset.\nEnd of explanation\nmodel.export(export_dir='average_word_vec')\nExplanation: Step 5. Export as a TensorFlow Lite model.\nLet's export the text classification model that we have trained to the TensorFlow Lite format. We will specify the folder to export the model to.\nBy default, the float TFLite model is exported for the average word embedding model architecture.\nEnd of explanation\nmb_spec = model_spec.get('mobilebert_classifier')\nExplanation: You can download the TensorFlow Lite model file using the left sidebar of Colab. Go into the average_word_vec folder as we specified in the export_dir parameter above, right-click on the model.tflite file and choose Download to download it to your local computer.\nThis model can be integrated into an Android or an iOS app using the NLClassifier API of the TensorFlow Lite Task Library.\nSee the TFLite Text Classification sample app for more details on how the model is used in a working app.\nNote 1: Android Studio Model Binding does not support text classification yet, so please use the TensorFlow Lite Task Library.\nNote 2: There is a model.json file in the same folder as the TFLite model. It contains the JSON representation of the metadata bundled inside the TensorFlow Lite model. Model metadata helps the TFLite Task Library know what the model does and how to pre-process/post-process data for the model. 
You don't need to download the model.json file, as it is only for informational purposes and its content is already inside the TFLite file.\nNote 3: If you train a text classification model using the MobileBERT or BERT-Base architecture, you will need to use the BertNLClassifier API instead to integrate the trained model into a mobile app.\nThe following sections walk through the example step by step to show more details.\nChoose a model architecture for Text Classifier\nEach model_spec object represents a specific model for the text classifier. TensorFlow Lite Model Maker currently supports MobileBERT, averaging word embeddings and BERT-Base models.\n| Supported Model | Name of model_spec | Model Description | Model size |\n|---|---|---|---|\n| Averaging Word Embedding | 'average_word_vec' | Averaging text word embeddings with RELU activation. | <1MB |\n| MobileBERT | 'mobilebert_classifier' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device applications. | 25MB w/ quantization, 100MB w/o quantization |\n| BERT-Base | 'bert_classifier' | Standard BERT model that is widely used in NLP tasks. | 300MB |\nIn the quick start, we have used the average word embedding model. Let's switch to MobileBERT to train a model with higher accuracy.\nEnd of explanation\ntrain_data = DataLoader.from_csv(\n filename='train.csv',\n text_column='sentence',\n label_column='label',\n model_spec=mb_spec,\n is_training=True)\ntest_data = DataLoader.from_csv(\n filename='dev.csv',\n text_column='sentence',\n label_column='label',\n model_spec=mb_spec,\n is_training=False)\nExplanation: Load training data\nYou can upload your own dataset to work through this tutorial. 
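Your own data only needs the same two-column shape as the train.csv/dev.csv files built earlier. As a hedged sketch (the file name and review texts below are made up for illustration), writing such a file takes just the standard csv module:

```python
import csv

rows = [
    ('what a wonderful film', 'positive'),
    ('a dull and lifeless mess', 'negative'),
]

# The column names must match the text_column/label_column arguments
# passed to DataLoader.from_csv.
with open('my_reviews.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(('sentence', 'label'))
    writer.writerows(rows)
```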
Upload your dataset by using the left sidebar in Colab.\n\nIf you prefer not to upload your dataset to the cloud, you can also run the library locally by following the guide.\nTo keep it simple, we will reuse the SST-2 dataset downloaded earlier. Let's use the DataLoader.from_csv method to load the data.\nPlease note that since we have changed the model architecture, we will need to reload the training and test datasets to apply the new preprocessing logic.\nEnd of explanation\nmodel = text_classifier.create(train_data, model_spec=mb_spec, epochs=3)\nExplanation: The Model Maker library also supports the from_folder() method to load data. It assumes that the text data of the same class are in the same subdirectory and that the subfolder name is the class name. Each text file contains one movie review sample. The class_labels parameter is used to specify which subfolders to include.\nTrain a TensorFlow Model\nTrain a text classification model using the training data.\nNote: As MobileBERT is a complex model, each training epoch takes about 10 minutes on a Colab GPU. Please make sure that you are using a GPU runtime.\nEnd of explanation\nmodel.summary()\nExplanation: Examine the detailed model structure.\nEnd of explanation\nloss, acc = model.evaluate(test_data)\nExplanation: Evaluate the model\nEvaluate the model that we have just trained using the test data and measure the loss and accuracy value.\nEnd of explanation\nmodel.export(export_dir='mobilebert/')\nExplanation: Export as a TensorFlow Lite model\nConvert the trained model to the TensorFlow Lite model format with metadata so that you can later use it in an on-device ML application. The label file and the vocab file are embedded in metadata. The default TFLite filename is model.tflite.\nIn many on-device ML applications, the model size is an important factor. 
Therefore, it is recommended that you quantize the model to make it smaller and potentially run faster.\nThe default post-training quantization technique is dynamic range quantization for the BERT and MobileBERT models.\nEnd of explanation\nmodel.export(export_dir='mobilebert/', export_format=[ExportFormat.LABEL, ExportFormat.VOCAB])\nExplanation: The TensorFlow Lite model file can be integrated in a mobile app using the BertNLClassifier API in the TensorFlow Lite Task Library. Please note that this is different from the NLClassifier API used to integrate a text classification model trained with the average word vector model architecture.\nThe export formats can be one or a list of the following:\nExportFormat.TFLITE\nExportFormat.LABEL\nExportFormat.VOCAB\nExportFormat.SAVED_MODEL\nBy default, it exports only the TensorFlow Lite model file containing the model metadata. You can also choose to export other files related to the model for closer examination. For instance, exporting only the label file and vocab file as follows:\nEnd of explanation\naccuracy = model.evaluate_tflite('mobilebert/model.tflite', test_data)\nprint('TFLite model accuracy: ', accuracy)\nExplanation: You can evaluate the TFLite model with the evaluate_tflite method to measure its accuracy. Converting the trained TensorFlow model to the TFLite format and applying quantization can affect its accuracy, so it is recommended to evaluate the TFLite model accuracy before deployment.\nEnd of explanation\nnew_model_spec = model_spec.get('mobilebert_classifier')\nnew_model_spec.seq_len = 256\nExplanation: Advanced Usage\nThe create function is the driver function that the Model Maker library uses to create models. The model_spec parameter defines the model specification. The AverageWordVecSpec and BertClassifierSpec classes are currently supported. The create function comprises the following steps:\nCreates the model for the text classifier according to model_spec.\nTrains the classifier model. 
The default epochs and the default batch size are set by the default_training_epochs and default_batch_size variables in the model_spec object.\nThis section covers advanced usage topics like adjusting the model and the training hyperparameters.\nCustomize the MobileBERT model hyperparameters\nThe model parameters you can adjust are:\nseq_len: Length of the sequence to feed into the model.\ninitializer_range: The standard deviation of the truncated_normal_initializer for initializing all weight matrices.\ntrainable: Boolean that specifies whether the pre-trained layer is trainable.\nThe training pipeline parameters you can adjust are:\nmodel_dir: The location of the model checkpoint files. If not set, a temporary directory will be used.\ndropout_rate: The dropout rate.\nlearning_rate: The initial learning rate for the Adam optimizer.\ntpu: TPU address to connect to.\nFor instance, you can set the seq_len=256 (default is 128). This allows the model to classify longer text.\nEnd of explanation\nnew_model_spec = AverageWordVecSpec(wordvec_dim=32)\nExplanation: Customize the average word embedding model hyperparameters\nYou can adjust the model infrastructure like the wordvec_dim and the seq_len variables in the AverageWordVecSpec class.\nFor example, you can train the model with a larger value of wordvec_dim. 
Note that you must construct a new model_spec if you modify the model.\nEnd of explanation\nnew_train_data = DataLoader.from_csv(\n filename='train.csv',\n text_column='sentence',\n label_column='label',\n model_spec=new_model_spec,\n is_training=True)\nExplanation: Get the preprocessed data.\nEnd of explanation\nmodel = text_classifier.create(new_train_data, model_spec=new_model_spec)\nExplanation: Train the new model.\nEnd of explanation\nmodel = text_classifier.create(new_train_data, model_spec=new_model_spec, epochs=20)\nExplanation: Tune the training hyperparameters\nYou can also tune the training hyperparameters like epochs and batch_size that affect the model accuracy. For instance,\nepochs: more epochs could achieve better accuracy, but may lead to overfitting.\nbatch_size: the number of samples to use in one training step.\nFor example, you can train with more epochs.\nEnd of explanation\nnew_test_data = DataLoader.from_csv(\n filename='dev.csv',\n text_column='sentence',\n label_column='label',\n model_spec=new_model_spec,\n is_training=False)\nloss, accuracy = model.evaluate(new_test_data)\nExplanation: Evaluate the newly retrained model with 20 training epochs.\nEnd of explanation\nspec = model_spec.get('bert_classifier')\nExplanation: Change the Model Architecture\nYou can change the model by changing the model_spec. 
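The size trade-offs from the comparison table earlier can even be wrapped in a tiny helper for picking a spec name under a deployment size budget — purely an illustration, not part of the Model Maker API, with sizes approximated from that table:

```python
# Approximate on-device sizes in MB, taken from the comparison table above
# (the MobileBERT figure assumes quantization).
SPEC_SIZES_MB = {
    'average_word_vec': 1,
    'mobilebert_classifier': 25,
    'bert_classifier': 300,
}

def pick_spec(max_size_mb):
    """Return the largest spec (a rough proxy for accuracy) that fits the budget."""
    fitting = {name: size for name, size in SPEC_SIZES_MB.items() if size <= max_size_mb}
    if not fitting:
        raise ValueError('no supported spec fits within %s MB' % max_size_mb)
    return max(fitting, key=fitting.get)
```

With a 50 MB budget, model_spec.get(pick_spec(50)) would then select the MobileBERT spec.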
The following shows how to change to the BERT-Base model.\nChange the model_spec to the BERT-Base model for the text classifier.\nEnd of explanation\nGiven the following text description, write Python code to implement the functionality described below step by step\nDescription:\n Pandas Data Reading and Writing\nAPI\nRead | Write\n--- | ---\nread_csv | to_csv\nread_excel | to_excel\nread_hdf | to_hdf\nread_sql | to_sql\nread_json | to_json\nread_html | to_html\nread_stata | to_stata\nread_clipboard | to_clipboard\nread_pickle | to_pickle\nReading and writing CSV files\nContents of the CSV file:\nwhite,read,blue,green,animal\n1,5,2,3,cat\n2,7,8,5,dog\n3,3,6,7,horse\n2,2,8,3,duck\n4,4,2,1,mouse\nStep1: Reading data without a header\n1,5,2,3,cat\n2,7,8,5,dog\n3,3,6,7,horse\n2,2,8,3,duck\n4,4,2,1,mouse\nStep2: You can specify the header names\nStep3: To create a DataFrame with a hierarchical index, add the index_col option. Data file format:\ncolors,status,item1,item2,item3\nblack,up,3,4,6\nblack,down,2,6,7\nwhite,up,5,5,5\nwhite,down,3,3,2\nred,up,2,2,2\nred,down,1,1,4\nStep4: Parsing TXT files with regexps\nYou can specify sep as a regular expression to parse a data file.\nRegex element | Meaning\n--- | ---\n. 
| any character except a newline\n\d | digit\n\D | non-digit character\n\s | whitespace character\n\S | non-whitespace character\n\n | newline\n\t | tab\n\uxxxx | a Unicode character written in hexadecimal \nThe data file is separated by a random mix of tabs and spaces:\nwhite red blue green\n1 4 3 2\n2 4 6 7\nStep5: Reading data separated by letters\n000end123aaa122\n001end125aaa144\nStep6: Reading a text file while skipping unwanted lines\n```\nlog file\nthis file has been generate by automatic system\nwhite,red,blue,green,animal\n12-feb-2015:counting of animals inside the house\n1,3,5,2,cat\n2,4,8,5,dog\n13-feb-2015:counting of animals inside the house\n3,3,6,7,horse\n2,2,8,3,duck\n```\nStep7: Reading part of a TXT file\nTo read only part of a file, you can explicitly control which lines are parsed with the skiprows and nrows options: skip a given set of lines, then read at most nrows rows from the starting line (nrows=i)\nStep8: Example:\nFor a single column of data, read it in chunks of three rows, sum each chunk, and insert the partial sums into a Series object\nStep9: Writing to files\nto_csv(filename)\nto_csv(filename,index=False,header=False)\nto_csv(filename,na_rep='NaN')\nReading and writing HTML files\nWriting HTML files\nStep10: Creating a more complex DataFrame\nStep11: Reading tables from HTML\nStep12: Reading and writing XML files\nThis uses the third-party library lxml\nStep13: Reading and writing Excel files\nStep14: JSON data\nStep15: HDF5 data\nAn HDF (hierarchical data format) file is a binary file format for storing hierarchical data.\nStep16: pickle data\nStep17: Database connections\nIntroduced using sqlite3 as an example\nPython Code:\nimport numpy as np\nimport pandas as pd\ncsvframe=pd.read_csv('myCSV_01.csv')\ncsvframe\n# The data can also be read with read_table\npd.read_table('myCSV_01.csv',sep=',')\nExplanation: Pandas Data Reading and Writing\nAPI\nRead | Write\n--- | ---\nread_csv | to_csv\nread_excel | to_excel\nread_hdf | to_hdf\nread_sql | to_sql\nread_json | to_json\nread_html | to_html\nread_stata | to_stata\nread_clipboard | to_clipboard\nread_pickle | to_pickle\nReading and writing CSV files\nContents of the CSV file:\nwhite,read,blue,green,animal\n1,5,2,3,cat\n2,7,8,5,dog\n3,3,6,7,horse\n2,2,8,3,duck\n4,4,2,1,mouse\nEnd of explanation\npd.read_csv('myCSV_02.csv',header=None)\nExplanation: Reading data without a header\n1,5,2,3,cat\n2,7,8,5,dog\n3,3,6,7,horse\n2,2,8,3,duck\n4,4,2,1,mouse\nEnd of explanation\npd.read_csv('myCSV_02.csv',names=['white','red','blue','green','animal'])\nExplanation: You can specify the header names\nEnd of explanation\npd.read_csv('myCSV_03.csv',index_col=['colors','status'])\nExplanation: To create a DataFrame with a hierarchical index, add the index_col option. Data file format:\ncolors,status,item1,item2,item3\nblack,up,3,4,6\nblack,down,2,6,7\nwhite,up,5,5,5\nwhite,down,3,3,2\nred,up,2,2,2\nred,down,1,1,4\nEnd of 
explanation\npd.read_csv('myCSV_04.csv',sep='\\s+')\nExplanation: Regexp 解析TXT文件\n使用正则表达式指定sep,来达到解析数据文件的目的。\n正则元素 | 功能\n--- | ---\n. | 换行符以外所有元素\n\\d | 数字\n\\D | 非数字\n\\s | 空白字符\n\\S | 非空白字符\n\\n | 换行符\n\\t | 制表符\n\\uxxxx | 使用十六进制表示ideaUnicode字符 \n数据文件随机以制表符和空格分隔\nwhite red blue green\n1 4 3 2\n2 4 6 7\nEnd of explanation\npd.read_csv('myCSV_05.csv',sep='\\D*',header=None,engine='python')\nExplanation: 读取有字母分隔的数据\n000end123aaa122\n001end125aaa144\nEnd of explanation\npd.read_table('myCSV_06.csv',sep=',',skiprows=[0,1,3,6])\nExplanation: 读取文本文件跳过一些不必要的行\n```\nlog file\nthis file has been generate by automatic system\nwhite,red,blue,green,animal\n12-feb-2015:counting of animals inside the house\n1,3,5,2,cat\n2,4,8,5,dog\n13-feb-2015:counting of animals inside the house\n3,3,6,7,horse\n2,2,8,3,duck\n```\nEnd of explanation\npd.read_csv('myCSV_02.csv',skiprows=[2],nrows=3,header=None)\nExplanation: 从TXT文件中读取部分数据\n只想读文件的一部分,可明确指定解析的行号,这时候用到nrows和skiprows选项,从指定的行开始和从起始行往后读多少行(norow=i)\nEnd of explanation\nout = pd.Series()\ni=0\npieces = pd.read_csv('myCSV_01.csv',chunksize=3)\nfor piece in pieces:\n print piece\n out.set_value(i,piece['white'].sum())\n i += 1\nout\nExplanation: 实例 :\n对于一列数据,每隔两行取一个累加起来,最后把和插入到列的Series对象中\nEnd of explanation\nframe = pd.DataFrame(np.arange(4).reshape((2,2)))\nprint frame.to_html()\nExplanation: 写入文件\nto_csv(filenmae)\nto_csv(filename,index=False,header=False)\nto_csv(filename,na_rep='NaN')\nHTML文件读写\n写入HTML文件\nEnd of explanation\nframe = pd.DataFrame(np.random.random((4,4)),\n index=['white','black','red','blue'],\n columns=['up','down','left','right'])\nframe\ns = ['']\ns.append('MY DATAFRAME')\ns.append('